[GitHub] [hadoop] hemanthboyina commented on pull request #2390: YARN-10442. RM should make sure node label file highly available.

2020-10-29 Thread GitBox


hemanthboyina commented on pull request #2390:
URL: https://github.com/apache/hadoop/pull/2390#issuecomment-719223720


   Thanks @surendralilhore for the PR.
   By default we can have the replication as 0; if the user needs to change
   the file system's default replication, let the user configure it so that
   the file system will set that replication.
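   A minimal sketch of that idea (the config key and helper are hypothetical,
   not from the patch; replication <= 0 falls back to the file system default):

       import java.io.IOException;
       import org.apache.hadoop.conf.Configuration;
       import org.apache.hadoop.fs.FileSystem;
       import org.apache.hadoop.fs.Path;

       class NodeLabelFileReplication {
         // Hypothetical key name, for illustration only.
         static final String REPL_KEY = "yarn.node-labels.fs-store.replication";

         static void applyConfiguredReplication(FileSystem fs,
             Configuration conf, Path nodeLabelFile) throws IOException {
           short repl = (short) conf.getInt(REPL_KEY, 0);
           if (repl > 0) {
             // Explicit user override.
             fs.setReplication(nodeLabelFile, repl);
           }
           // repl == 0: keep the file system's default replication.
         }
       }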






[GitHub] [hadoop] LeonGao91 commented on a change in pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-10-29 Thread GitBox


LeonGao91 commented on a change in pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#discussion_r514877753



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MountVolumeMap.java
##
@@ -0,0 +1,143 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.datanode.fsdataset.impl;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference;
+
+import java.nio.channels.ClosedChannelException;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+/**
+ * MountVolumeMap contains information on the relationship
+ * between the underlying filesystem mount and datanode volumes.
+ *
+ * This is useful when configuring block tiering on the same disk mount
+ * (HDFS-15548). For now, we don't configure multiple volumes with the
+ * same storage type on a mount.
+ */
+@InterfaceAudience.Private
+class MountVolumeMap {
+  private ConcurrentMap<String, Map<StorageType, VolumeInfo>>
+      mountVolumeMapping;
+  private double reservedForArchive;
+
+  MountVolumeMap(Configuration conf) {
+    mountVolumeMapping = new ConcurrentHashMap<>();
+    reservedForArchive = conf.getDouble(
+        DFSConfigKeys.DFS_DATANODE_RESERVE_FOR_ARCHIVE_PERCENTAGE,
+        DFSConfigKeys.DFS_DATANODE_RESERVE_FOR_ARCHIVE_PERCENTAGE_DEFAULT);
+    if (reservedForArchive > 1) {
+      FsDatasetImpl.LOG.warn("Value of reserve-for-archival is > 100%." +
+          " Setting it to 100%.");
+      reservedForArchive = 1;
+    }
+  }
+
+  FsVolumeReference getVolumeRefByMountAndStorageType(String mount,
+      StorageType storageType) {
+    if (mountVolumeMapping != null
+        && mountVolumeMapping.containsKey(mount)) {
+      try {
+        VolumeInfo volumeInfo = mountVolumeMapping
+            .get(mount).getOrDefault(storageType, null);
+        if (volumeInfo != null) {
+          return volumeInfo.getFsVolume().obtainReference();
+        }
+      } catch (ClosedChannelException e) {
+        FsDatasetImpl.LOG.warn("Volume closed when getting volume" +
+            " by mount and storage type: "
+            + mount + ", " + storageType);
+      }
+    }
+    return null;
+  }
+
+  /**
+   * Return the configured capacity ratio; otherwise return 1 as the default.
+   */
+  double getCapacityRatioByMountAndStorageType(String mount,
+      StorageType storageType) {
+    if (mountVolumeMapping != null
+        && mountVolumeMapping.containsKey(mount)) {
+      return mountVolumeMapping
+          .get(mount).getOrDefault(storageType, null).getCapacityRatio();
+    }
+    return 1;
+  }
+
+  void addVolume(FsVolumeImpl volume) {
+    String mount = volume.getMount();
+    if (!mount.isEmpty()) {
+      Map<StorageType, VolumeInfo> storageTypeMap =
+          mountVolumeMapping
+              .getOrDefault(mount, new ConcurrentHashMap<>());
+      if (storageTypeMap.containsKey(volume.getStorageType())) {
+        FsDatasetImpl.LOG.error("Found a storage type that already exists." +
+            " Skipping for now. Please check the disk configuration");
+      } else {
+        VolumeInfo volumeInfo = new VolumeInfo(volume, 1);
+        if (volume.getStorageType() == StorageType.ARCHIVE) {
+          volumeInfo.setCapacityRatio(reservedForArchive);
+        } else if (volume.getStorageType() == StorageType.DISK) {
+          volumeInfo.setCapacityRatio(1 - reservedForArchive);

Review comment:
   That's a good point. I will make the change to ignore the capacity ratio
of a volume if there is only one on the mount.
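   A minimal sketch of that adjustment (illustrative only, not the actual
follow-up change):

       // Inside addVolume(): apply a capacity ratio only when the mount
       // hosts more than one volume; a lone volume keeps its full capacity.
       Map<StorageType, VolumeInfo> volumesOnMount = mountVolumeMapping.get(mount);
       if (volumesOnMount != null && volumesOnMount.size() > 1) {
         volumeInfo.setCapacityRatio(
             volume.getStorageType() == StorageType.ARCHIVE
                 ? reservedForArchive : 1 - reservedForArchive);
       }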






[GitHub] [hadoop] LeonGao91 commented on a change in pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-10-29 Thread GitBox


LeonGao91 commented on a change in pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#discussion_r514875357



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
##
@@ -1503,6 +1503,20 @@
   public static final boolean DFS_PROTECTED_SUBDIRECTORIES_ENABLE_DEFAULT =
   false;
 
+  public static final String DFS_DATANODE_ALLOW_SAME_DISK_TIERING =
+  "dfs.datanode.same-disk-tiering.enabled";
+  public static final boolean DFS_DATANODE_ALLOW_SAME_DISK_TIERING_DEFAULT =
+  false;
+
+  // HDFS-15548: allow DISK/ARCHIVE to be configured on the same disk mount.
+  // Beware that capacity usage might be >100% if data blocks already exist
+  // and the configured ratio is small, which will prevent the volume from
+  // taking new blocks until capacity is balanced out.
+  public static final String DFS_DATANODE_RESERVE_FOR_ARCHIVE_PERCENTAGE =

Review comment:
   The intention is to have one configuration as a "default value" for all
disks, since in normal cases a datanode server comes with the same type of
HDDs. That way we can keep the DN configuration less verbose for most of the
use cases.
   
   However, you are right that we should allow users to configure different
values, and it is a good idea to put it under "dfs.datanode.data.dir".
   I will create a follow-up JIRA to address it, so we can keep this PR from
getting too big, as that could involve quite some change.
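   For illustration, the resulting hdfs-site.xml could look like the sketch
below. The enabled key is the one from the diff above; the reserve-for-archive
key string is truncated in the quoted diff, so its name and value here are
only illustrative:

       <property>
         <name>dfs.datanode.same-disk-tiering.enabled</name>
         <value>true</value>
       </property>
       <property>
         <!-- Illustrative name; see DFS_DATANODE_RESERVE_FOR_ARCHIVE_PERCENTAGE
              in the patch for the real key. 0.4 reserves 40% of the mount
              for the ARCHIVE volume. -->
         <name>dfs.datanode.reserve-for-archive.percentage</name>
         <value>0.4</value>
       </property>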








[jira] [Commented] (HADOOP-17336) Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. followup to abfs

2020-10-29 Thread Sally Zuo (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223396#comment-17223396
 ] 

Sally Zuo commented on HADOOP-17336:


The new patch removed the checkstyle errors.

> Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and 
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix." to branch-2.10
> --
>
> Key: HADOOP-17336
> URL: https://issues.apache.org/jira/browse/HADOOP-17336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.10.1
>Reporter: Sally Zuo
>Assignee: Sally Zuo
>Priority: Major
> Attachments: Hadoop-17336-branch-2.10.001.patch, 
> Hadoop-17336-branch-2.10.002.patch
>
>
> Backport:
>  HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" to branch-2.10
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix."
> from branch-3.2 to branch-2.10






[GitHub] [hadoop] snvijaya commented on pull request #2368: Hadoop-17296. ABFS: Force reads to be always of buffer size

2020-10-29 Thread GitBox


snvijaya commented on pull request #2368:
URL: https://github.com/apache/hadoop/pull/2368#issuecomment-719153256


   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 89, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 458, Failures: 0, Errors: 0, Skipped: 66
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 141
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 89, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 458, Failures: 0, Errors: 0, Skipped: 24
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [ERROR] Failures: 
   [ERROR]   
TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting:171->fuzzyValidate:49
 The actual value 20 is not within the expected range: [5.60, 8.40].
   [INFO] 
   [ERROR] Tests run: 89, Failures: 1, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 458, Failures: 0, Errors: 0, Skipped: 245
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24
   
   NonHNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 89, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 458, Failures: 0, Errors: 0, Skipped: 249
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 141
   
   The test failures reported in Yetus are not related to the ABFS driver. I
have created a JIRA to track them:
   https://issues.apache.org/jira/browse/HADOOP-17325






[jira] [Commented] (HADOOP-17336) Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. followup to abfs

2020-10-29 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223370#comment-17223370
 ] 

Hadoop QA commented on HADOOP-17336:


-1 overall

|| Vote || Subsystem || Runtime || Logfile || Comment ||
| 0 | reexec | 1m 35s | | Docker mode activated. |
|| || || || Prechecks || ||
| +1 | dupname | 0m 0s | | No case conflicting files found. |
| +1 | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | | The patch appears to include 6 new or modified test files. |
|| || || || branch-2.10 Compile Tests || ||
| 0 | mvndep | 0m 56s | | Maven dependency ordering for branch |
| +1 | mvninstall | 9m 44s | | branch-2.10 passed |
| +1 | compile | 13m 44s | | branch-2.10 passed with JDK Oracle Corporation-1.7.0_95-b00 |
| +1 | compile | 11m 16s | | branch-2.10 passed with JDK Private Build-1.8.0_272-8u272-b10-0ubuntu1~16.04-b10 |
| +1 | checkstyle | 1m 54s | | branch-2.10 passed |
| +1 | mvnsite | 1m 50s | | branch-2.10 passed |
| +1 | javadoc | 1m 40s | | branch-2.10 passed with JDK Oracle Corporation-1.7.0_95-b00 |
| +1 | javadoc | 1m 24s | | branch-2.10 passed with JDK Private Build-1.8.0_272-8u272-b10-0ubuntu1~16.04-b10 |
| 0 | spotbugs | 0m 55s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 53s | | branch-2.10 passed |
|| || || || Patch Compile Tests || ||
| 0 | mvndep | 0m 16s | | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 9s | | the patch passed |
| +1 | compile | 13m 5s | | the patch passed with JDK Oracle Corporation-1.7.0_95-b00 |
| +1 | javac | 13m 6s | | the patch passed |
| +1 | compile | 11m 9s | | the patch passed with JDK Private Build-1.8.0_272-8u272-b10-0ubuntu1~16.04-b10 |
| +1 | javac | 11m 9s | | the patch passed |
| +1 | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 | checkstyle | 1m 54s | | the patch passed |
| +1 | mvnsite | 1m 50s | | the patch passed |
| +1 | javadoc | 1m 41s | | the patch passed with JDK Oracle Corporation-1.7.0_95-b00 |
| +1 | javadoc | 1m 27s | | the patch passed with JDK Private Build-1.8.0_272-8u272-b10-0ubuntu1~16.04-b10 |
| +1 | findbugs | 3m 8s | | the patch passed |
|| || || || Other Tests || ||
| -1 | unit | 9m 36s | [/patch-unit-hadoop-common-project_hadoop-common.txt|https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/107/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt] | hadoop-common in

[jira] [Commented] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-10-29 Thread Janus Chow (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223321#comment-17223321
 ] 

Janus Chow commented on HADOOP-17165:
-

[~csun] Thanks for the reply.

I think _refreshCallQueue_ is a good choice for reloading the parameters,
since it creates a new RpcScheduler and FairCallQueue inside.

After rereading the document in HADOOP-15016, one concern that comes to mind
is that users may misuse the service-user config: they may create a lot of
service-users, causing many separate queues to be created in FairCallQueue.
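For illustration, such a setup could look like the sketch below. The exact key
name should be taken from the HADOOP-17165 patch; the one here merely follows
the usual ipc.<port>.decay-scheduler.* convention:

{code}
<property>
  <name>ipc.8020.decay-scheduler.service-users</name>
  <value>hbase,hive</value>
</property>
{code}

The parameters could then be reloaded without a restart via:

{code}
hdfs dfsadmin -refreshCallQueue
{code}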

> Implement service-user feature in DecayRPCScheduler
> ---
>
> Key: HADOOP-17165
> URL: https://issues.apache.org/jira/browse/HADOOP-17165
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HADOOP-17165.001.patch, HADOOP-17165.002.patch, 
> after.png, before.png
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> In our cluster, we want to use FairCallQueue to limit heavy users, but we do
> not want to restrict certain users who submit important requests. This jira
> proposes to implement a service-user feature so that such users are always
> scheduled into the high-priority queue.
> According to HADOOP-9640, the initial concept of FCQ included this feature,
> but it was never implemented.






[GitHub] [hadoop] huangtianhua commented on a change in pull request #2377: HDFS-15624. fix the function of setting quota by storage type

2020-10-29 Thread GitBox


huangtianhua commented on a change in pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#discussion_r514680759



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
##
@@ -89,7 +89,8 @@ public static boolean supports(final LayoutFeature f, final int lv) {
 APPEND_NEW_BLOCK(-62, -61, "Support appending to new block"),
 QUOTA_BY_STORAGE_TYPE(-63, -61, "Support quota for specific storage types"),
 ERASURE_CODING(-64, -61, "Support erasure coding"),
-EXPANDED_STRING_TABLE(-65, -61, "Support expanded string table in fsimage");
+EXPANDED_STRING_TABLE(-65, -61, "Support expanded string table in fsimage"),
+NVDIMM_SUPPORT(-66, -66, "Support NVDIMM storage type");

Review comment:
   I will change the minCompatLV to -61, thanks for your clarification.
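   For reference, a sketch of the entry after that change (the second argument
is the minimum compatible layout version; -61 keeps it compatible with the
other -61 features above):

       +NVDIMM_SUPPORT(-66, -61, "Support NVDIMM storage type");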








[GitHub] [hadoop] huangtianhua commented on a change in pull request #2377: HDFS-15624. fix the function of setting quota by storage type

2020-10-29 Thread GitBox


huangtianhua commented on a change in pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#discussion_r514679551



##
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
##
@@ -33,13 +33,12 @@
 @InterfaceAudience.Public
 @InterfaceStability.Unstable
 public enum StorageType {
-  // sorted by the speed of the storage types, from fast to slow
   RAM_DISK(true, true),
-  NVDIMM(false, true),
   SSD(false, false),
   DISK(false, false),
   ARCHIVE(false, false),
-  PROVIDED(false, false);
+  PROVIDED(false, false),
+  NVDIMM(false, true);
 

Review comment:
   Thanks for the clarification and testing.








[GitHub] [hadoop] shvachko commented on pull request #2403: HDFS-15623. Respect configured values of rpc.engine

2020-10-29 Thread GitBox


shvachko commented on pull request #2403:
URL: https://github.com/apache/hadoop/pull/2403#issuecomment-719118282


   This looks good to me
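   For context, HDFS-15623 is about honoring settings of the form
rpc.engine.<protocol class name>. A sketch of such a setting (the class names
below illustrate the pattern and are not taken from this PR):

       <property>
         <name>rpc.engine.org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB</name>
         <value>org.apache.hadoop.ipc.ProtobufRpcEngine</value>
       </property>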






[GitHub] [hadoop] Jing9 commented on a change in pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-10-29 Thread GitBox


Jing9 commented on a change in pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#discussion_r514666342



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MountVolumeMap.java
##
@@ -0,0 +1,143 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.datanode.fsdataset.impl;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference;
+
+import java.nio.channels.ClosedChannelException;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+/**
+ * MountVolumeMap contains information on the relationship
+ * between the underlying filesystem mount and datanode volumes.
+ *
+ * This is useful when configuring block tiering on the same disk mount
+ * (HDFS-15548). For now, we don't configure multiple volumes with the
+ * same storage type on a mount.
+ */
+@InterfaceAudience.Private
+class MountVolumeMap {
+  private ConcurrentMap<String, Map<StorageType, VolumeInfo>>
+      mountVolumeMapping;
+  private double reservedForArchive;
+
+  MountVolumeMap(Configuration conf) {
+    mountVolumeMapping = new ConcurrentHashMap<>();
+    reservedForArchive = conf.getDouble(
+        DFSConfigKeys.DFS_DATANODE_RESERVE_FOR_ARCHIVE_PERCENTAGE,
+        DFSConfigKeys.DFS_DATANODE_RESERVE_FOR_ARCHIVE_PERCENTAGE_DEFAULT);
+    if (reservedForArchive > 1) {
+      FsDatasetImpl.LOG.warn("Value of reserve-for-archival is > 100%." +
+          " Setting it to 100%.");
+      reservedForArchive = 1;
+    }
+  }
+
+  FsVolumeReference getVolumeRefByMountAndStorageType(String mount,
+      StorageType storageType) {
+    if (mountVolumeMapping != null
+        && mountVolumeMapping.containsKey(mount)) {
+      try {
+        VolumeInfo volumeInfo = mountVolumeMapping
+            .get(mount).getOrDefault(storageType, null);
+        if (volumeInfo != null) {
+          return volumeInfo.getFsVolume().obtainReference();
+        }
+      } catch (ClosedChannelException e) {
+        FsDatasetImpl.LOG.warn("Volume closed when getting volume" +
+            " by mount and storage type: "
+            + mount + ", " + storageType);
+      }
+    }
+    return null;
+  }
+
+  /**
+   * Return the configured capacity ratio; otherwise return 1 as the default.
+   */
+  double getCapacityRatioByMountAndStorageType(String mount,
+      StorageType storageType) {
+    if (mountVolumeMapping != null
+        && mountVolumeMapping.containsKey(mount)) {
+      return mountVolumeMapping
+          .get(mount).getOrDefault(storageType, null).getCapacityRatio();
+    }
+    return 1;
+  }
+
+  void addVolume(FsVolumeImpl volume) {
+    String mount = volume.getMount();
+    if (!mount.isEmpty()) {
+      Map<StorageType, VolumeInfo> storageTypeMap =
+          mountVolumeMapping
+              .getOrDefault(mount, new ConcurrentHashMap<>());
+      if (storageTypeMap.containsKey(volume.getStorageType())) {
+        FsDatasetImpl.LOG.error("Found a storage type that already exists." +
+            " Skipping for now. Please check the disk configuration");
+      } else {
+        VolumeInfo volumeInfo = new VolumeInfo(volume, 1);
+        if (volume.getStorageType() == StorageType.ARCHIVE) {
+          volumeInfo.setCapacityRatio(reservedForArchive);
+        } else if (volume.getStorageType() == StorageType.DISK) {
+          volumeInfo.setCapacityRatio(1 - reservedForArchive);

Review comment:
   What if we have a mount with one single volume? Following the current
implementation, we may assign an unnecessary capacity ratio to it. We only
need to calculate and assign the ratio for volumes sharing the same mount
with others.

##
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
##
@@ -1503,6 +1503,20 @@
   public static final boolean DFS_PROTECTED_SUBDIRECTORIES_ENABLE_DEFAULT =
   false;
 
+  public static final String DFS_DATANODE_ALLOW_SAME_DISK_TIERING =
+  

[jira] [Updated] (HADOOP-17336) Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. followup to abfs cl

2020-10-29 Thread Sally Zuo (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sally Zuo updated HADOOP-17336:
---
Status: Patch Available  (was: Open)

> Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and 
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix." to branch-2.10
> --
>
> Key: HADOOP-17336
> URL: https://issues.apache.org/jira/browse/HADOOP-17336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.10.1
>Reporter: Sally Zuo
>Assignee: Sally Zuo
>Priority: Major
> Attachments: Hadoop-17336-branch-2.10.001.patch, 
> Hadoop-17336-branch-2.10.002.patch
>
>
> Backport:
>  HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" to branch-2.10
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix."
> from branch-3.2 to branch-2.10






[jira] [Updated] (HADOOP-17336) Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. followup to abfs cl

2020-10-29 Thread Sally Zuo (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sally Zuo updated HADOOP-17336:
---
Attachment: (was: HADOOP-17336-branch-2.10.002.patch)

> Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and 
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix." to branch-2.10
> --
>
> Key: HADOOP-17336
> URL: https://issues.apache.org/jira/browse/HADOOP-17336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.10.1
>Reporter: Sally Zuo
>Assignee: Sally Zuo
>Priority: Major
> Attachments: Hadoop-17336-branch-2.10.001.patch, 
> Hadoop-17336-branch-2.10.002.patch
>
>
> Backport:
>  HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" to branch-2.10
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix."
> from branch-3.2 to branch-2.10






[jira] [Updated] (HADOOP-17336) Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. followup to abfs cl

2020-10-29 Thread Sally Zuo (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sally Zuo updated HADOOP-17336:
---
Attachment: Hadoop-17336-branch-2.10.002.patch

> Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and 
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix." to branch-2.10
> --
>
> Key: HADOOP-17336
> URL: https://issues.apache.org/jira/browse/HADOOP-17336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.10.1
>Reporter: Sally Zuo
>Assignee: Sally Zuo
>Priority: Major
> Attachments: Hadoop-17336-branch-2.10.001.patch, 
> Hadoop-17336-branch-2.10.002.patch
>
>
> Backport:
>  HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" to branch-2.10
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix."
> from branch-3.2 to branch-2.10






[jira] [Updated] (HADOOP-17336) Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. followup to abfs cl

2020-10-29 Thread Sally Zuo (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sally Zuo updated HADOOP-17336:
---
Attachment: Hadoop-17336-branch-2.10.001.patch

> Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and 
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix." to branch-2.10
> --
>
> Key: HADOOP-17336
> URL: https://issues.apache.org/jira/browse/HADOOP-17336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.10.1
>Reporter: Sally Zuo
>Assignee: Sally Zuo
>Priority: Major
> Attachments: Hadoop-17336-branch-2.10.001.patch, 
> Hadoop-17336-branch-2.10.002.patch
>
>
> Backport:
>  HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" to branch-2.10
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix."
> from branch-3.2 to branch-2.10






[jira] [Updated] (HADOOP-17336) Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. followup to abfs cl

2020-10-29 Thread Sally Zuo (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sally Zuo updated HADOOP-17336:
---
Attachment: (was: Hadoop-17336-branch-2.10.001.patch)

> Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and 
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix." to branch-2.10
> --
>
> Key: HADOOP-17336
> URL: https://issues.apache.org/jira/browse/HADOOP-17336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.10.1
>Reporter: Sally Zuo
>Assignee: Sally Zuo
>Priority: Major
> Attachments: HADOOP-17336-branch-2.10.002.patch
>
>
> Backport:
>  HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" to branch-2.10
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix."
> from branch-3.2 to branch-2.10






[jira] [Updated] (HADOOP-17336) Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. followup to abfs cl

2020-10-29 Thread Sally Zuo (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sally Zuo updated HADOOP-17336:
---
Status: Open  (was: Patch Available)

> Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and 
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix." to branch-2.10
> --
>
> Key: HADOOP-17336
> URL: https://issues.apache.org/jira/browse/HADOOP-17336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.10.1
>Reporter: Sally Zuo
>Assignee: Sally Zuo
>Priority: Major
> Attachments: HADOOP-17336-branch-2.10.002.patch
>
>
> Backport:
>  HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" to branch-2.10
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix."
> from branch-3.2 to branch-2.10






[jira] [Updated] (HADOOP-17336) Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. followup to abfs cl

2020-10-29 Thread Sally Zuo (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sally Zuo updated HADOOP-17336:
---
Attachment: HADOOP-17336-branch-2.10.002.patch

> Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and 
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix." to branch-2.10
> --
>
> Key: HADOOP-17336
> URL: https://issues.apache.org/jira/browse/HADOOP-17336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.10.1
>Reporter: Sally Zuo
>Assignee: Sally Zuo
>Priority: Major
> Attachments: HADOOP-17336-branch-2.10.002.patch, 
> Hadoop-17336-branch-2.10.001.patch
>
>
> Backport:
>  HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" to branch-2.10
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix."
> from branch-3.2 to branch-2.10






[GitHub] [hadoop] hadoop-yetus commented on pull request #2423: YARN-10472. [branch-3.2.2] Backport YARN-10314. YarnClient throws NoClassDefFoundError for WebSocketException with only shaded client ja

2020-10-29 Thread GitBox


hadoop-yetus commented on pull request #2423:
URL: https://github.com/apache/hadoop/pull/2423#issuecomment-719107330


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  15m 52s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ branch-3.2.2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   4m 10s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 45s |  branch-3.2.2 passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  branch-3.2.2 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  branch-3.2.2 passed  |
   | +1 :green_heart: |  shadedclient  |  44m 19s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  branch-3.2.2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   6m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 13s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 13s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 53s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 13s |  hadoop-client-runtime in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 14s |  hadoop-client-minicluster in the 
patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 26s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  86m 46s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2423/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2423 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 373960e405c2 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2.2 / df8b54b |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~16.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2423/1/testReport/ |
   | Max. process+thread count | 305 (vs. ulimit of 5500) |
   | modules | C: hadoop-client-modules/hadoop-client-runtime 
hadoop-client-modules/hadoop-client-minicluster U: hadoop-client-modules |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2423/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Comment Edited] (HADOOP-17201) Spark job with s3acommitter stuck at the last stage

2020-10-29 Thread James Yu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223273#comment-17223273
 ] 

James Yu edited comment on HADOOP-17201 at 10/29/20, 11:39 PM:
---

[~ste...@apache.org] I turned on both org.apache.hadoop.fs.s3a and
S3AFileSystem logging at the DEBUG threshold and tried several test runs, but
I still didn't see any useful logs or exceptions when the stuck tasks
happened. Is it possible some inner exception was swallowed somehow?

On the other hand, I found that when setting fs.s3a.retry.limit to a lower
number (like 1, 2, or 3), the stuck-task problem seemed to go away magically.
I almost feel that retrying fewer times in the inner retry is a solution to
this annoying problem.

By the way, I used hadoop-aws 3.2.0 and the default file committer in all my tests.
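For anyone who wants to try the same workaround, a sketch of how the setting
can be passed to a Spark job (the value is illustrative):

{code}
spark-submit \
  --conf spark.hadoop.fs.s3a.retry.limit=2 \
  ...
{code}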


was (Author: james...@ymail.com):
[~ste...@apache.org] I turned on both org.apache.hadoop.fs.s3a and S3AFileSytem 
logging with DEBUG threshold and tried several test runs, I still didn't see 
any useful logs or exception when the stuck tasks happened. Is it possible some 
inner exception was swallowed somehow?

On the other hand, I found that when setting fs.s3a.retry.limit to a lower 
number (like 1, 2, or 3),  the stuck task problem seemed to go away magically. 
I almost feel it (retrying fewer times in the inner retry) is a solution to 
this annoying problem.

By the way, I used hadoop-aws 3.2.0 in all my tests.

> Spark job with s3acommitter stuck at the last stage
> ---
>
> Key: HADOOP-17201
> URL: https://issues.apache.org/jira/browse/HADOOP-17201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.1
> Environment: we are on spark 2.4.5/hadoop 3.2.1 with s3a committer.
> spark.hadoop.fs.s3a.committer.magic.enabled: 'true'
> spark.hadoop.fs.s3a.committer.name: magic
>Reporter: Dyno
>Priority: Major
>  Labels: pull-request-available
> Attachments: exec-120.log, exec-125.log, exec-25.log, exec-31.log, 
> exec-36.log, exec-44.log, exec-5.log, exec-64.log, exec-7.log
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> usually our spark job took 1 hour or 2 to finish, occasionally it runs for 
> more than 3 hour and then we know it's stuck and usually the executor has 
> stack like this
> {{
> "Executor task launch worker for task 78620" #265 daemon prio=5 os_prio=0 
> tid=0x7f73e0005000 nid=0x12d waiting on condition [0x7f74cb291000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:349)
>   at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:285)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1457)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1717)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2785)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2751)
>   at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$finalizeMultipartUpload$1(WriteOperationHelper.java:238)
>   at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper$$Lambda$210/1059071691.execute(Unknown
>  Source)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
>   at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
>   at 
> org.apache.hadoop.fs.s3a.Invoker$$Lambda$23/586859139.execute(Unknown Source)
>   at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
>   at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
>   at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper.finalizeMultipartUpload(WriteOperationHelper.java:226)
>   at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper.completeMPUwithRetries(WriteOperationHelper.java:271)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.complete(S3ABlockOutputStream.java:660)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.access$200(S3ABlockOutputStream.java:521)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:385)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.parquet.hadoop.util.HadoopPositionOutputStream.close(HadoopPositionOutputStream.java:64)
>   at 
> org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:685)
>   at 
> 

[jira] [Comment Edited] (HADOOP-17201) Spark job with s3acommitter stuck at the last stage

2020-10-29 Thread James Yu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223273#comment-17223273
 ] 

James Yu edited comment on HADOOP-17201 at 10/29/20, 11:35 PM:
---

[~ste...@apache.org] I turned on both org.apache.hadoop.fs.s3a and S3AFileSytem 
logging with DEBUG threshold and tried several test runs, I still didn't see 
any useful logs or exception when the stuck tasks happened. Is it possible some 
inner exception was swallowed somehow?

On the other hand, I found that when setting fs.s3a.retry.limit to a lower 
number (like 1, 2, or 3),  the stuck task problem seemed to go away magically. 
I almost feel it (retrying fewer times in the inner retry) is a solution to 
this annoying problem.

By the way, I used hadoop-aws 3.2.0 in all my tests.


was (Author: james...@ymail.com):
[~ste...@apache.org] I turned on both org.apache.hadoop.fs.s3a and S3AFileSytem 
logging with DEBUG threshold and tried several test runs, I still didn't see 
any useful logs or exception when the stuck tasks happened. Is it possible some 
inner exception was swollen somehow?

On the other hand, I found that when setting fs.s3a.retry.limit to a lower 
number (like 1, 2, or 3),  the stuck task problem seemed to go away magically. 
I almost feel it (retrying fewer times in the inner retry) is a solution to 
this annoying problem.

By the way, I used hadoop-aws 3.2.0 in all my tests.

> Spark job with s3acommitter stuck at the last stage
> ---
>
> Key: HADOOP-17201
> URL: https://issues.apache.org/jira/browse/HADOOP-17201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.1
> Environment: we are on spark 2.4.5/hadoop 3.2.1 with s3a committer.
> spark.hadoop.fs.s3a.committer.magic.enabled: 'true'
> spark.hadoop.fs.s3a.committer.name: magic
>Reporter: Dyno
>Priority: Major
>  Labels: pull-request-available
> Attachments: exec-120.log, exec-125.log, exec-25.log, exec-31.log, 
> exec-36.log, exec-44.log, exec-5.log, exec-64.log, exec-7.log
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> usually our spark job took 1 hour or 2 to finish, occasionally it runs for 
> more than 3 hour and then we know it's stuck and usually the executor has 
> stack like this
> {{
> "Executor task launch worker for task 78620" #265 daemon prio=5 os_prio=0 
> tid=0x7f73e0005000 nid=0x12d waiting on condition [0x7f74cb291000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:349)
>   at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:285)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1457)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1717)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2785)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2751)
>   at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$finalizeMultipartUpload$1(WriteOperationHelper.java:238)
>   at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper$$Lambda$210/1059071691.execute(Unknown
>  Source)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
>   at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
>   at 
> org.apache.hadoop.fs.s3a.Invoker$$Lambda$23/586859139.execute(Unknown Source)
>   at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
>   at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
>   at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper.finalizeMultipartUpload(WriteOperationHelper.java:226)
>   at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper.completeMPUwithRetries(WriteOperationHelper.java:271)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.complete(S3ABlockOutputStream.java:660)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.access$200(S3ABlockOutputStream.java:521)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:385)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.parquet.hadoop.util.HadoopPositionOutputStream.close(HadoopPositionOutputStream.java:64)
>   at 
> org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:685)
>   at 
> org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:122)
>   at 

[jira] [Commented] (HADOOP-17201) Spark job with s3acommitter stuck at the last stage

2020-10-29 Thread James Yu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223273#comment-17223273
 ] 

James Yu commented on HADOOP-17201:
---

[~ste...@apache.org] I turned on both org.apache.hadoop.fs.s3a and S3AFileSytem 
logging with DEBUG threshold and tried several test runs, I still didn't see 
any useful logs or exception when the stuck tasks happened. Is it possible some 
inner exception was swollen somehow?

On the other hand, I found that when setting fs.s3a.retry.limit to a lower 
number (like 1, 2, or 3),  the stuck task problem seemed to go away magically. 
I almost feel it (retrying fewer times in the inner retry) is a solution to 
this annoying problem.

By the way, I used hadoop-aws 3.2.0 in all my tests.

> Spark job with s3acommitter stuck at the last stage
> ---
>
> Key: HADOOP-17201
> URL: https://issues.apache.org/jira/browse/HADOOP-17201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.1
> Environment: we are on spark 2.4.5/hadoop 3.2.1 with s3a committer.
> spark.hadoop.fs.s3a.committer.magic.enabled: 'true'
> spark.hadoop.fs.s3a.committer.name: magic
>Reporter: Dyno
>Priority: Major
>  Labels: pull-request-available
> Attachments: exec-120.log, exec-125.log, exec-25.log, exec-31.log, 
> exec-36.log, exec-44.log, exec-5.log, exec-64.log, exec-7.log
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> usually our spark job took 1 hour or 2 to finish, occasionally it runs for 
> more than 3 hour and then we know it's stuck and usually the executor has 
> stack like this
> {{
> "Executor task launch worker for task 78620" #265 daemon prio=5 os_prio=0 
> tid=0x7f73e0005000 nid=0x12d waiting on condition [0x7f74cb291000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:349)
>   at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:285)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1457)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1717)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2785)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2751)
>   at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$finalizeMultipartUpload$1(WriteOperationHelper.java:238)
>   at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper$$Lambda$210/1059071691.execute(Unknown
>  Source)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
>   at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
>   at 
> org.apache.hadoop.fs.s3a.Invoker$$Lambda$23/586859139.execute(Unknown Source)
>   at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
>   at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
>   at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper.finalizeMultipartUpload(WriteOperationHelper.java:226)
>   at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper.completeMPUwithRetries(WriteOperationHelper.java:271)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.complete(S3ABlockOutputStream.java:660)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.access$200(S3ABlockOutputStream.java:521)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:385)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.parquet.hadoop.util.HadoopPositionOutputStream.close(HadoopPositionOutputStream.java:64)
>   at 
> org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:685)
>   at 
> org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:122)
>   at 
> org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:165)
>   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:57)
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:74)
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247)
>   at 
> 

[GitHub] [hadoop] smengcl opened a new pull request #2423: YARN-10472. Backport YARN-10314. YarnClient throws NoClassDefFoundErr…

2020-10-29 Thread GitBox


smengcl opened a new pull request #2423:
URL: https://github.com/apache/hadoop/pull/2423


   Cherry-picking https://github.com/apache/hadoop/pull/2412 from `branch-3.2` 
to `branch-3.2.2`.
   
   Submitting this PR to trigger CI.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] smengcl merged pull request #2412: YARN-10472. [branch-3.2] Backport YARN-10314. YarnClient throws NoClassDefFoundError for WebSocketException with only shaded client jars

2020-10-29 Thread GitBox


smengcl merged pull request #2412:
URL: https://github.com/apache/hadoop/pull/2412


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] smengcl commented on pull request #2412: YARN-10472. [branch-3.2] Backport YARN-10314. YarnClient throws NoClassDefFoundError for WebSocketException with only shaded client jars

2020-10-29 Thread GitBox


smengcl commented on pull request #2412:
URL: https://github.com/apache/hadoop/pull/2412#issuecomment-719076242


   Thanks @jojochuang for reviewing. Will merge this shortly.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17336) Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. followup to abfs cl

2020-10-29 Thread Sally Zuo (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sally Zuo updated HADOOP-17336:
---
Description: 
Backport:

 HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" to branch-2.10

HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
followup to abfs close() fix."

from branch-3.2 to branch-2.10

  was:
Backport:

 HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" to branch-2.10

HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
followup to abfs close() fix."

to branch-2.10


> Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and 
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix." to branch-2.10
> --
>
> Key: HADOOP-17336
> URL: https://issues.apache.org/jira/browse/HADOOP-17336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.10.1
>Reporter: Sally Zuo
>Assignee: Sally Zuo
>Priority: Major
> Attachments: Hadoop-17336-branch-2.10.001.patch
>
>
> Backport:
>  HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" to branch-2.10
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix."
> from branch-3.2 to branch-2.10



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17336) Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. followup to abfs

2020-10-29 Thread Sally Zuo (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223194#comment-17223194
 ] 

Sally Zuo commented on HADOOP-17336:


The two failing tests seem to be unrelated to this patch. All hadoop-azure tests 
passed.

> Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and 
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix." to branch-2.10
> --
>
> Key: HADOOP-17336
> URL: https://issues.apache.org/jira/browse/HADOOP-17336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.10.1
>Reporter: Sally Zuo
>Assignee: Sally Zuo
>Priority: Major
> Attachments: Hadoop-17336-branch-2.10.001.patch
>
>
> Backport:
>  HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" to branch-2.10
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix."
> to branch-2.10



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17313) FileSystem.get to support slow-to-instantiate FS clients

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17313?focusedWorklogId=506385&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506385
 ]

ASF GitHub Bot logged work on HADOOP-17313:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 20:27
Start Date: 29/Oct/20 20:27
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#issuecomment-719002193


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 52s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 26s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 17s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 30s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  19m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 16s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 41s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 24s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 46s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 55s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 41s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 56s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 182m 32s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2396/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2396 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 4f592c5ad05f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f17e067d527 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2396/3/testReport/ |
 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2396: HADOOP-17313. FileSystem.get to support slow-to-instantiate FS clients.

2020-10-29 Thread GitBox


hadoop-yetus commented on pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#issuecomment-719002193


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 52s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 26s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 17s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 30s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  19m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 16s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 41s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 24s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 46s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 55s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 41s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 56s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 182m 32s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2396/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2396 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 4f592c5ad05f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f17e067d527 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2396/3/testReport/ |
   | Max. process+thread count | 1946 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2396/3/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.1.3 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Work logged] (HADOOP-16948) ABFS: Support single writer dirs

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16948?focusedWorklogId=506382&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506382
 ]

ASF GitHub Bot logged work on HADOOP-16948:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 20:22
Start Date: 29/Oct/20 20:22
Worklog Time Spent: 10m 
  Work Description: billierinaldi commented on pull request #1925:
URL: https://github.com/apache/hadoop/pull/1925#issuecomment-718999512


   Thanks for reviewing again, @steveloughran! I appreciate your helpful 
comments and will look into those.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 506382)
Time Spent: 50m  (was: 40m)

> ABFS: Support single writer dirs
> 
>
> Key: HADOOP-16948
> URL: https://issues.apache.org/jira/browse/HADOOP-16948
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Minor
>  Labels: abfsactive, pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> This would allow some directories to be configured as single writer 
> directories. The ABFS driver would obtain a lease when creating or opening a 
> file for writing and would automatically renew the lease and release the 
> lease when closing the file.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] billierinaldi commented on pull request #1925: HADOOP-16948. Support single writer dirs.

2020-10-29 Thread GitBox


billierinaldi commented on pull request #1925:
URL: https://github.com/apache/hadoop/pull/1925#issuecomment-718999512


   Thanks for reviewing again, @steveloughran! I appreciate your helpful 
comments and will look into those.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17336) Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. followup to abfs

2020-10-29 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223182#comment-17223182
 ] 

Hadoop QA commented on HADOOP-17336:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
12s{color} |  | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} |  | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch does not contain any @author tags. 
{color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} |  | {color:green} The patch appears to include 6 new or modified 
test files. {color} |
|| || || || {color:brown} branch-2.10 Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
52s{color} |  | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
47s{color} |  | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
41s{color} |  | {color:green} branch-2.10 passed with JDK Oracle 
Corporation-1.7.0_95-b00 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
14s{color} |  | {color:green} branch-2.10 passed with JDK Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~16.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} |  | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} |  | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} |  | {color:green} branch-2.10 passed with JDK Oracle 
Corporation-1.7.0_95-b00 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} |  | {color:green} branch-2.10 passed with JDK Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~16.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
56s{color} |  | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} |  | {color:green} branch-2.10 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} |  | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
5s{color} |  | {color:green} the patch passed with JDK Oracle 
Corporation-1.7.0_95-b00 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
5s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
12s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~16.04-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
12s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} blanks {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch has no blanks issues. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 55s{color} | 
[/results-checkstyle-root.txt|https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/106/artifact/out/results-checkstyle-root.txt]
 | {color:orange} root: The patch generated 4 new + 94 unchanged - 0 fixed = 98 
total (was 94) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} |  | {color:green} the patch passed with JDK Oracle 
Corporation-1.7.0_95-b00 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~16.04-b10 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
9s{color} |  | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 37s{color} 
| 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2421: WIP. HDFS-15643. cleanup the objects in teardown

2020-10-29 Thread GitBox


hadoop-yetus commented on pull request #2421:
URL: https://github.com/apache/hadoop/pull/2421#issuecomment-718955616


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  37m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 43s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  18m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   2m 54s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  26m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   7m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |  34m 27s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |  37m 33s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  24m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  21m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  18m 22s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 49s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  20m 47s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 14s |  |  There were no new 
shelldocs issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   7m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |  37m 52s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 615m  2s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2421/2/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 991m 20s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.applications.distributedshell.TestDistributedShell |
   |   | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
   |   | hadoop.yarn.server.nodemanager.TestNodeManagerReboot |
   |   | hadoop.yarn.server.nodemanager.TestNodeManagerResync |
   |   | 
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch |
   |   | hadoop.yarn.server.nodemanager.TestNodeManagerShutdown |
   |   | 
hadoop.yarn.server.nodemanager.containermanager.monitor.TestContainersMonitor |
   |   | hadoop.security.TestLdapGroupsMapping |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestReadStripedFileWithDNFailure |
   |   | hadoop.hdfs.TestDFSUpgradeFromImage |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | 
hadoop.hdfs.server.blockmanagement.TestAvailableSpaceRackFaultTolerantBPP |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
   |   | hadoop.hdfs.TestFileCreationDelete |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpc |
   |   | 

[jira] [Updated] (HADOOP-17336) Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. followup to abfs cl

2020-10-29 Thread Sally Zuo (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sally Zuo updated HADOOP-17336:
---
Attachment: Hadoop-17336-branch-2.10.001.patch
Status: Patch Available  (was: In Progress)

> Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and 
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix." to branch-2.10
> --
>
> Key: HADOOP-17336
> URL: https://issues.apache.org/jira/browse/HADOOP-17336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.10.1
>Reporter: Sally Zuo
>Assignee: Sally Zuo
>Priority: Major
> Attachments: Hadoop-17336-branch-2.10.001.patch
>
>
> Backport:
>  HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" to branch-2.10
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix."
> to branch-2.10



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17336) Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. followup to abfs cl

2020-10-29 Thread Sally Zuo (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sally Zuo updated HADOOP-17336:
---
Attachment: (was: Hadoop-17336-branch-2.10.001.patch)

> Backport HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" and 
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix." to branch-2.10
> --
>
> Key: HADOOP-17336
> URL: https://issues.apache.org/jira/browse/HADOOP-17336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.10.1
>Reporter: Sally Zuo
>Assignee: Sally Zuo
>Priority: Major
> Attachments: Hadoop-17336-branch-2.10.001.patch
>
>
> Backport:
>  HADOOP-16005-"NativeAzureFileSystem does not support setXAttr" to branch-2.10
> HADOOP-16785. "Improve wasb and abfs resilience on double close() calls. 
> followup to abfs close() fix."
> to branch-2.10



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17335) s3a listing operation will fail in async prefetch if fs closed

2020-10-29 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223088#comment-17223088
 ] 

Steve Loughran commented on HADOOP-17335:
-

(This was in an app which was doing listFiles() on landsat-pds and then closed 
the FS. I don't know how replicable the failure is.)

> s3a listing operation will fail in async prefetch if fs closed
> --
>
> Key: HADOOP-17335
> URL: https://issues.apache.org/jira/browse/HADOOP-17335
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Major
>
> The async prefetch logic in the S3A listing code gets into trouble if the FS 
> is closed while an async listing is in progress.
> In this situation we should think about recognising this condition and 
> converting it into some FS-is-closed exception.
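
To make the suggestion concrete, a sketch of the conversion, assuming a
hypothetical prefetch wrapper (the class, method, and flag names here are
illustrative, not the S3A listing code):

```java
import java.io.IOException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.atomic.AtomicBoolean;

final class ListingPrefetch {
  private final AtomicBoolean closed = new AtomicBoolean(false);

  void close() {
    closed.set(true);
  }

  <T> T awaitNextPage(CompletableFuture<T> pending) throws IOException {
    try {
      return pending.join();
    } catch (CompletionException e) {
      if (closed.get()) {
        // surface an explicit "FS is closed" failure instead of whatever
        // the aborted async fetch happened to throw
        throw new IOException("FileSystem closed during async listing", e);
      }
      throw new IOException("Async listing failed", e.getCause());
    }
  }
}
```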



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16948) ABFS: Support single writer dirs

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16948?focusedWorklogId=506323&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506323
 ]

ASF GitHub Bot logged work on HADOOP-16948:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 17:56
Start Date: 29/Oct/20 17:56
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#1925:
URL: https://github.com/apache/hadoop/pull/1925#discussion_r514450156



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##
@@ -243,6 +252,16 @@ public String getPrimaryGroup() {
 
   @Override
   public void close() throws IOException {
+for (SelfRenewingLease lease : leaseRefs.keySet()) {

Review comment:
   Is this likely to take time? I'm worried about what happens if there are 
network problems and this gets invoked. Ideally this would be done in parallel, 
but ABFS doesn't (yet) have a thread pool.
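
   To make the suggestion concrete, a sketch of a parallel release loop, 
assuming a hypothetical `leaseExecutor` thread pool field; `leaseRefs` and 
`free()` are from the PR:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

// Submit every release first, then wait, so that one slow or broken
// network call cannot serialize the whole close().
List<Future<?>> pending = new ArrayList<>();
for (SelfRenewingLease lease : leaseRefs.keySet()) {
  pending.add(leaseExecutor.submit(() -> {
    lease.free();
    return null;
  }));
}
for (Future<?> f : pending) {
  try {
    f.get();
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    break;
  } catch (ExecutionException e) {
    LOG.warn("Failed to free lease", e);
  }
}
```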

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/SelfRenewingLease.java
##
@@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import com.google.common.base.Preconditions;

Review comment:
   as well as the usual import grouping/ordering, we've gone to shaded 
guava on trunk

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##
@@ -685,10 +712,38 @@ public OutputStream openFileForWrite(final Path path, 
final FileSystem.Statistic
   statistics,
   relativePath,
   offset,
-  populateAbfsOutputStreamContext(isAppendBlob));
+  leaseRefs,
+  populateAbfsOutputStreamContext(isAppendBlob, enableSingleWriter));
 }
   }
 
+  public String acquireLease(final Path path, final int duration) throws 
AzureBlobFileSystemException {

Review comment:
   go on, add some javadocs
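
   For instance, something along these lines would do (wording is only a 
suggestion, and the accepted duration range is an assumption):

```java
/**
 * Acquire a lease on the blob at the given path, blocking the caller
 * until the lease has been granted.
 *
 * @param path path of the file to lease
 * @param duration requested lease duration in seconds (the exact
 *                 accepted range is service-defined)
 * @return the lease ID issued by the service
 * @throws AzureBlobFileSystemException if the lease cannot be acquired
 */
```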

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/SelfRenewingLease.java
##
@@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureBlobFileSystemException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import static 
org.apache.hadoop.fs.azurebfs.services.AbfsErrors.ERR_ACQUIRING_LEASE;
+
+/**
+ * An Azure blob lease that automatically renews itself indefinitely
+ * using a background thread. Use it to synchronize distributed processes,
+ * or to prevent writes to the blob by other processes that don't
+ * have the lease.
+ *
+ * Creating a new Lease object blocks the caller until the Azure blob lease is
+ * acquired.
+ *
+ * Call free() to release the Lease.
+ *
+ * You can use this Lease like a distributed lock. If the holder process
+ * dies, the lease will time out since it won't be renewed.
+ *
+ * See also {@link 

[GitHub] [hadoop] steveloughran commented on a change in pull request #1925: HADOOP-16948. Support single writer dirs.

2020-10-29 Thread GitBox


steveloughran commented on a change in pull request #1925:
URL: https://github.com/apache/hadoop/pull/1925#discussion_r514450156



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##
@@ -243,6 +252,16 @@ public String getPrimaryGroup() {
 
   @Override
   public void close() throws IOException {
+for (SelfRenewingLease lease : leaseRefs.keySet()) {

Review comment:
   Is this likely to take time? I'm worried about what happens if there are 
network problems and this gets invoked. Ideally this would be done in parallel, 
but ABFS doesn't (yet) have a thread pool.

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/SelfRenewingLease.java
##
@@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import com.google.common.base.Preconditions;

Review comment:
   as well as the usual import grouping/ordering, we've gone to shaded 
guava on trunk

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##
@@ -685,10 +712,38 @@ public OutputStream openFileForWrite(final Path path, 
final FileSystem.Statistic
   statistics,
   relativePath,
   offset,
-  populateAbfsOutputStreamContext(isAppendBlob));
+  leaseRefs,
+  populateAbfsOutputStreamContext(isAppendBlob, enableSingleWriter));
 }
   }
 
+  public String acquireLease(final Path path, final int duration) throws 
AzureBlobFileSystemException {

Review comment:
   go on, add some javadocs

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/SelfRenewingLease.java
##
@@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AzureBlobFileSystemException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import static 
org.apache.hadoop.fs.azurebfs.services.AbfsErrors.ERR_ACQUIRING_LEASE;
+
+/**
+ * An Azure blob lease that automatically renews itself indefinitely
+ * using a background thread. Use it to synchronize distributed processes,
+ * or to prevent writes to the blob by other processes that don't
+ * have the lease.
+ *
+ * Creating a new Lease object blocks the caller until the Azure blob lease is
+ * acquired.
+ *
+ * Call free() to release the Lease.
+ *
+ * You can use this Lease like a distributed lock. If the holder process
+ * dies, the lease will time out since it won't be renewed.
+ *
+ * See also {@link org.apache.hadoop.fs.azure.SelfRenewingLease}.
+ */
+public final class SelfRenewingLease {
+
+  private final AbfsClient client;
+  private final Path path;
+  private Thread renewer;
+  private volatile boolean leaseFreed;
+  private String leaseID = null;
+  private static final int LEASE_TIMEOUT = 60;  // Lease timeout in seconds
+
+  // Time to wait to renew lease in milliseconds
+  public static final int LEASE_RENEWAL_PERIOD = 4;
+  public static final 

[GitHub] [hadoop] akiyamaneko removed a comment on pull request #2401: YARN-10469. YARN-UI2 The accuracy of the percentage values in the same chart on the YARN 'Cluster OverView' page are inconsisten

2020-10-29 Thread GitBox


akiyamaneko removed a comment on pull request #2401:
URL: https://github.com/apache/hadoop/pull/2401#issuecomment-714415749


   cc @sunilgovind 
   could you help review it? Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17311) ABFS: Logs should redact SAS signature

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17311?focusedWorklogId=506311&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506311
 ]

ASF GitHub Bot logged work on HADOOP-17311:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 17:38
Start Date: 29/Oct/20 17:38
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#2422:
URL: https://github.com/apache/hadoop/pull/2422#discussion_r514443770



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java
##
@@ -256,7 +256,9 @@ private boolean executeHttpOperation(final int retryCount) 
throws AzureBlobFileS
   }
 } catch (IOException ex) {
   if (ex instanceof UnknownHostException) {
-LOG.warn(String.format("Unknown host name: %s. Retrying to resolve the 
host name...", httpOperation.getUrl().getHost()));
+LOG.warn(String.format(
+"Unknown host name: %s. Retrying to resolve the host name...",
+httpOperation.getHost()));

Review comment:
   While you are there:
   * add a catch for UnknownHostException
   * move from String.format to LOG.warn("unknown host {}", 
httpOperation.getHost())
   

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##
@@ -513,4 +509,45 @@ private void parseListFilesResponse(final InputStream 
stream) throws IOException
   private boolean isNullInputStream(InputStream stream) {
 return stream == null ? true : false;
   }
+
+  @VisibleForTesting
+  public String getSignatureMaskedUrlStr() {
+if (this.maskedUrlStr != null) {
+  return this.maskedUrlStr;
+}
+final String urlStr = url.toString();

Review comment:
   This is complicated enough it could be pulled out into a static method, 
and so its handling fully tested in (new) Unit tests, as well as in the ITests.
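
   A possible shape for the extracted helper — a sketch only; the class and 
method names, and the simplification that a signature value ends at the next 
'&', are assumptions:

```java
import java.util.regex.Pattern;

public final class UrlMasking {
  // Plain and percent-encoded forms of the SAS signature parameter.
  private static final Pattern SIG = Pattern.compile("(sig=)[^&]+");
  private static final Pattern SIG_ENCODED =
      Pattern.compile("(sig%3D)[^&]+", Pattern.CASE_INSENSITIVE);

  private UrlMasking() {
  }

  /** Replace the value of any SAS 'sig' query parameter with XXXX. */
  public static String maskSignature(String url) {
    String masked = SIG.matcher(url).replaceAll("$1XXXX");
    return SIG_ENCODED.matcher(masked).replaceAll("$1XXXX");
  }
}
```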

##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
##
@@ -381,4 +383,39 @@ public void testProperties() throws Exception {
 
 assertArrayEquals(propertyValue, fs.getXAttr(reqPath, propertyName));
   }
+
+  @Test
+  public void testSignatureMask() throws Exception {
+final AzureBlobFileSystem fs = getFileSystem();
+String src = "/testABC/test.xt";
+fs.create(new Path(src));
+AbfsRestOperation abfsHttpRestOperation = fs.getAbfsClient()
+.renamePath(src, "/testABC" + "/abc.txt", null);
+AbfsHttpOperation result = abfsHttpRestOperation.getResult();
+String url = result.getSignatureMaskedUrlStr();
+String encodedUrl = result.getSignatureMaskedEncodedUrlStr();
+Assertions.assertThat(url.substring(url.indexOf("sig=")))
+.describedAs("Signature query param should be masked")
+.startsWith("sig=");
+Assertions.assertThat(encodedUrl.substring(encodedUrl.indexOf("sig%3D")))
+.describedAs("Signature query param should be masked")
+.startsWith("sig%3D");
+  }
+
+  @Test
+  public void testSignatureMaskOnExceptionMessage() {
+final AzureBlobFileSystem fs;
+String msg = null;
+try {
+  fs = getFileSystem();

Review comment:
   use LambdaTestUtils.intercept(). Not only simpler, it will (correctly) 
fail if the rest operation didn't actually raise an exception
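
   Concretely, the suggested pattern looks roughly like this (a sketch; the 
rename arguments mirror the test above, and intercepting IOException relies on 
AzureBlobFileSystemException extending it):

```java
import java.io.IOException;

import static org.apache.hadoop.test.LambdaTestUtils.intercept;

// Fails the test if renamePath() does NOT throw, and returns the caught
// exception so its (signature-masked) message can be asserted on.
IOException ex = intercept(IOException.class, () ->
    fs.getAbfsClient().renamePath("/testABC/test.xt", "/testABC/abc.txt", null));
```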

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##
@@ -36,6 +36,7 @@
 import org.codehaus.jackson.JsonParser;
 import org.codehaus.jackson.JsonToken;
 import org.codehaus.jackson.map.ObjectMapper;
+import com.google.common.annotations.VisibleForTesting;

Review comment:
   now all shaded I'm afraid. Making backporting harder already





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 506311)
Time Spent: 50m  (was: 40m)

> ABFS: Logs should redact SAS signature
> --
>
> Key: HADOOP-17311
> URL: https://issues.apache.org/jira/browse/HADOOP-17311
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, security
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Bilahari T H
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Signature part of the SAS should be redacted for security purposes.




[GitHub] [hadoop] steveloughran commented on a change in pull request #2422: HADOOP-17311. ABFS: Masking SAS signatures from logs

2020-10-29 Thread GitBox


steveloughran commented on a change in pull request #2422:
URL: https://github.com/apache/hadoop/pull/2422#discussion_r514443770



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java
##
@@ -256,7 +256,9 @@ private boolean executeHttpOperation(final int retryCount) 
throws AzureBlobFileS
   }
 } catch (IOException ex) {
   if (ex instanceof UnknownHostException) {
-LOG.warn(String.format("Unknown host name: %s. Retrying to resolve the 
host name...", httpOperation.getUrl().getHost()));
+LOG.warn(String.format(
+"Unknown host name: %s. Retrying to resolve the host name...",
+httpOperation.getHost()));

Review comment:
   While you are there:
   * add a catch for UnknownHostException
   * move from String.format to LOG.warn("unknown host {}", 
httpOperation.getHost())
   

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##
@@ -513,4 +509,45 @@ private void parseListFilesResponse(final InputStream 
stream) throws IOException
   private boolean isNullInputStream(InputStream stream) {
 return stream == null ? true : false;
   }
+
+  @VisibleForTesting
+  public String getSignatureMaskedUrlStr() {
+if (this.maskedUrlStr != null) {
+  return this.maskedUrlStr;
+}
+final String urlStr = url.toString();

Review comment:
   This is complicated enough it could be pulled out into a static method, 
and so its handling fully tested in (new) Unit tests, as well as in the ITests.

##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
##
@@ -381,4 +383,39 @@ public void testProperties() throws Exception {
 
 assertArrayEquals(propertyValue, fs.getXAttr(reqPath, propertyName));
   }
+
+  @Test
+  public void testSignatureMask() throws Exception {
+final AzureBlobFileSystem fs = getFileSystem();
+String src = "/testABC/test.xt";
+fs.create(new Path(src));
+AbfsRestOperation abfsHttpRestOperation = fs.getAbfsClient()
+.renamePath(src, "/testABC" + "/abc.txt", null);
+AbfsHttpOperation result = abfsHttpRestOperation.getResult();
+String url = result.getSignatureMaskedUrlStr();
+String encodedUrl = result.getSignatureMaskedEncodedUrlStr();
+Assertions.assertThat(url.substring(url.indexOf("sig=")))
+.describedAs("Signature query param should be masked")
+.startsWith("sig=");
+Assertions.assertThat(encodedUrl.substring(encodedUrl.indexOf("sig%3D")))
+.describedAs("Signature query param should be masked")
+.startsWith("sig%3D");
+  }
+
+  @Test
+  public void testSignatureMaskOnExceptionMessage() {
+final AzureBlobFileSystem fs;
+String msg = null;
+try {
+  fs = getFileSystem();

Review comment:
   use LambdaTestUtils.intercept(). Not only simpler, it will (correctly) 
fail if the rest operation didn't actually raise an exception

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##
@@ -36,6 +36,7 @@
 import org.codehaus.jackson.JsonParser;
 import org.codehaus.jackson.JsonToken;
 import org.codehaus.jackson.map.ObjectMapper;
+import com.google.common.annotations.VisibleForTesting;

Review comment:
   now all shaded I'm afraid. Making backporting harder already





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-10-29 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223073#comment-17223073
 ] 

Chao Sun commented on HADOOP-17165:
---

[~Symious], yes, as discussed in the previous comments, we still need 
HADOOP-15016 for a complete solution to this case. The POC patch there 
basically allows people to configure separate queues with different weights for 
service users. We may need to think about how to allow admins to dynamically 
adjust that (we may reuse the {{refreshCallQueue}} command), as well as provide 
admins hints on what weights to assign to each queue (perhaps through metrics).

> Implement service-user feature in DecayRPCScheduler
> ---
>
> Key: HADOOP-17165
> URL: https://issues.apache.org/jira/browse/HADOOP-17165
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HADOOP-17165.001.patch, HADOOP-17165.002.patch, 
> after.png, before.png
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> In our cluster, we want to use FairCallQueue to limit heavy users, but not 
> want to restrict certain users who are submitting important requests. This 
> jira proposes to implement the service-user feature that the user is always 
> scheduled high-priority queue.
> According to HADOOP-9640, the initial concept of FCQ had this feature, but it 
> was never implemented in the end.
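
To make the usage concrete, a configuration sketch. The property names below 
follow the usual per-port IPC pattern and this jira's patch, but should be 
verified against the committed documentation; port 8020 and the user list are 
assumptions:

```java
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Use FairCallQueue with the decay scheduler on the RPC port...
conf.set("ipc.8020.callqueue.impl", "org.apache.hadoop.ipc.FairCallQueue");
conf.set("ipc.8020.scheduler.impl", "org.apache.hadoop.ipc.DecayRpcScheduler");
// ...but always schedule these users into the highest-priority queue.
conf.set("ipc.8020.decay-scheduler.service-users", "hbase,oozie");
```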



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17313) FileSystem.get to support slow-to-instantiate FS clients

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17313?focusedWorklogId=506293&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506293
 ]

ASF GitHub Bot logged work on HADOOP-17313:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 17:06
Start Date: 29/Oct/20 17:06
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#issuecomment-718893014


   > Is there particular reasoning for "64" as the default value?
   
   No. I wondered whether to make it smaller or just leave it large. 
   
   Large: no visible impact of this change anywhere, so lowest risk
   Small: should always speed up applications which always use it.
   
   I think I should add a mention of this in the s3a performance doc, so it's 
not forgotten about
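
   For anyone tuning this, a sketch of lowering the limit — the property name 
is an assumption taken from this PR's discussion and should be checked against 
the final patch:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

Configuration conf = new Configuration();
// Bound how many FileSystem instances may be created/initialized at once.
conf.setInt("fs.creation.parallel.count", 16);
FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
```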



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 506293)
Time Spent: 2.5h  (was: 2h 20m)

> FileSystem.get to support slow-to-instantiate FS clients
> 
>
> Key: HADOOP-17313
> URL: https://issues.apache.org/jira/browse/HADOOP-17313
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> A recurrent problem in processes with many worker threads (hive, spark etc) 
> is that calling `FileSystem.get(URI-to-object-store)` triggers the creation 
> and then discard of many FS clients -all but one for the same URL. As well as 
> the direct performance hit, this can exacerbate locking problems and make 
> instantiation a lot slower than it would otherwise be.
> This has been observed with the S3A and ABFS connectors.
> The ultimate solution here would probably be something more complicated to 
> ensure that only one thread was ever creating a connector for a given URL 
> -the rest would wait for it to be initialized. This would (a) reduce 
> contention & CPU, IO network load, and (b) reduce the time for all but the 
> first thread to resume processing to that of the remaining time in 
> .initialize(). This would also benefit the S3A connector.
> We'd need something like
> # A (per-user) map of filesystems being created 
> # split createFileSystem into two: instantiateFileSystem and 
> initializeFileSystem
> # each thread to instantiate the FS, put() it into the new map
> # If there was one already, discard the old one and wait for the new one to 
> be ready via a call to Object.wait()
> # If there wasn't an entry, call initializeFileSystem) and then, finally, 
> call Object.notifyAll(), and move it from the map of filesystems being 
> initialized to the map of created filesystems
> This sounds too straightforward to be true; the troublespots are 
> probably related to race conditions moving entries between the two maps and 
> making sure that no thread will block on the FS being initialized while it 
> has already been initialized (and so wait() will block forever).
> Rather than seek perfection, it may be safest to go for a best-effort 
> optimisation of the number of FS instances created/initialized. That is: it's 
> better to maybe create a few more FS instances than needed than it is to 
> block forever.
> Something is doable here, it's just not quick-and-dirty. Testing will be 
> "fun"; probably best to isolate this new logic somewhere where we can 
> simulate slow starts on one thread with many other threads waiting for it.
> A simpler option would be to have a lock on the construction process: only 
> one FS can be instantiated per user at a a time.
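
A condensed, hedged sketch of that bounded-construction compromise (not the
actual patch; the diff quoted later in this thread is authoritative, and the
`Creator` interface here is a stand-in for `createFileSystem`):

```java
import java.io.IOException;
import java.io.InterruptedIOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

// Sketch: a semaphore caps how many threads may run the expensive create at
// once, and the cache is re-checked after the permit is acquired.
class BoundedCreationCache<K, V> {
  private final Map<K, V> map = new ConcurrentHashMap<>();
  private final Semaphore creatorPermits;

  BoundedCreationCache(int permits) {
    this.creatorPermits = new Semaphore(permits);
  }

  interface Creator<K, V> {
    V create(K key) throws IOException;
  }

  V get(K key, Creator<K, V> creator) throws IOException {
    V v = map.get(key);
    if (v != null) {
      return v;                       // fast path: already created
    }
    try {
      creatorPermits.acquire();       // bound concurrent creations
    } catch (InterruptedException e) {
      throw (IOException) new InterruptedIOException(e.toString()).initCause(e);
    }
    try {
      v = map.get(key);               // re-check after waiting for the permit
      if (v != null) {
        return v;
      }
      V created = creator.create(key);           // the slow initialize()
      V raced = map.putIfAbsent(key, created);   // publish, or detect a race
      return raced == null ? created : raced;    // the patch closes the loser
    } finally {
      creatorPermits.release();
    }
  }
}
```

With one permit this degenerates to the "only one FS per user at a time" lock;
with a very large count it approximates the old unbounded behaviour.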



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2396: HADOOP-17313. FileSystem.get to support slow-to-instantiate FS clients.

2020-10-29 Thread GitBox


steveloughran commented on pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#issuecomment-718893014


   > Is there particular reasoning for "64" as the default value?
   
   No. I wondered whether to make it smaller or just leave it large.
   
   Large: no visible impact of this change anywhere, so lowest risk.
   Small: should always speed up applications which use it.
   
   I think I should add a mention of this in the s3a performance doc, so it's
   not forgotten about.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2396: HADOOP-17313. FileSystem.get to support slow-to-instantiate FS clients.

2020-10-29 Thread GitBox


steveloughran commented on pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#issuecomment-718889722


   ```
   
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:3512:
  checkArgument(permits > 0 , "Invalid value of %s: %s",:33: ',' is 
preceded with whitespace. [NoWhitespaceBefore]
   
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:3553:
  new DurationInfo(LOGGER, false, "Acquiring creator semaphore for 
%s",: Line is longer than 80 characters (found 83). [LineLength]
   
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemCaching.java:465:
private static final Semaphore sem = new Semaphore(1);:36: Name 'sem' must 
match pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName]
   ```
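
   For reference, the flagged lines would plausibly read as follows once fixed
   (sketches only; `key` and `value` are placeholder arguments, and the actual
   patch may wrap lines differently):

   ```java
   // NoWhitespaceBefore: drop the space before the comma.
   checkArgument(permits > 0, "Invalid value of %s: %s", key, value);

   // LineLength: break the constructor call to stay under 80 characters.
   try (DurationInfo d = new DurationInfo(LOGGER, false,
       "Acquiring creator semaphore for %s", uri)) {
     creatorPermits.acquire();
   }

   // ConstantName: rename the static final field to upper case.
   private static final Semaphore SEM = new Semaphore(1);
   ```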



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17313) FileSystem.get to support slow-to-instantiate FS clients

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17313?focusedWorklogId=506291&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506291
 ]

ASF GitHub Bot logged work on HADOOP-17313:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 17:01
Start Date: 29/Oct/20 17:01
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#issuecomment-718889722


   ```
   
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:3512:
  checkArgument(permits > 0 , "Invalid value of %s: %s",:33: ',' is 
preceded with whitespace. [NoWhitespaceBefore]
   
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:3553:
  new DurationInfo(LOGGER, false, "Acquiring creator semaphore for 
%s",: Line is longer than 80 characters (found 83). [LineLength]
   
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemCaching.java:465:
private static final Semaphore sem = new Semaphore(1);:36: Name 'sem' must 
match pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName]
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 506291)
Time Spent: 2h 20m  (was: 2h 10m)

> FileSystem.get to support slow-to-instantiate FS clients
> 
>
> Key: HADOOP-17313
> URL: https://issues.apache.org/jira/browse/HADOOP-17313
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> A recurrent problem in processes with many worker threads (Hive, Spark, etc.)
> is that calling `FileSystem.get(URI-to-object-store)` triggers the creation
> and then discarding of many FS clients, all but one for the same URL. As well
> as the direct performance hit, this can exacerbate locking problems and make
> instantiation a lot slower than it would otherwise be.
> This has been observed with the S3A and ABFS connectors.
> The ultimate solution here would probably be something more complicated to
> ensure that only one thread was ever creating a connector for a given URL;
> the rest would wait for it to be initialized. This would (a) reduce
> contention and CPU, IO and network load, and (b) reduce the time for all but
> the first thread to resume processing to that of the remaining time in
> .initialize(). This would also benefit the S3A connector.
> We'd need something like:
> # A (per-user) map of filesystems being created
> # split createFileSystem into two: instantiateFileSystem and
> initializeFileSystem
> # each thread to instantiate the FS and put() it into the new map
> # If there was one already, discard the old one and wait for the new one to
> be ready via a call to Object.wait()
> # If there wasn't an entry, call initializeFileSystem() and then, finally,
> call Object.notifyAll(), and move it from the map of filesystems being
> initialized to the map of created filesystems
> This sounds too straightforward to be that simple; the trouble spots are
> probably related to race conditions moving entries between the two maps, and
> making sure that no thread will block on the FS being initialized when it
> has already been initialized (so that wait() would block forever).
> Rather than seek perfection, it may be safest to go for a best-effort
> optimisation of the number of FS instances created/initialized. That is: it's
> better to maybe create a few more FS instances than needed than it is to
> block forever.
> Something is doable here, it's just not quick-and-dirty. Testing will be
> "fun"; probably best to isolate this new logic somewhere where we can
> simulate slow starts on one thread with many other threads waiting for it.
> A simpler option would be to have a lock on the construction process: only
> one FS can be instantiated per user at a time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17313) FileSystem.get to support slow-to-instantiate FS clients

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17313?focusedWorklogId=506282&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506282
 ]

ASF GitHub Bot logged work on HADOOP-17313:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 16:49
Start Date: 29/Oct/20 16:49
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#2396:
URL: https://github.com/apache/hadoop/pull/2396#discussion_r514412041



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
##
@@ -3517,33 +3546,86 @@ private FileSystem getInternal(URI uri, Configuration 
conf, Key key)
   if (fs != null) {
 return fs;
   }
-
-  fs = createFileSystem(uri, conf);
-  final long timeout = conf.getTimeDuration(SERVICE_SHUTDOWN_TIMEOUT,
-  SERVICE_SHUTDOWN_TIMEOUT_DEFAULT,
-  ShutdownHookManager.TIME_UNIT_DEFAULT);
-  synchronized (this) { // refetch the lock again
-FileSystem oldfs = map.get(key);
-if (oldfs != null) { // a file system is created while lock is 
releasing
-  fs.close(); // close the new file system
-  return oldfs;  // return the old file system
-}
-
-// now insert the new file system into the map
-if (map.isEmpty()
-&& !ShutdownHookManager.get().isShutdownInProgress()) {
-  ShutdownHookManager.get().addShutdownHook(clientFinalizer,
-  SHUTDOWN_HOOK_PRIORITY, timeout,
-  ShutdownHookManager.TIME_UNIT_DEFAULT);
+  // fs not yet created, acquire lock
+  // to construct an instance.
+  try (DurationInfo d =
+  new DurationInfo(LOGGER, false, "Acquiring creator semaphore for 
%s",
+  uri)) {
+creatorPermits.acquire();
+  } catch (InterruptedException e) {
+// acquisition was interrupted; convert to an IOE.
+throw (IOException)new InterruptedIOException(e.toString())
+.initCause(e);
+  }
+  FileSystem fsToClose = null;
+  try {
+// See if FS was instantiated by another thread while waiting
+// for the permit.
+synchronized (this) {
+  fs = map.get(key);
 }
-fs.key = key;
-map.put(key, fs);
-if (conf.getBoolean(
-FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
-  toAutoClose.add(key);
+if (fs != null) {
+  LOGGER.debug("Filesystem {} created while awaiting semaphore", uri);
+  return fs;
 }
-return fs;
+// create the filesystem
+fs = createFileSystem(uri, conf);
+final long timeout = conf.getTimeDuration(SERVICE_SHUTDOWN_TIMEOUT,
+SERVICE_SHUTDOWN_TIMEOUT_DEFAULT,
+ShutdownHookManager.TIME_UNIT_DEFAULT);
+// any FS to close outside of the synchronized section
+synchronized (this) { // lock on the Cache object
+
+  // see if there is now an entry for the FS, which happens
+  // if another thread's creation overlapped with this one.
+  FileSystem oldfs = map.get(key);
+  if (oldfs != null) {
+// a file system was created in a separate thread.
+// save the FS reference to close outside all locks,
+// and switch to returning the oldFS
+fsToClose = fs;
+fs = oldfs;
+  } else {
+// register the clientFinalizer if needed and shutdown isn't
+// already active
+if (map.isEmpty()
+&& !ShutdownHookManager.get().isShutdownInProgress()) {
+  ShutdownHookManager.get().addShutdownHook(clientFinalizer,
+  SHUTDOWN_HOOK_PRIORITY, timeout,
+  ShutdownHookManager.TIME_UNIT_DEFAULT);
+}
+// insert the new file system into the map
+fs.key = key;
+map.put(key, fs);
+if (conf.getBoolean(
+FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
+  toAutoClose.add(key);
+}
+  }
+} // end of synchronized block
+  } finally {
+// release the creator permit.
+creatorPermits.release();
+  }
+  if (fsToClose != null) {
+LOGGER.debug("Duplicate FS created for {}; discarding {}",
+uri, fs);
+discardedInstances.incrementAndGet();
+// close the new file system
+// note this will briefly remove and reinstate "fsToClose" from
+// the map. It is done in a synchronized block so will not be
+// visible to others.
+fsToClose.close();
   }
+  return fs;
+}
+
+/**
+ * Get the count of discarded instances.
+ * @return the count of discarded instances.
+ */
+long getDiscardedInstances() {

Review comment:
   done





[GitHub] [hadoop] steveloughran commented on a change in pull request #2396: HADOOP-17313. FileSystem.get to support slow-to-instantiate FS clients.

2020-10-29 Thread GitBox


steveloughran commented on a change in pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#discussion_r514412041



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
##
@@ -3517,33 +3546,86 @@ private FileSystem getInternal(URI uri, Configuration 
conf, Key key)
   if (fs != null) {
 return fs;
   }
-
-  fs = createFileSystem(uri, conf);
-  final long timeout = conf.getTimeDuration(SERVICE_SHUTDOWN_TIMEOUT,
-  SERVICE_SHUTDOWN_TIMEOUT_DEFAULT,
-  ShutdownHookManager.TIME_UNIT_DEFAULT);
-  synchronized (this) { // refetch the lock again
-FileSystem oldfs = map.get(key);
-if (oldfs != null) { // a file system is created while lock is 
releasing
-  fs.close(); // close the new file system
-  return oldfs;  // return the old file system
-}
-
-// now insert the new file system into the map
-if (map.isEmpty()
-&& !ShutdownHookManager.get().isShutdownInProgress()) {
-  ShutdownHookManager.get().addShutdownHook(clientFinalizer,
-  SHUTDOWN_HOOK_PRIORITY, timeout,
-  ShutdownHookManager.TIME_UNIT_DEFAULT);
+  // fs not yet created, acquire lock
+  // to construct an instance.
+  try (DurationInfo d =
+  new DurationInfo(LOGGER, false, "Acquiring creator semaphore for 
%s",
+  uri)) {
+creatorPermits.acquire();
+  } catch (InterruptedException e) {
+// acquisition was interrupted; convert to an IOE.
+throw (IOException)new InterruptedIOException(e.toString())
+.initCause(e);
+  }
+  FileSystem fsToClose = null;
+  try {
+// See if FS was instantiated by another thread while waiting
+// for the permit.
+synchronized (this) {
+  fs = map.get(key);
 }
-fs.key = key;
-map.put(key, fs);
-if (conf.getBoolean(
-FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
-  toAutoClose.add(key);
+if (fs != null) {
+  LOGGER.debug("Filesystem {} created while awaiting semaphore", uri);
+  return fs;
 }
-return fs;
+// create the filesystem
+fs = createFileSystem(uri, conf);
+final long timeout = conf.getTimeDuration(SERVICE_SHUTDOWN_TIMEOUT,
+SERVICE_SHUTDOWN_TIMEOUT_DEFAULT,
+ShutdownHookManager.TIME_UNIT_DEFAULT);
+// any FS to close outside of the synchronized section
+synchronized (this) { // lock on the Cache object
+
+  // see if there is now an entry for the FS, which happens
+  // if another thread's creation overlapped with this one.
+  FileSystem oldfs = map.get(key);
+  if (oldfs != null) {
+// a file system was created in a separate thread.
+// save the FS reference to close outside all locks,
+// and switch to returning the oldFS
+fsToClose = fs;
+fs = oldfs;
+  } else {
+// register the clientFinalizer if needed and shutdown isn't
+// already active
+if (map.isEmpty()
+&& !ShutdownHookManager.get().isShutdownInProgress()) {
+  ShutdownHookManager.get().addShutdownHook(clientFinalizer,
+  SHUTDOWN_HOOK_PRIORITY, timeout,
+  ShutdownHookManager.TIME_UNIT_DEFAULT);
+}
+// insert the new file system into the map
+fs.key = key;
+map.put(key, fs);
+if (conf.getBoolean(
+FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
+  toAutoClose.add(key);
+}
+  }
+} // end of synchronized block
+  } finally {
+// release the creator permit.
+creatorPermits.release();
+  }
+  if (fsToClose != null) {
+LOGGER.debug("Duplicate FS created for {}; discarding {}",
+uri, fs);
+discardedInstances.incrementAndGet();
+// close the new file system
+// note this will briefly remove and reinstate "fsToClose" from
+// the map. It is done in a synchronized block so will not be
+// visible to others.
+fsToClose.close();
   }
+  return fs;
+}
+
+/**
+ * Get the count of discarded instances.
+ * @return the count of discarded instances.
+ */
+long getDiscardedInstances() {

Review comment:
   done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: 

[GitHub] [hadoop] steveloughran commented on a change in pull request #2396: HADOOP-17313. FileSystem.get to support slow-to-instantiate FS clients.

2020-10-29 Thread GitBox


steveloughran commented on a change in pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#discussion_r514411189



##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemCaching.java
##
@@ -336,4 +344,134 @@ public void testCacheIncludesURIUserInfo() throws 
Throwable {
 assertNotEquals(keyA, new FileSystem.Cache.Key(
 new URI("wasb://a:passw...@account.blob.core.windows.net"), conf));
   }
+
+
+  /**
+   * Single semaphore: no surplus FS instances will be created
+   * and then discarded.
+   */
+  @Test
+  public void testCacheSingleSemaphoredConstruction() throws Exception {
+FileSystem.Cache cache = semaphoredCache(1);
+createFileSystems(cache, 10);
+Assertions.assertThat(cache.getDiscardedInstances())
+.describedAs("Discarded FS instances")
+.isEqualTo(0);
+  }
+
+  /**
+   * Dual semaphore: thread 2 will get as far as
+   * blocking in the initialize() method while awaiting
+   * thread 1 to complete its initialization.
+   * 
+   * The thread 2 FS instance will be discarded.
+   * All other threads will block for a cache semaphore,
+   * so when they are given an opportunity to proceed,
+   * they will find that an FS instance exists.
+   */
+  @Test
+  public void testCacheDualSemaphoreConstruction() throws Exception {
+FileSystem.Cache cache = semaphoredCache(2);
+createFileSystems(cache, 10);
+Assertions.assertThat(cache.getDiscardedInstances())
+.describedAs("Discarded FS instances")
+.isEqualTo(1);
+  }
+
+  /**
+   * Construct the FS instances in a cache with effectively no
+   * limit on the number of instances which can be created
+   * simultaneously.
+   * 
+   * This is the effective state before HADOOP-17313.
+   * 
+   * All but one thread's FS instance will be discarded.
+   */
+  @Test
+  public void testCacheLargeSemaphoreConstruction() throws Exception {
+FileSystem.Cache cache = semaphoredCache(999);
+int count = 10;
+createFileSystems(cache, count);
+Assertions.assertThat(cache.getDiscardedInstances())
+.describedAs("Discarded FS instances")
+.isEqualTo(count -1);
+  }
+
+  /**
+   * Create a cache with a given semaphore size.
+   * @param semaphores number of semaphores
+   * @return the cache.
+   */
+  private FileSystem.Cache semaphoredCache(final int semaphores) {
+final Configuration conf1 = new Configuration();
+conf1.setInt(FS_CREATION_PARALLEL_COUNT, semaphores);
+FileSystem.Cache cache = new FileSystem.Cache(conf1);
+return cache;
+  }
+
+  /**
+   * Attempt to create {@code count} filesystems in parallel,
+   * then assert that they are all equal.
+   * @param cache cache to use
+   * @param count count of filesystems to instantiate
+   */
+  private void createFileSystems(final FileSystem.Cache cache, final int count)
+  throws URISyntaxException, InterruptedException,
+ java.util.concurrent.ExecutionException {
+final Configuration conf = new Configuration();
+conf.set("fs.blocking.impl", BlockingInitializer.NAME);
+// only one instance can be created at a time.
+URI uri = new URI("blocking://a");
+ListeningExecutorService pool =
+BlockingThreadPoolExecutorService.newInstance(count * 2, 0,
+10, TimeUnit.SECONDS,
+"creation-threads");
+
+// submit a set of requests to create an FS instance.
+// the semaphore will block all but one, and that will block until
+// it is allowed to continue
+List> futures = new ArrayList<>(count);
+
+// acquire the semaphore so blocking all FS instances from
+// being initialized.
+Semaphore semaphore = BlockingInitializer.sem;
+semaphore.acquire();
+
+// su

Review comment:
   cut





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17313) FileSystem.get to support slow-to-instantiate FS clients

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17313?focusedWorklogId=506281&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506281
 ]

ASF GitHub Bot logged work on HADOOP-17313:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 16:48
Start Date: 29/Oct/20 16:48
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#2396:
URL: https://github.com/apache/hadoop/pull/2396#discussion_r514411189



##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemCaching.java
##
@@ -336,4 +344,134 @@ public void testCacheIncludesURIUserInfo() throws 
Throwable {
 assertNotEquals(keyA, new FileSystem.Cache.Key(
 new URI("wasb://a:passw...@account.blob.core.windows.net"), conf));
   }
+
+
+  /**
+   * Single semaphore: no surplus FS instances will be created
+   * and then discarded.
+   */
+  @Test
+  public void testCacheSingleSemaphoredConstruction() throws Exception {
+FileSystem.Cache cache = semaphoredCache(1);
+createFileSystems(cache, 10);
+Assertions.assertThat(cache.getDiscardedInstances())
+.describedAs("Discarded FS instances")
+.isEqualTo(0);
+  }
+
+  /**
+   * Dual semaphore: thread 2 will get as far as
+   * blocking in the initialize() method while awaiting
+   * thread 1 to complete its initialization.
+   * 
+   * The thread 2 FS instance will be discarded.
+   * All other threads will block for a cache semaphore,
+   * so when they are given an opportunity to proceed,
+   * they will find that an FS instance exists.
+   */
+  @Test
+  public void testCacheDualSemaphoreConstruction() throws Exception {
+FileSystem.Cache cache = semaphoredCache(2);
+createFileSystems(cache, 10);
+Assertions.assertThat(cache.getDiscardedInstances())
+.describedAs("Discarded FS instances")
+.isEqualTo(1);
+  }
+
+  /**
+   * Construct the FS instances in a cache with effectively no
+   * limit on the number of instances which can be created
+   * simultaneously.
+   * 
+   * This is the effective state before HADOOP-17313.
+   * 
+   * All but one thread's FS instance will be discarded.
+   */
+  @Test
+  public void testCacheLargeSemaphoreConstruction() throws Exception {
+FileSystem.Cache cache = semaphoredCache(999);
+int count = 10;
+createFileSystems(cache, count);
+Assertions.assertThat(cache.getDiscardedInstances())
+.describedAs("Discarded FS instances")
+.isEqualTo(count -1);
+  }
+
+  /**
+   * Create a cache with a given semaphore size.
+   * @param semaphores number of semaphores
+   * @return the cache.
+   */
+  private FileSystem.Cache semaphoredCache(final int semaphores) {
+final Configuration conf1 = new Configuration();
+conf1.setInt(FS_CREATION_PARALLEL_COUNT, semaphores);
+FileSystem.Cache cache = new FileSystem.Cache(conf1);
+return cache;
+  }
+
+  /**
+   * Attempt to create {@code count} filesystems in parallel,
+   * then assert that they are all equal.
+   * @param cache cache to use
+   * @param count count of filesystems to instantiate
+   */
+  private void createFileSystems(final FileSystem.Cache cache, final int count)
+  throws URISyntaxException, InterruptedException,
+ java.util.concurrent.ExecutionException {
+final Configuration conf = new Configuration();
+conf.set("fs.blocking.impl", BlockingInitializer.NAME);
+// only one instance can be created at a time.
+URI uri = new URI("blocking://a");
+ListeningExecutorService pool =
+BlockingThreadPoolExecutorService.newInstance(count * 2, 0,
+10, TimeUnit.SECONDS,
+"creation-threads");
+
+// submit a set of requests to create an FS instance.
+// the semaphore will block all but one, and that will block until
+// it is allowed to continue
+List> futures = new ArrayList<>(count);
+
+// acquire the semaphore so blocking all FS instances from
+// being initialized.
+Semaphore semaphore = BlockingInitializer.sem;
+semaphore.acquire();
+
+// su

Review comment:
   cut





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 506281)
Time Spent: 2h  (was: 1h 50m)

> FileSystem.get to support slow-to-instantiate FS clients
> 
>
> Key: HADOOP-17313
> URL: https://issues.apache.org/jira/browse/HADOOP-17313
> Project: Hadoop Common
>  Issue Type: Sub-task
>  

[GitHub] [hadoop] vinayakumarb commented on a change in pull request #2377: HDFS-15624. fix the function of setting quota by storage type

2020-10-29 Thread GitBox


vinayakumarb commented on a change in pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#discussion_r514403558



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
##
@@ -89,7 +89,8 @@ public static boolean supports(final LayoutFeature f, final 
int lv) {
 APPEND_NEW_BLOCK(-62, -61, "Support appending to new block"),
 QUOTA_BY_STORAGE_TYPE(-63, -61, "Support quota for specific storage 
types"),
 ERASURE_CODING(-64, -61, "Support erasure coding"),
-EXPANDED_STRING_TABLE(-65, -61, "Support expanded string table in 
fsimage");
+EXPANDED_STRING_TABLE(-65, -61, "Support expanded string table in 
fsimage"),
+NVDIMM_SUPPORT(-66, -66, "Support NVDIMM storage type");

Review comment:
   keep it `-61` for `minCompatLV` itself.
   More details 
[here](https://github.com/apache/hadoop/pull/2377#pullrequestreview-518282561)
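
   Concretely, the suggested entry would read (a sketch following the pattern
   of the neighbouring features in the quoted diff):

   ```java
   // minCompatLV stays -61, like the other features above it.
   NVDIMM_SUPPORT(-66, -61, "Support NVDIMM storage type");
   ```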





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vinayakumarb commented on a change in pull request #2377: HDFS-15624. fix the function of setting quota by storage type

2020-10-29 Thread GitBox


vinayakumarb commented on a change in pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#discussion_r514401987



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
##
@@ -33,13 +33,12 @@
 @InterfaceAudience.Public
 @InterfaceStability.Unstable
 public enum StorageType {
-  // sorted by the speed of the storage types, from fast to slow
   RAM_DISK(true, true),
-  NVDIMM(false, true),
   SSD(false, false),
   DISK(false, false),
   ARCHIVE(false, false),
-  PROVIDED(false, false);
+  PROVIDED(false, false),
+  NVDIMM(false, true);
 

Review comment:
   I have verified `getStoragePolicies()` with older clients.
   Older clients will get the storage policy, but unknown storage types will be
   ignored. So in this case, the ALLNVDIMM storage policy shows empty
   StorageTypes for older clients.
   
   Maybe we need to show the DEFAULT StorageType instead of ignoring the
   unknown StorageTypes. This can also be fixed in a separate Jira. Right now,
   existing clients won't be broken on a `getStoragePolicies()` call with this
   change.
   
   So nothing special is required for that in this PR.
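
   A hedged sketch of that "show a default instead of ignoring" idea on the
   decoding side (`StorageType.DISK` as the fallback is an assumption; nothing
   in this thread names an actual DEFAULT constant on the enum):

   ```java
   import org.apache.hadoop.fs.StorageType;

   // Sketch: tolerate storage-type names an old client does not know,
   // mapping them to a fallback rather than dropping them from the policy.
   final class TolerantStorageTypeParser {
     private TolerantStorageTypeParser() {
     }

     static StorageType parseOrFallback(String name) {
       try {
         return StorageType.valueOf(name);
       } catch (IllegalArgumentException e) {
         return StorageType.DISK; // hypothetical fallback for unknown types
       }
     }
   }
   ```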





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vinayakumarb commented on pull request #2377: HDFS-15624. fix the function of setting quota by storage type

2020-10-29 Thread GitBox


vinayakumarb commented on pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#issuecomment-718871538


   > I think, you can hold on till HDFS-15660 addressed.
   
   I don't think we need to hold this Jira, unless HDFS-15660 can't be solved
   in the same release as the NVDIMM feature altogether.
   Once all comments on this Jira are handled, this can be pushed in.
   HDFS-15660 will support handling of both PROVIDED and NVDIMM storage types
   for older clients in a generic way.
   
   So both can go independently, but we need to make sure both land in the same
   release.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vinayakumarb commented on a change in pull request #2377: HDFS-15624. fix the function of setting quota by storage type

2020-10-29 Thread GitBox


vinayakumarb commented on a change in pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#discussion_r513143383



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
##
@@ -3569,6 +3569,9 @@ void setQuota(String src, long nsQuota, long ssQuota, 
StorageType type)
 if (type != null) {
   requireEffectiveLayoutVersionForFeature(Feature.QUOTA_BY_STORAGE_TYPE);
 }
+if (type == StorageType.NVDIMM) {
+  requireEffectiveLayoutVersionForFeature(Feature.NVDIMM_SUPPORT);
+}

Review comment:
   A similar check needs to be added when the user tries to use an
   NVDIMM-based storage policy.
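
   A minimal sketch of what that guard might test (hypothetical helper; the
   real check would sit in FSNamesystem next to the quota check above and call
   requireEffectiveLayoutVersionForFeature(Feature.NVDIMM_SUPPORT) when this
   returns true):

   ```java
   import java.util.Arrays;
   import org.apache.hadoop.fs.StorageType;

   // Hypothetical helper: does a storage policy's type list include NVDIMM?
   final class NvdimmPolicyCheck {
     private NvdimmPolicyCheck() {
     }

     static boolean usesNvdimm(StorageType[] policyStorageTypes) {
       return Arrays.stream(policyStorageTypes)
           .anyMatch(t -> t == StorageType.NVDIMM);
     }
   }
   ```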

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
##
@@ -33,13 +33,12 @@
 @InterfaceAudience.Public
 @InterfaceStability.Unstable
 public enum StorageType {
-  // sorted by the speed of the storage types, from fast to slow
   RAM_DISK(true, true),
-  NVDIMM(false, true),
   SSD(false, false),
   DISK(false, false),
   ARCHIVE(false, false),
-  PROVIDED(false, false);
+  PROVIDED(false, false),
+  NVDIMM(false, true);
 

Review comment:
   The setQuota() check will only block during a rolling upgrade. But once it
   is finalized, old clients will still experience failures.
   Anyway, that is an existing problem, even before this feature, and can be
   handled in a separate Jira.
   
   More details about the failure: a 2.10.1 client asking for quota usage from
   a 3.3.0 namenode.
   ```
   $ bin/hdfs dfs -fs hdfs://namenode:8020/ -count -q -t -h /
   count: Message missing required fields: 
usage.typeQuotaInfos.typeQuotaInfo[3].type
   ```
   The above issue occurs because the 2.10.1 client doesn't know about the
   PROVIDED StorageType.
   A similar problem will occur for NVDIMM with clients from previous versions.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17271) S3A statistics to support IOStatistics

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17271?focusedWorklogId=506263&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506263
 ]

ASF GitHub Bot logged work on HADOOP-17271:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 16:05
Start Date: 29/Oct/20 16:05
Worklog Time Spent: 10m 
  Work Description: steveloughran edited a comment on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-718847026


   test run failure 
   
   ```
mvit -Dparallel-tests -DtestsThreadCount=4 -Dmarkers=keep -Ds3guard 
-Ddynamo  -Dfs.s3a.directory.marker.audit=true -Dscale
   ```
   
   ```
   [INFO] Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
   [ERROR] Tests run: 20, Failures: 0, Errors: 1, Skipped: 1, Time elapsed: 
49.926 s <<< FAILURE! - in 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreAuthoritativeMode
   [ERROR] 
testRenameDirMarksDestAsAuth(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreAuthoritativeMode)
  Time elapsed: 2.451 s  <<< ERROR!
   
`s3a://stevel-ireland/fork-0002/test/base/auth/testRenameDirMarksDestAsAuth/dest/subdir':
 Directory is not marked as authoritative in the S3Guard store
at 
org.apache.hadoop.fs.s3a.s3guard.AuthoritativeAuditOperation.verifyAuthDir(AuthoritativeAuditOperation.java:111)
at 
org.apache.hadoop.fs.s3a.s3guard.AuthoritativeAuditOperation.executeAudit(AuthoritativeAuditOperation.java:183)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreAuthoritativeMode.expectAuthRecursive(ITestDynamoDBMetadataStoreAuthoritativeMode.java:879)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreAuthoritativeMode.testRenameDirMarksDestAsAuth(ITestDynamoDBMetadataStoreAuthoritativeMode.java:555)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
   
   [INFO] Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcu
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 506263)
Time Spent: 5h 40m  (was: 5.5h)

> S3A statistics to support IOStatistics
> --
>
> Key: HADOOP-17271
> URL: https://issues.apache.org/jira/browse/HADOOP-17271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> S3A to rework statistics with:
> * API + implementation split of the interfaces used by subcomponents when
> reporting stats
> * S3A Instrumentation to implement all the interfaces
> * streams, etc. to all implement IOStatisticsSource and serve stats to callers
> * Add some tracking of durations of remote requests
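
A hedged sketch of what a caller-side probe could look like once streams
implement IOStatisticsSource (class and method names follow the IOStatistics
API this subtask builds on; treat exact signatures as assumptions):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.statistics.IOStatistics;
import org.apache.hadoop.fs.statistics.IOStatisticsLogging;
import org.apache.hadoop.fs.statistics.IOStatisticsSupport;

public class IOStatsProbe {
  public static void main(String[] args) throws IOException {
    Path path = new Path(args[0]);                    // e.g. an s3a:// URI
    FileSystem fs = path.getFileSystem(new Configuration());
    try (FSDataInputStream in = fs.open(path)) {
      in.read();
      // Null-safe retrieval: returns the stream's statistics if it
      // implements IOStatisticsSource, otherwise null.
      IOStatistics stats = IOStatisticsSupport.retrieveIOStatistics(in);
      System.out.println(IOStatisticsLogging.ioStatisticsToString(stats));
    }
  }
}
```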



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran edited a comment on pull request #2324: HADOOP-17271. S3A statistics to support IOStatistics

2020-10-29 Thread GitBox


steveloughran edited a comment on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-718847026


   test run failure 
   
   ```
mvit -Dparallel-tests -DtestsThreadCount=4 -Dmarkers=keep -Ds3guard 
-Ddynamo  -Dfs.s3a.directory.marker.audit=true -Dscale
   ```
   
   ```
   [INFO] Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
   [ERROR] Tests run: 20, Failures: 0, Errors: 1, Skipped: 1, Time elapsed: 
49.926 s <<< FAILURE! - in 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreAuthoritativeMode
   [ERROR] 
testRenameDirMarksDestAsAuth(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreAuthoritativeMode)
  Time elapsed: 2.451 s  <<< ERROR!
   
`s3a://stevel-ireland/fork-0002/test/base/auth/testRenameDirMarksDestAsAuth/dest/subdir':
 Directory is not marked as authoritative in the S3Guard store
at 
org.apache.hadoop.fs.s3a.s3guard.AuthoritativeAuditOperation.verifyAuthDir(AuthoritativeAuditOperation.java:111)
at 
org.apache.hadoop.fs.s3a.s3guard.AuthoritativeAuditOperation.executeAudit(AuthoritativeAuditOperation.java:183)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreAuthoritativeMode.expectAuthRecursive(ITestDynamoDBMetadataStoreAuthoritativeMode.java:879)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreAuthoritativeMode.testRenameDirMarksDestAsAuth(ITestDynamoDBMetadataStoreAuthoritativeMode.java:555)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
   
   [INFO] Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcu
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16881) PseudoAuthenticator does not disconnect HttpURLConnection leading to CLOSE_WAIT cnxns

2020-10-29 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17222989#comment-17222989
 ] 

Steve Loughran commented on HADOOP-16881:
-

no idea

> PseudoAuthenticator does not disconnect HttpURLConnection leading to 
> CLOSE_WAIT cnxns
> -
>
> Key: HADOOP-16881
> URL: https://issues.apache.org/jira/browse/HADOOP-16881
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>
> PseudoAuthenticator and KerberosAuthentication do not disconnect the 
> HttpURLConnection, leading to a lot of CLOSE_WAIT connections. The YARN-8414 
> issue is observed due to this.
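
A minimal sketch of the missing cleanup (illustrative only, not the
authenticator's actual code):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class ConnectionCleanupExample {
  // Release the underlying socket once the response has been read, so the
  // connection does not linger in CLOSE_WAIT.
  public static int fetchStatus(URL url) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try {
      conn.connect();
      return conn.getResponseCode();
    } finally {
      conn.disconnect(); // the step this issue reports as missing
    }
  }
}
```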



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17271) S3A statistics to support IOStatistics

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17271?focusedWorklogId=506256&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506256
 ]

ASF GitHub Bot logged work on HADOOP-17271:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 15:54
Start Date: 29/Oct/20 15:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-717947867


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  4s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 46 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 44s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 27s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 16s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 53s |  |  trunk passed  |
   | -0 :warning: |  patch  |   1m 38s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  0s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  19m 24s |  |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2048 unchanged - 
1 fixed = 2048 total (was 2049)  |
   | +1 :green_heart: |  compile  |  17m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 13s |  |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1941 unchanged - 
1 fixed = 1941 total (was 1942)  |
   | -0 :warning: |  checkstyle  |   2m 46s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/18/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 14 new + 272 unchanged - 25 fixed = 286 total 
(was 297)  |
   | +1 :green_heart: |  mvnsite  |   3m 16s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   1m  3s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/18/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 43s | 
[/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/18/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private 

[jira] [Work logged] (HADOOP-17271) S3A statistics to support IOStatistics

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17271?focusedWorklogId=506257&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506257
 ]

ASF GitHub Bot logged work on HADOOP-17271:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 15:54
Start Date: 29/Oct/20 15:54
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-718847026


   test run failure 
   ```
   [INFO] Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
   [ERROR] Tests run: 20, Failures: 0, Errors: 1, Skipped: 1, Time elapsed: 
49.926 s <<< FAILURE! - in 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreAuthoritativeMode
   [ERROR] 
testRenameDirMarksDestAsAuth(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreAuthoritativeMode)
  Time elapsed: 2.451 s  <<< ERROR!
   
`s3a://stevel-ireland/fork-0002/test/base/auth/testRenameDirMarksDestAsAuth/dest/subdir':
 Directory is not marked as authoritative in the S3Guard store
at 
org.apache.hadoop.fs.s3a.s3guard.AuthoritativeAuditOperation.verifyAuthDir(AuthoritativeAuditOperation.java:111)
at 
org.apache.hadoop.fs.s3a.s3guard.AuthoritativeAuditOperation.executeAudit(AuthoritativeAuditOperation.java:183)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreAuthoritativeMode.expectAuthRecursive(ITestDynamoDBMetadataStoreAuthoritativeMode.java:879)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreAuthoritativeMode.testRenameDirMarksDestAsAuth(ITestDynamoDBMetadataStoreAuthoritativeMode.java:555)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
   
   [INFO] Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcu
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 506257)
Time Spent: 5.5h  (was: 5h 20m)

> S3A statistics to support IOStatistics
> --
>
> Key: HADOOP-17271
> URL: https://issues.apache.org/jira/browse/HADOOP-17271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> S3A to rework statistics with:
> * API + implementation split of the interfaces used by subcomponents when
> reporting stats
> * S3A Instrumentation to implement all the interfaces
> * streams, etc. to all implement IOStatisticsSource and serve stats to callers
> * Add some tracking of durations of remote requests



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17271) S3A statistics to support IOStatistics

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17271?focusedWorklogId=506253&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506253
 ]

ASF GitHub Bot logged work on HADOOP-17271:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 15:53
Start Date: 29/Oct/20 15:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-704583811


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  33m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  4s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 41 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   5m 44s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  19m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 24s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 25s |  |  trunk passed  |
   | -0 :warning: |  patch  |   1m 47s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  21m 18s |  |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2048 unchanged - 
1 fixed = 2048 total (was 2049)  |
   | +1 :green_heart: |  compile  |  19m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  19m  0s |  |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1941 unchanged - 
1 fixed = 1941 total (was 1942)  |
   | -0 :warning: |  checkstyle  |   2m 58s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/12/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 11 new + 267 unchanged - 25 fixed = 278 total 
(was 292)  |
   | +1 :green_heart: |  mvnsite  |   3m 29s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  hadoop-common in the patch 
passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  hadoop-mapreduce-client-core 
in the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 
0 unchanged - 4 fixed = 0 total (was 4)  |
   | +1 :green_heart: |  findbugs  |   5m 54s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 49s |  |  hadoop-common in the patch 

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2324: HADOOP-17271. S3A statistics to support IOStatistics

2020-10-29 Thread GitBox


hadoop-yetus removed a comment on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-717947867


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  4s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 46 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 44s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 27s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 16s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 53s |  |  trunk passed  |
   | -0 :warning: |  patch  |   1m 38s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  0s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  19m 24s |  |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2048 unchanged - 
1 fixed = 2048 total (was 2049)  |
   | +1 :green_heart: |  compile  |  17m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 13s |  |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1941 unchanged - 
1 fixed = 1941 total (was 1942)  |
   | -0 :warning: |  checkstyle  |   2m 46s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/18/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 14 new + 272 unchanged - 25 fixed = 286 total 
(was 297)  |
   | +1 :green_heart: |  mvnsite  |   3m 16s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   1m  3s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/18/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 43s | 
[/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/18/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 31 new 
+ 88 unchanged - 0 fixed = 119 total (was 88)  |
   | -1 :x: |  findbugs  |   1m 24s | 
[/new-findbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/18/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html)
 |  hadoop-tools/hadoop-aws generated 2 new + 0 unchanged - 0 fixed = 2 total 
(was 0)  |
    _ Other Tests _ |
   | +1 :green_heart: 

[jira] [Work logged] (HADOOP-17271) S3A statistics to support IOStatistics

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17271?focusedWorklogId=506254&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506254
 ]

ASF GitHub Bot logged work on HADOOP-17271:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 15:53
Start Date: 29/Oct/20 15:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-707319515


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  4s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 43 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   6m 11s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 17s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  16m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 18s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 52s |  |  trunk passed  |
   | -0 :warning: |  patch  |   1m 38s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  18m 57s |  |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2048 unchanged - 
1 fixed = 2048 total (was 2049)  |
   | +1 :green_heart: |  compile  |  17m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m  8s |  |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1941 unchanged - 
1 fixed = 1941 total (was 1942)  |
   | -0 :warning: |  checkstyle  |   2m 46s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/16/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 5 new + 267 unchanged - 25 fixed = 272 total (was 
292)  |
   | +1 :green_heart: |  mvnsite  |   3m 15s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  hadoop-common in the patch 
passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  hadoop-mapreduce-client-core 
in the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 
0 unchanged - 4 fixed = 0 total (was 4)  |
   | +1 :green_heart: |  findbugs  |   5m 55s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m  3s |  |  hadoop-common in the patch 
passed. 

[GitHub] [hadoop] steveloughran commented on pull request #2324: HADOOP-17271. S3A statistics to support IOStatistics

2020-10-29 Thread GitBox


steveloughran commented on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-718847026


   test run failure 
   ```
   [INFO] Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
   [ERROR] Tests run: 20, Failures: 0, Errors: 1, Skipped: 1, Time elapsed: 
49.926 s <<< FAILURE! - in 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreAuthoritativeMode
   [ERROR] 
testRenameDirMarksDestAsAuth(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreAuthoritativeMode)
  Time elapsed: 2.451 s  <<< ERROR!
   
`s3a://stevel-ireland/fork-0002/test/base/auth/testRenameDirMarksDestAsAuth/dest/subdir':
 Directory is not marked as authoritative in the S3Guard store
at 
org.apache.hadoop.fs.s3a.s3guard.AuthoritativeAuditOperation.verifyAuthDir(AuthoritativeAuditOperation.java:111)
at 
org.apache.hadoop.fs.s3a.s3guard.AuthoritativeAuditOperation.executeAudit(AuthoritativeAuditOperation.java:183)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreAuthoritativeMode.expectAuthRecursive(ITestDynamoDBMetadataStoreAuthoritativeMode.java:879)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreAuthoritativeMode.testRenameDirMarksDestAsAuth(ITestDynamoDBMetadataStoreAuthoritativeMode.java:555)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
   
   [INFO] Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcu
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2324: HADOOP-17271. S3A statistics to support IOStatistics

2020-10-29 Thread GitBox


hadoop-yetus removed a comment on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-707319515


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  4s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 43 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   6m 11s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 17s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  16m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 18s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 52s |  |  trunk passed  |
   | -0 :warning: |  patch  |   1m 38s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  18m 57s |  |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2048 unchanged - 
1 fixed = 2048 total (was 2049)  |
   | +1 :green_heart: |  compile  |  17m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m  8s |  |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1941 unchanged - 
1 fixed = 1941 total (was 1942)  |
   | -0 :warning: |  checkstyle  |   2m 46s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/16/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 5 new + 267 unchanged - 25 fixed = 272 total (was 
292)  |
   | +1 :green_heart: |  mvnsite  |   3m 15s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  hadoop-common in the patch 
passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  hadoop-mapreduce-client-core 
in the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 
0 unchanged - 4 fixed = 0 total (was 4)  |
   | +1 :green_heart: |  findbugs  |   5m 55s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m  3s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   7m  0s |  |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 39s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 56s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 193m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2324: HADOOP-17271. S3A statistics to support IOStatistics

2020-10-29 Thread GitBox


hadoop-yetus removed a comment on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-704583811


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  33m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  4s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 41 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   5m 44s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  19m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 24s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 25s |  |  trunk passed  |
   | -0 :warning: |  patch  |   1m 47s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  21m 18s |  |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2048 unchanged - 
1 fixed = 2048 total (was 2049)  |
   | +1 :green_heart: |  compile  |  19m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  19m  0s |  |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1941 unchanged - 
1 fixed = 1941 total (was 1942)  |
   | -0 :warning: |  checkstyle  |   2m 58s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/12/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 11 new + 267 unchanged - 25 fixed = 278 total 
(was 292)  |
   | +1 :green_heart: |  mvnsite  |   3m 29s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  hadoop-common in the patch 
passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  hadoop-mapreduce-client-core 
in the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 
0 unchanged - 4 fixed = 0 total (was 4)  |
   | +1 :green_heart: |  findbugs  |   5m 54s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 49s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   7m 27s |  |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 52s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 241m 55s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 

[jira] [Work logged] (HADOOP-16830) Add public IOStatistics API

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16830?focusedWorklogId=506247&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506247
 ]

ASF GitHub Bot logged work on HADOOP-16830:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 15:42
Start Date: 29/Oct/20 15:42
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2323:
URL: https://github.com/apache/hadoop/pull/2323#issuecomment-718839259


   This PR won't merge right now due to the move to shaded Preconditions. The PR 
#2324 has been rebased; I'll be rebuilding this PR from that one as a trunk + 
one big fat patch



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 506247)
Time Spent: 8.5h  (was: 8h 20m)

> Add public IOStatistics API
> ---
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8.5h
>  Remaining Estimate: 0h
>
> Applications like to collect the statistics which specific operations take, 
> by collecting exactly those operations done during the execution of FS API 
> calls by their individual worker threads, and returning these to their job 
> driver
> * S3A has a statistics API for some streams, but it's a non-standard one; 
> Impala  can't use it
> * FileSystem storage statistics are public, but as they aren't cross-thread, 
> they don't aggregate properly
> Proposed
> # A new IOStatistics interface to serve up statistics
> # S3A to implement
> # other stores to follow
> # Pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to think about how best to offer an API for operation context 
> stats, and how to actually implement.
> ThreadLocal isn't enough because the helper threads need to update on the 
> thread local value of the instigator
> My Initial PoC doesn't address that issue, but it shows what I'm thinking of
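
To make the proposed shape concrete, here is a minimal sketch of what such an
interface pair could look like; the names (IOStatistics, IOStatisticsSource,
counters) are illustrative assumptions for this sketch, not the committed
HADOOP-16830 surface:

{code:java}
import java.util.Map;

/** Illustrative sketch only; not the committed HADOOP-16830 API. */
interface IOStatistics {
  /** Snapshot of counter name to value, aggregated across worker threads. */
  Map<String, Long> counters();
}

/** Implemented by anything that can serve statistics: streams, filesystems. */
interface IOStatisticsSource {
  /** May return null when an implementation keeps no statistics. */
  IOStatistics getIOStatistics();
}

/** Example: a job driver probing a stream for statistics after a read. */
final class IOStatisticsDemo {
  static void report(Object candidate) {
    if (candidate instanceof IOStatisticsSource) {
      IOStatistics stats = ((IOStatisticsSource) candidate).getIOStatistics();
      if (stats != null) {
        stats.counters().forEach((name, value) ->
            System.out.println(name + " = " + value));
      }
    }
  }
}
{code}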



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org




[jira] [Work logged] (HADOOP-17271) S3A statistics to support IOStatistics

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17271?focusedWorklogId=506243&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506243
 ]

ASF GitHub Bot logged work on HADOOP-17271:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 15:24
Start Date: 29/Oct/20 15:24
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-718827022


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 12s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  4s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 46 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 34s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 33s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 19s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 54s |  |  trunk passed  |
   | -0 :warning: |  patch  |   1m 40s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  19m 17s |  |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2048 unchanged - 
1 fixed = 2048 total (was 2049)  |
   | +1 :green_heart: |  compile  |  17m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 16s |  |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1941 unchanged - 
1 fixed = 1941 total (was 1942)  |
   | -0 :warning: |  checkstyle  |   2m 43s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/19/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 12 new + 272 unchanged - 25 fixed = 284 total 
(was 297)  |
   | +1 :green_heart: |  mvnsite  |   3m 19s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   1m  4s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/19/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 44s | 
[/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/19/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2324: HADOOP-17271. S3A statistics to support IOStatistics

2020-10-29 Thread GitBox


hadoop-yetus commented on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-718827022


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 12s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  4s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 46 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 34s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 33s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 19s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 54s |  |  trunk passed  |
   | -0 :warning: |  patch  |   1m 40s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  19m 17s |  |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2048 unchanged - 
1 fixed = 2048 total (was 2049)  |
   | +1 :green_heart: |  compile  |  17m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 16s |  |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1941 unchanged - 
1 fixed = 1941 total (was 1942)  |
   | -0 :warning: |  checkstyle  |   2m 43s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/19/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 12 new + 272 unchanged - 25 fixed = 284 total 
(was 297)  |
   | +1 :green_heart: |  mvnsite  |   3m 19s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   1m  4s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/19/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 44s | 
[/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/19/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 31 new 
+ 88 unchanged - 0 fixed = 119 total (was 88)  |
   | -1 :x: |  findbugs  |   1m 25s | 
[/new-findbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/19/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html)
 |  hadoop-tools/hadoop-aws generated 2 new + 0 unchanged - 0 fixed = 2 total 
(was 0)  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2368: Hadoop-17296. ABFS: Force reads to be always of buffer size

2020-10-29 Thread GitBox


hadoop-yetus commented on pull request #2368:
URL: https://github.com/apache/hadoop/pull/2368#issuecomment-718737087


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   2m  7s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2368/6/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   2m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 54s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 53s |  |  trunk passed  |
   | -0 :warning: |  patch  |   1m 13s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 16s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 31s | 
[/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2368/6/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  51m 12s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemContractMocked |
   |   | hadoop.fs.azure.TestOutOfBandAzureBlobOperations |
   |   | hadoop.fs.azure.TestBlobMetadata |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck |
   |   | hadoop.fs.azure.TestWasbFsck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemConcurrency |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2368/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2368 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux fc250b1997db 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f17e067d527 |
   | Default Java | Private 

[jira] [Commented] (HADOOP-16881) PseudoAuthenticator does not disconnect HttpURLConnection leading to CLOSE_WAIT cnxns

2020-10-29 Thread Attila Magyar (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17222866#comment-17222866
 ] 

Attila Magyar commented on HADOOP-16881:


Hey @[~ste...@apache.org], do you have any insights on this? There seems to be 
a socket leak in KerberosAuthenticator/PseudoAuthenticator which causes 
CLOSE_WAIT sockets to pile up without ever being cleaned up. Adding explicit 
disconnect() calls to the HTTP client solves the issue; however, it might stop 
the connection pool from reusing sockets (a pooling-friendly alternative is 
sketched after the patch below).

 
{code:java}
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
index 3bfa349880c..c035dd44ce0 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
@@ -183,8 +183,9 @@ public void authenticate(URL url, AuthenticatedURL.Token 
token)
 if (!token.isSet()) {
   this.url = url;
   base64 = new Base64(0);
+  HttpURLConnection conn = null;
   try {
-HttpURLConnection conn = token.openConnection(url, connConfigurator);
+conn = token.openConnection(url, connConfigurator);
 conn.setRequestMethod(AUTH_HTTP_METHOD);
 conn.connect();
 
@@ -218,6 +219,10 @@ public void authenticate(URL url, AuthenticatedURL.Token 
token)
   } catch (AuthenticationException ex){
 throw wrapExceptionWithMessage(ex,
 "Error while authenticating with endpoint: " + url);
+  } finally {
+if (conn != null) {
+  conn.disconnect();
+}
   }
 }
   }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
index 4e2ee4fdbea..8546a76c1af 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
@@ -312,8 +312,9 @@ private Map doDelegationTokenOperation(URL url,
   dt = ((DelegationTokenAuthenticatedURL.Token) 
token).getDelegationToken();
   ((DelegationTokenAuthenticatedURL.Token) token).setDelegationToken(null);
 }
+HttpURLConnection conn = null;
 try {
-  HttpURLConnection conn = aUrl.openConnection(url, token);
+  conn = aUrl.openConnection(url, token);
   conn.setRequestMethod(operation.getHttpMethod());
   HttpExceptionUtils.validateResponse(conn, HttpURLConnection.HTTP_OK);
   if (hasResponse) {
@@ -339,6 +340,9 @@ private Map doDelegationTokenOperation(URL url,
   if (dt != null) {
 ((DelegationTokenAuthenticatedURL.Token) token).setDelegationToken(dt);
   }
+  if (conn != null) {
+conn.disconnect();
+  }
 }
 return ret;
   }
 {code}
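
For comparison, a self-contained sketch (not part of the patch above) of the
pooling-friendly alternative: draining and closing the response stream lets the
JDK keep-alive cache return the socket to its pool, while disconnect() closes
it outright. The class name, the releaseQuietly helper, and the example URL are
illustrative only, not an existing Hadoop API.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public final class ConnectionReleaseSketch {
  /**
   * Release a connection without defeating keep-alive reuse: drain and
   * close the response stream so the JDK can pool the socket. Falls back
   * to disconnect(), which closes the socket and avoids CLOSE_WAIT leaks
   * at the cost of connection reuse.
   */
  static void releaseQuietly(HttpURLConnection conn) {
    if (conn == null) {
      return;
    }
    try {
      InputStream in = conn.getErrorStream() != null
          ? conn.getErrorStream()
          : conn.getInputStream();
      if (in != null) {
        byte[] buf = new byte[4096];
        while (in.read(buf) != -1) {
          // drain remaining bytes so the connection is eligible for reuse
        }
        in.close();
      }
    } catch (IOException e) {
      // If draining fails, close the socket for good.
      conn.disconnect();
    }
  }

  public static void main(String[] args) throws IOException {
    HttpURLConnection conn =
        (HttpURLConnection) new URL("http://example.org/").openConnection();
    try {
      conn.connect();
      System.out.println("HTTP " + conn.getResponseCode());
    } finally {
      releaseQuietly(conn);
    }
  }
}
{code}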

> PseudoAuthenticator does not disconnect HttpURLConnection leading to 
> CLOSE_WAIT cnxns
> -
>
> Key: HADOOP-16881
> URL: https://issues.apache.org/jira/browse/HADOOP-16881
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>
> PseudoAuthenticator and KerberosAuthenticator do not disconnect 
> HttpURLConnection, leading to a lot of CLOSE_WAIT connections. The YARN-8414 
> issue is observed because of this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17311) ABFS: Logs should redact SAS signature

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17311?focusedWorklogId=506179&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506179
 ]

ASF GitHub Bot logged work on HADOOP-17311:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 11:40
Start Date: 29/Oct/20 11:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2422:
URL: https://github.com/apache/hadoop/pull/2422#issuecomment-718698208


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 57s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  |  
hadoop-tools/hadoop-azure: The patch generated 0 new + 2 unchanged - 1 fixed = 
2 total (was 3)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   1m  5s | 
[/new-findbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2422/3/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 26s | 
[/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2422/3/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  78m 53s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-azure |
   |  |  Dead store to urlStr in 
org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.toString()  At 
AbfsHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.toString()
  At AbfsHttpOperation.java:[line 160] |
   | Failed junit tests | hadoop.fs.azure.TestBlobMetadata |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemConcurrency |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemMocked |
   |   | hadoop.fs.azure.TestWasbFsck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemContractMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked |
   |   | hadoop.fs.azure.TestOutOfBandAzureBlobOperations |
   
   
   | Subsystem | Report/Notes |
   

[GitHub] [hadoop] hadoop-yetus commented on pull request #2422: HADOOP-17311. ABFS: Masking SAS signatures from logs

2020-10-29 Thread GitBox


hadoop-yetus commented on pull request #2422:
URL: https://github.com/apache/hadoop/pull/2422#issuecomment-718698208


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 57s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  |  
hadoop-tools/hadoop-azure: The patch generated 0 new + 2 unchanged - 1 fixed = 
2 total (was 3)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   1m  5s | 
[/new-findbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2422/3/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 26s | 
[/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2422/3/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  78m 53s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-azure |
   |  |  Dead store to urlStr in 
org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.toString()  At 
AbfsHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.toString()
  At AbfsHttpOperation.java:[line 160] |
   | Failed junit tests | hadoop.fs.azure.TestBlobMetadata |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemConcurrency |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemMocked |
   |   | hadoop.fs.azure.TestWasbFsck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemContractMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked |
   |   | hadoop.fs.azure.TestOutOfBandAzureBlobOperations |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2422/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2422 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 171fc49e0a1c 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 

[jira] [Work logged] (HADOOP-17311) ABFS: Logs should redact SAS signature

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17311?focusedWorklogId=506175&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506175
 ]

ASF GitHub Bot logged work on HADOOP-17311:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 11:35
Start Date: 29/Oct/20 11:35
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2422:
URL: https://github.com/apache/hadoop/pull/2422#issuecomment-718695545



[GitHub] [hadoop] hadoop-yetus commented on pull request #2422: HADOOP-17311. ABFS: Masking SAS signatures from logs

2020-10-29 Thread GitBox


hadoop-yetus commented on pull request #2422:
URL: https://github.com/apache/hadoop/pull/2422#issuecomment-718695545


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 23s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m  2s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  0s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  
hadoop-tools/hadoop-azure: The patch generated 0 new + 2 unchanged - 1 fixed = 
2 total (was 3)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   1m  0s | 
[/new-findbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2422/2/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 27s | 
[/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2422/2/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  75m 36s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-azure |
   |  |  Dead store to urlStr in 
org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.toString()  At 
AbfsHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.toString()
  At AbfsHttpOperation.java:[line 160] |
   | Failed junit tests | hadoop.fs.azure.TestBlobMetadata |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked |
   |   | hadoop.fs.azure.TestWasbFsck |
   |   | hadoop.fs.azure.TestOutOfBandAzureBlobOperations |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemConcurrency |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemContractMocked |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2422/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2422 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e20026339300 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |

[jira] [Work logged] (HADOOP-17311) ABFS: Logs should redact SAS signature

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17311?focusedWorklogId=506174&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506174
 ]

ASF GitHub Bot logged work on HADOOP-17311:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 11:29
Start Date: 29/Oct/20 11:29
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2422:
URL: https://github.com/apache/hadoop/pull/2422#issuecomment-718692046



[GitHub] [hadoop] hadoop-yetus commented on pull request #2422: HADOOP-17311. ABFS: Masking SAS signatures from logs

2020-10-29 Thread GitBox


hadoop-yetus commented on pull request #2422:
URL: https://github.com/apache/hadoop/pull/2422#issuecomment-718692046


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 15s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 56s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 53s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  
hadoop-tools/hadoop-azure: The patch generated 0 new + 2 unchanged - 1 fixed = 
2 total (was 3)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 16s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   1m  0s | 
[/new-findbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2422/1/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 25s | 
[/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2422/1/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  74m 28s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-azure |
   |  |  Dead store to urlStr in 
org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.toString()  At 
AbfsHttpOperation.java:org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.toString()
  At AbfsHttpOperation.java:[line 160] |
   | Failed junit tests | hadoop.fs.azure.TestBlobMetadata |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked |
   |   | hadoop.fs.azure.TestWasbFsck |
   |   | hadoop.fs.azure.TestOutOfBandAzureBlobOperations |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemConcurrency |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemContractMocked |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2422/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2422 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 9d1b54c16dc8 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |

[jira] [Work logged] (HADOOP-17313) FileSystem.get to support slow-to-instantiate FS clients

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17313?focusedWorklogId=506157&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506157
 ]

ASF GitHub Bot logged work on HADOOP-17313:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 10:50
Start Date: 29/Oct/20 10:50
Worklog Time Spent: 10m 
  Work Description: mehakmeet commented on a change in pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#discussion_r514165609




[GitHub] [hadoop] mehakmeet commented on a change in pull request #2396: HADOOP-17313. FileSystem.get to support slow-to-instantiate FS clients.

2020-10-29 Thread GitBox


mehakmeet commented on a change in pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#discussion_r514165609



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
##
@@ -3517,33 +3546,86 @@ private FileSystem getInternal(URI uri, Configuration 
conf, Key key)
   if (fs != null) {
 return fs;
   }
-
-  fs = createFileSystem(uri, conf);
-  final long timeout = conf.getTimeDuration(SERVICE_SHUTDOWN_TIMEOUT,
-  SERVICE_SHUTDOWN_TIMEOUT_DEFAULT,
-  ShutdownHookManager.TIME_UNIT_DEFAULT);
-  synchronized (this) { // refetch the lock again
-FileSystem oldfs = map.get(key);
-if (oldfs != null) { // a file system is created while lock is 
releasing
-  fs.close(); // close the new file system
-  return oldfs;  // return the old file system
-}
-
-// now insert the new file system into the map
-if (map.isEmpty()
-&& !ShutdownHookManager.get().isShutdownInProgress()) {
-  ShutdownHookManager.get().addShutdownHook(clientFinalizer,
-  SHUTDOWN_HOOK_PRIORITY, timeout,
-  ShutdownHookManager.TIME_UNIT_DEFAULT);
+  // fs not yet created, acquire lock
+  // to construct an instance.
+  try (DurationInfo d =
+  new DurationInfo(LOGGER, false, "Acquiring creator semaphore for 
%s",
+  uri)) {
+creatorPermits.acquire();
+  } catch (InterruptedException e) {
+// acquisition was interrupted; convert to an IOE.
+throw (IOException)new InterruptedIOException(e.toString())
+.initCause(e);
+  }
+  FileSystem fsToClose = null;
+  try {
+// See if FS was instantiated by another thread while waiting
+// for the permit.
+synchronized (this) {
+  fs = map.get(key);
 }
-fs.key = key;
-map.put(key, fs);
-if (conf.getBoolean(
-FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
-  toAutoClose.add(key);
+if (fs != null) {
+  LOGGER.debug("Filesystem {} created while awaiting semaphore", uri);
+  return fs;
 }
-return fs;
+// create the filesystem
+fs = createFileSystem(uri, conf);
+final long timeout = conf.getTimeDuration(SERVICE_SHUTDOWN_TIMEOUT,
+SERVICE_SHUTDOWN_TIMEOUT_DEFAULT,
+ShutdownHookManager.TIME_UNIT_DEFAULT);
+// any FS to close outside of the synchronized section
+synchronized (this) { // lock on the Cache object
+
+  // see if there is now an entry for the FS, which happens
+  // if another thread's creation overlapped with this one.
+  FileSystem oldfs = map.get(key);
+  if (oldfs != null) {
+// a file system was created in a separate thread.
+// save the FS reference to close outside all locks,
+// and switch to returning the oldFS
+fsToClose = fs;
+fs = oldfs;
+  } else {
+// register the clientFinalizer if needed and shutdown isn't
+// already active
+if (map.isEmpty()
+&& !ShutdownHookManager.get().isShutdownInProgress()) {
+  ShutdownHookManager.get().addShutdownHook(clientFinalizer,
+  SHUTDOWN_HOOK_PRIORITY, timeout,
+  ShutdownHookManager.TIME_UNIT_DEFAULT);
+}
+// insert the new file system into the map
+fs.key = key;
+map.put(key, fs);
+if (conf.getBoolean(
+FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
+  toAutoClose.add(key);
+}
+  }
+} // end of synchronized block
+  } finally {
+// release the creator permit.
+creatorPermits.release();
+  }
+  if (fsToClose != null) {
+LOGGER.debug("Duplicate FS created for {}; discarding {}",
+uri, fs);
+discardedInstances.incrementAndGet();
+// close the new file system
+// note this will briefly remove and reinstate "fsToClose" from
+// the map. It is done in a synchronized block so will not be
+// visible to others.
+fsToClose.close();
   }
+  return fs;
+}
+
+/**
+ * Get the count of discarded instances.
+ * @return the new instance.
+ */
+long getDiscardedInstances() {

Review comment:
   Should we have this as @VisibleForTesting?
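
   For reference, the annotation being suggested is Guava's 
`com.google.common.annotations.VisibleForTesting`, which Hadoop already 
depends on; it is documentation-only and does not change Java access rules. 
A minimal sketch of the usual pattern, with a made-up enclosing class:

   ```java
   import java.util.concurrent.atomic.AtomicLong;

   import com.google.common.annotations.VisibleForTesting;

   class CacheStats {
     private final AtomicLong discardedInstances = new AtomicLong();

     void recordDiscarded() {
       discardedInstances.incrementAndGet();
     }

     /** Package-private on purpose; visible to tests in the same package. */
     @VisibleForTesting
     long getDiscardedInstances() {
       return discardedInstances.get();
     }
   }
   ```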

##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemCaching.java
##
@@ -336,4 +344,134 @@ public void testCacheIncludesURIUserInfo() throws 
Throwable {
 assertNotEquals(keyA, new FileSystem.Cache.Key(
 new URI("wasb://a:passw...@account.blob.core.windows.net"), conf));
   }
+
+
+  /**
+   * Single semaphore: no surplus 
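
   To summarize the shape of the change shown in the diff above: the cache 
takes a bounded number of creator permits, re-checks the map after acquiring 
one, creates the filesystem with no locks held, and closes the loser of any 
creation race outside all locks. Below is a minimal standalone sketch of that 
pattern only; the class name `SlowClientCache`, the permit count, and the 
`Function`-based factory are invented here, and this is not the actual 
`FileSystem.Cache` code:

   ```java
   import java.io.Closeable;
   import java.io.IOException;
   import java.util.HashMap;
   import java.util.Map;
   import java.util.concurrent.Semaphore;
   import java.util.function.Function;

   /** Minimal sketch of a cache that throttles slow instance creation. */
   class SlowClientCache<K, V extends Closeable> {

     private final Map<K, V> map = new HashMap<>();

     // Bound on how many threads may run the slow factory at once.
     private final Semaphore creatorPermits = new Semaphore(4);

     V get(K key, Function<K, V> factory) throws IOException {
       synchronized (this) {             // fast path: already cached
         V cached = map.get(key);
         if (cached != null) {
           return cached;
         }
       }
       try {
         creatorPermits.acquire();       // throttle concurrent creators
       } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
         throw new IOException("interrupted acquiring creator permit", e);
       }
       V toClose = null;
       V result;
       try {
         synchronized (this) {           // re-check after the wait
           V cached = map.get(key);
           if (cached != null) {
             return cached;
           }
         }
         V created = factory.apply(key); // slow creation, no locks held
         synchronized (this) {
           V old = map.get(key);
           if (old != null) {            // lost the race: keep the old one
             toClose = created;
             result = old;
           } else {
             map.put(key, created);
             result = created;
           }
         }
       } finally {
         creatorPermits.release();
       }
       if (toClose != null) {
         toClose.close();                // discard duplicate outside locks
       }
       return result;
     }
   }
   ```

   The semaphore bounds how many threads can sit inside the slow factory at 
once, while the checks before and after creation keep the cache returning a 
single canonical instance per key.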

[jira] [Assigned] (HADOOP-17311) ABFS: Logs should redact SAS signature

2020-10-29 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H reassigned HADOOP-17311:
-

Assignee: Bilahari T H  (was: Sneha Vijayarajan)

> ABFS: Logs should redact SAS signature
> --
>
> Key: HADOOP-17311
> URL: https://issues.apache.org/jira/browse/HADOOP-17311
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, security
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Bilahari T H
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Signature part of the SAS should be redacted for security purposes.
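
For context, masking here means redacting the value of the SAS "sig" query 
parameter before a URL is written to a log line. A minimal sketch of that 
idea follows; the class name, the regex, and the "XXXX" replacement token are 
all invented for illustration and are not the patch's actual masking code:

```java
import java.util.regex.Pattern;

/** Hypothetical sketch: mask the SAS "sig" query parameter before logging. */
final class SasRedactor {

  // Matches "?sig=..." or "&sig=..." up to the next '&'.
  private static final Pattern SIG = Pattern.compile("([?&]sig=)[^&]*");

  static String maskSignature(String url) {
    return url == null ? null : SIG.matcher(url).replaceAll("$1XXXX");
  }

  public static void main(String[] args) {
    String url = "https://account.dfs.core.windows.net/container/file"
        + "?sv=2020-02-10&sr=b&sig=SECRETVALUE&sp=r";
    // Prints the URL with "sig=XXXX", which is safe to log.
    System.out.println(maskSignature(url));
  }
}
```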



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17311) ABFS: Logs should redact SAS signature

2020-10-29 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17311:
--
Status: Patch Available  (was: Open)

> ABFS: Logs should redact SAS signature
> --
>
> Key: HADOOP-17311
> URL: https://issues.apache.org/jira/browse/HADOOP-17311
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, security
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Signature part of the SAS should be redacted for security purposes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17311) ABFS: Logs should redact SAS signature

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17311?focusedWorklogId=506146&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-506146
 ]

ASF GitHub Bot logged work on HADOOP-17311:
---

Author: ASF GitHub Bot
Created on: 29/Oct/20 10:13
Start Date: 29/Oct/20 10:13
Worklog Time Spent: 10m 
  Work Description: bilaharith opened a new pull request #2422:
URL: https://github.com/apache/hadoop/pull/2422


   Masking SAS signatures from logs
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 459, Failures: 0, Errors: 0, Skipped: 66
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 459, Failures: 0, Errors: 0, Skipped: 24
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 16
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 459, Failures: 0, Errors: 0, Skipped: 247
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 16



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 506146)
Remaining Estimate: 0h
Time Spent: 10m

> ABFS: Logs should redact SAS signature
> --
>
> Key: HADOOP-17311
> URL: https://issues.apache.org/jira/browse/HADOOP-17311
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, security
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Signature part of the SAS should be redacted for security purposes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17311) ABFS: Logs should redact SAS signature

2020-10-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17311:

Labels: pull-request-available  (was: )

> ABFS: Logs should redact SAS signature
> --
>
> Key: HADOOP-17311
> URL: https://issues.apache.org/jira/browse/HADOOP-17311
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, security
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Signature part of the SAS should be redacted for security purposes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[jira] [Updated] (HADOOP-17329) mvn site commands fails due to MetricsSystemImpl changes

2020-10-29 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HADOOP-17329:
-
Fix Version/s: 3.2.3

> mvn site commands fails due to MetricsSystemImpl changes
> 
>
> Key: HADOOP-17329
> URL: https://issues.apache.org/jira/browse/HADOOP-17329
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.2.3
>
> Attachments: HADOOP-17329.001.patch, HADOOP-17329.002.patch
>
>
> While preparing the branch-3.2.2 release, I found one issue when creating 
> the release. It also exists in trunk.
> Command line: mvn install site site:stage -DskipTests -DskipShade -Pdist,src 
> -Preleasedocs,docs
> The failure log is as follows: 
> {quote}[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.6:site (default-site) on project 
> hadoop-common: failed to get report for 
> org.apache.maven.plugins:maven-dependency-plugin: Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-common: Compilation failure
> [ERROR] 
> /Users/hexiaoqiao/Source/hadoop-common/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java:[298,5]
>  method register(java.lang.String,java.lang.String,T) is already defined 
> in class org.apache.hadoop.metrics2.impl.MetricsSystemImpl{quote}
> I am not sure why the source code of the MetricsSystemImpl class would be 
> changed while building; after reverting HADOOP-17081, everything seems OK.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17329) mvn site commands fails due to MetricsSystemImpl changes

2020-10-29 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17222798#comment-17222798
 ] 

Xiaoqiao He commented on HADOOP-17329:
--

Thanks [~sunilg] for the review and commit. I have added 3.2.3 as a fix 
version too, since we also committed this to branch-3.2. Thanks.

> mvn site commands fails due to MetricsSystemImpl changes
> 
>
> Key: HADOOP-17329
> URL: https://issues.apache.org/jira/browse/HADOOP-17329
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.2.3
>
> Attachments: HADOOP-17329.001.patch, HADOOP-17329.002.patch
>
>
> While preparing the branch-3.2.2 release, I found one issue when creating 
> the release. It also exists in trunk.
> Command line: mvn install site site:stage -DskipTests -DskipShade -Pdist,src 
> -Preleasedocs,docs
> The failure log is as follows: 
> {quote}[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.6:site (default-site) on project 
> hadoop-common: failed to get report for 
> org.apache.maven.plugins:maven-dependency-plugin: Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-common: Compilation failure
> [ERROR] 
> /Users/hexiaoqiao/Source/hadoop-common/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java:[298,5]
>  method register(java.lang.String,java.lang.String,T) is already defined 
> in class org.apache.hadoop.metrics2.impl.MetricsSystemImpl{quote}
> I am not sure why the source code of the MetricsSystemImpl class would be 
> changed while building; after reverting HADOOP-17081, everything seems OK.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] smengcl edited a comment on pull request #2412: [branch-3.2] YARN-10314. YarnClient throws NoClassDefFoundError for WebSocketException with only shaded client jars

2020-10-29 Thread GitBox


smengcl edited a comment on pull request #2412:
URL: https://github.com/apache/hadoop/pull/2412#issuecomment-718466315


   CI shadedclient also passed. Please kindly take another look at the patch 
@jojochuang.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] smengcl commented on pull request #2412: [branch-3.2] YARN-10314. YarnClient throws NoClassDefFoundError for WebSocketException with only shaded client jars

2020-10-29 Thread GitBox


smengcl commented on pull request #2412:
URL: https://github.com/apache/hadoop/pull/2412#issuecomment-718466315


   CI shadedclient also passed. Please kindly take another look at the patch, 
@jojochuang.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2412: [branch-3.2] YARN-10314. YarnClient throws NoClassDefFoundError for WebSocketException with only shaded client jars

2020-10-29 Thread GitBox


hadoop-yetus commented on pull request #2412:
URL: https://github.com/apache/hadoop/pull/2412#issuecomment-718456812


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  16m 43s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ branch-3.2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   4m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 23s |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  41m 52s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  branch-3.2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 15s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 15s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 20s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 17s |  hadoop-client-runtime in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 16s |  hadoop-client-minicluster in the 
patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  83m 18s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2412/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2412 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 3c72a683739b 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / 14a4606 |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~16.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2412/2/testReport/ |
   | Max. process+thread count | 426 (vs. ulimit of 5500) |
   | modules | C: hadoop-client-modules/hadoop-client-runtime 
hadoop-client-modules/hadoop-client-minicluster U: hadoop-client-modules |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2412/2/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on a change in pull request #2377: HDFS-15624. fix the function of setting quota by storage type

2020-10-29 Thread GitBox


ayushtkn commented on a change in pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#discussion_r514032406



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
##
@@ -3569,6 +3569,9 @@ void setQuota(String src, long nsQuota, long ssQuota, 
StorageType type)
 if (type != null) {
   requireEffectiveLayoutVersionForFeature(Feature.QUOTA_BY_STORAGE_TYPE);
 }
+if (type == StorageType.NVDIMM) {
+  requireEffectiveLayoutVersionForFeature(Feature.NVDIMM_SUPPORT);

Review comment:
   This check should also be done for setStoragePolicy, if the storage 
policy is `ALLNVDIMM`.

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
##
@@ -33,13 +33,12 @@
 @InterfaceAudience.Public
 @InterfaceStability.Unstable
 public enum StorageType {
-  // sorted by the speed of the storage types, from fast to slow
   RAM_DISK(true, true),
-  NVDIMM(false, true),
   SSD(false, false),
   DISK(false, false),
   ARCHIVE(false, false),
-  PROVIDED(false, false);
+  PROVIDED(false, false),
+  NVDIMM(false, true);
 

Review comment:
   I am not sure, but will getStoragePolicies also run into a similar issue 
because of the unavailable storage type? The quota handling applies to 
PROVIDED as well, but if this backward incompatibility also exists with 
storage policies, then we need to find a way around it.
   @vinayakumarb, do you have pointers or suggestions on how to tackle this?
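
   To make the compatibility concern concrete: anything that persists or 
transmits a storage type by its enum ordinal breaks when a constant is 
inserted in the middle of the list, which is why the diff moves NVDIMM to 
the end. A toy illustration (the enum names below are invented, and the real 
fsimage/protocol encoding is more involved than a bare ordinal):

   ```java
   /** Toy demo: inserting an enum constant mid-list shifts existing ordinals. */
   class EnumOrdinalDemo {
     enum OldType { DISK, ARCHIVE }              // ARCHIVE has ordinal 1
     enum InsertedType { DISK, NVDIMM, ARCHIVE } // ARCHIVE shifts to ordinal 2
     enum AppendedType { DISK, ARCHIVE, NVDIMM } // ARCHIVE keeps ordinal 1

     public static void main(String[] args) {
       int persisted = OldType.ARCHIVE.ordinal();  // stored by an old version
       System.out.println(InsertedType.values()[persisted]); // NVDIMM -- wrong
       System.out.println(AppendedType.values()[persisted]); // ARCHIVE -- right
     }
   }
   ```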

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
##
@@ -89,7 +89,8 @@ public static boolean supports(final LayoutFeature f, final 
int lv) {
 APPEND_NEW_BLOCK(-62, -61, "Support appending to new block"),
 QUOTA_BY_STORAGE_TYPE(-63, -61, "Support quota for specific storage 
types"),
 ERASURE_CODING(-64, -61, "Support erasure coding"),
-EXPANDED_STRING_TABLE(-65, -61, "Support expanded string table in 
fsimage");
+EXPANDED_STRING_TABLE(-65, -61, "Support expanded string table in 
fsimage"),
+NVDIMM_SUPPORT(-66, -66, "Support NVDIMM storage type");

Review comment:
   I am not fully sure, but yes, if I am reading the comment correctly, this 
should be -66 for both.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] smengcl commented on pull request #2412: [branch-3.2] YARN-10314. YarnClient throws NoClassDefFoundError for WebSocketException with only shaded client jars

2020-10-29 Thread GitBox


smengcl commented on pull request #2412:
URL: https://github.com/apache/hadoop/pull/2412#issuecomment-718402415


   I ran `mvn verify -pl :hadoop-client-minicluster` on the latest commit. It 
passed
   
   ```
   [WARNING] maven-shade-plugin has detected that some class files are
   [WARNING] present in two or more JARs. When this happens, only one
   [WARNING] single version of the class is copied to the uber jar.
   [WARNING] Usually this is not harmful and you can skip these warnings,
   [WARNING] otherwise try to manually exclude artifacts based on
   [WARNING] mvn dependency:tree -Ddetail=true and the above output.
   [WARNING] See http://maven.apache.org/plugins/maven-shade-plugin/
   [INFO] Replacing original artifact with shaded artifact.
   [INFO] Replacing 
/home/systest/verify/YARN-10314-branch-3.2/hadoop-client-modules/hadoop-client-minicluster/target/hadoop-client-minicluster-3.2.3-SNAPSHOT.jar
 with /home/systest
   
/verify/YARN-10314-branch-3.2/hadoop-client-modules/hadoop-client-minicluster/target/hadoop-client-minicluster-3.2.3-SNAPSHOT-shaded.jar
   [INFO] Dependency-reduced POM written at: 
/home/systest/verify/YARN-10314-branch-3.2/hadoop-client-modules/hadoop-client-minicluster/dependency-reduced-pom.xml
   [INFO]
   [INFO] --- license-maven-plugin:1.10:update-file-header (update-pom-license) 
@ hadoop-client-minicluster ---
   [INFO] Will search files to update from root 
/home/systest/verify/YARN-10314-branch-3.2/hadoop-client-modules/hadoop-client-minicluster
   [INFO] Scan 1 file header done in 20.565ms.
   [INFO]
* add header on 1 file.
   [INFO]
   [INFO] --- animal-sniffer-maven-plugin:1.16:check (signature-check) @ 
hadoop-client-minicluster ---
   [INFO] Checking unresolved references to 
org.codehaus.mojo.signature:java18:1.0
   [INFO]
   [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (depcheck) @ 
hadoop-client-minicluster ---
   [INFO] 

   [INFO] BUILD SUCCESS
   [INFO] 

   [INFO] Total time:  02:28 min
   [INFO] Finished at: 2020-10-28T23:48:53-07:00
   [INFO] 

   ```
   
   Pending CI.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org