[jira] [Commented] (HADOOP-17091) [JDK11] Fix Javadoc errors

2020-08-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17169671#comment-17169671
 ] 

Hudson commented on HADOOP-17091:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18488 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18488/])
HADOOP-17091. [JDK11] Fix Javadoc errors (#2098) (github: rev 
c40cbc57fa20d385a971a4b07af13fa28d5908c9)
* (edit) hadoop-project/pom.xml
* (edit) hadoop-common-project/hadoop-common/pom.xml
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
* (edit) Jenkinsfile
* (edit) hadoop-hdfs-project/hadoop-hdfs/pom.xml
* (edit) hadoop-tools/hadoop-aws/pom.xml


> [JDK11] Fix Javadoc errors
> --
>
> Key: HADOOP-17091
> URL: https://issues.apache.org/jira/browse/HADOOP-17091
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
> Environment: Java 11
>Reporter: Uma Maheswara Rao G
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.4.0
>
>
> {noformat}
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  17.982 s
> [INFO] Finished at: 2020-06-20T01:56:28Z
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc (default-cli) on 
> project hadoop-hdfs: An error has occurred in Javadoc report generation: 
> [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version 
> as HTML 4.01 by using the -html4 option.
> [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
> removed
> [ERROR] in a future release. To suppress this warning, please ensure that any 
> HTML constructs
> [ERROR] in your comments are valid in HTML5, and remove the -html4 option.
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25197:
>  error: cannot find symbol
> [ERROR]   com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]  ^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25319:
>  error: cannot find symbol
> [ERROR] com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26068:
>  error: cannot find symbol
> [ERROR] com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26073:
>  error: package com.google.protobuf.GeneratedMessageV3 does not exist
> [ERROR]   private 
> PersistToken(com.google.protobuf.GeneratedMessageV3.Builder builder) {
> {noformat}
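The committed fix edited the pom files listed above. For anyone hitting the same failure independently, a common way to keep javadoc from choking on generated protobuf sources is to exclude their packages in maven-javadoc-plugin; the package patterns below are illustrative, not the exact configuration from the patch:

```xml
<!-- Sketch only: exclude generated protobuf sources from javadoc generation.
     The package patterns are examples, not the committed configuration. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <excludePackageNames>*.proto:*.protocol.proto</excludePackageNames>
  </configuration>
</plugin>
```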



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17137) ABFS: Tests ITestAbfsNetworkStatistics need to be config setting agnostic

2020-07-31 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17169112#comment-17169112
 ] 

Hudson commented on HADOOP-17137:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18484 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18484/])
HADOOP-17137. ABFS: Makes the test cases in ITestAbfsNetworkStatistics (github: 
rev a7fda2e38f2a06e18c2929dff0be978d5e0ef9d5)
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsNetworkStatistics.java


> ABFS: Tests ITestAbfsNetworkStatistics need to be config setting agnostic
> -
>
> Key: HADOOP-17137
> URL: https://issues.apache.org/jira/browse/HADOOP-17137
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: abfsactive
> Fix For: 3.4.0
>
>
> Tests in ITestAbfsNetworkStatistics assert against a static number of 
> network calls counted from the start of filesystem instance creation. But this 
> count depends on certain config settings, such as whether container creation 
> is enabled or whether the account is HNS-enabled (which avoids a GetAcl call).
>  
> The tests need to be modified to ensure that count asserts are made for the 
> requests made by the tests alone.
>  
> {code:java}
> [INFO] Running org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics[INFO] 
> Running org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics[ERROR] Tests 
> run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 4.148 s <<< 
> FAILURE! - in org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics[ERROR] 
> testAbfsHttpResponseStatistics(org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics)
>   Time elapsed: 4.148 s  <<< FAILURE!java.lang.AssertionError: Mismatch in 
> get_responses expected:<8> but was:<7> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.assertAbfsStatistics(AbstractAbfsIntegrationTest.java:445)
>  at 
> org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics.testAbfsHttpResponseStatistics(ITestAbfsNetworkStatistics.java:207)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)
> [ERROR] 
> testAbfsHttpSendStatistics(org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics)
>   Time elapsed: 2.987 s  <<< FAILURE!java.lang.AssertionError: Mismatch in 
> connections_made expected:<6> but was:<5> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.assertAbfsStatistics(AbstractAbfsIntegrationTest.java:445)
>  at 
> org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics.testAbfsHttpSendStatistics(ITestAbfsNetworkStatistics.java:91)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> 
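A config-agnostic version of these asserts can snapshot the counter before the operations under test and assert only on the delta, so setup-time traffic no longer matters. The sketch below illustrates the idea with a plain map; the names are illustrative and this is not the actual ITestAbfsNetworkStatistics code:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch (not the actual Hadoop fix): instead of asserting an absolute
 * request count, record the counter before the operations under test and
 * assert only on the delta.
 */
public class DeltaAssertExample {
  private final Map<String, Long> counters = new HashMap<>();

  long get(String name) {
    return counters.getOrDefault(name, 0L);
  }

  void increment(String name) {
    counters.merge(name, 1L, Long::sum);
  }

  /** Assert that exactly expectedDelta new requests were made by op. */
  void assertDelta(String name, long expectedDelta, Runnable op) {
    long before = get(name);  // snapshot: ignores setup-time requests
    op.run();
    long actualDelta = get(name) - before;
    if (actualDelta != expectedDelta) {
      throw new AssertionError("Mismatch in " + name
          + " expected delta:<" + expectedDelta + "> but was:<" + actualDelta + ">");
    }
  }

  public static void main(String[] args) {
    DeltaAssertExample stats = new DeltaAssertExample();
    // Setup-time traffic that a static assert would wrongly count:
    stats.increment("get_responses");
    stats.increment("get_responses");
    stats.assertDelta("get_responses", 3, () -> {
      stats.increment("get_responses");
      stats.increment("get_responses");
      stats.increment("get_responses");
    });
    System.out.println("delta assertion passed");
  }
}
```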

[jira] [Commented] (HADOOP-17153) Add boost installation steps to build instruction on CentOS 8

2020-07-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165128#comment-17165128
 ] 

Hudson commented on HADOOP-17153:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18471 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18471/])
HADOOP-17153. Add boost installation steps to build instruction on (github: rev 
4b1816c7d0188363896505d0f0f93cb58d44bcd9)
* (edit) BUILDING.txt


> Add boost installation steps to build instruction on CentOS 8
> -
>
> Key: HADOOP-17153
> URL: https://issues.apache.org/jira/browse/HADOOP-17153
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
> Fix For: 3.4.0
>
>
> After HDFS-15385, the -Pnative build fails without Boost 1.72, which is used 
> by libhdfs++. It must be installed from source, since the Boost 1.66 packaged 
> by the CentOS distribution does not satisfy the requirement.






[jira] [Commented] (HADOOP-17141) Add Capability To Get Text Length

2020-07-24 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17164325#comment-17164325
 ] 

Hudson commented on HADOOP-17141:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18469 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18469/])
HADOOP-17141. Add Capability To Get Text Length (#2157) (github: rev 
e60096c377d8a3cb5bed3992352779195be95bb4)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestText.java


> Add Capability To Get Text Length
> -
>
> Key: HADOOP-17141
> URL: https://issues.apache.org/jira/browse/HADOOP-17141
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.4.0
>
>
> The Hadoop {{Text}} class contains a byte array holding a UTF-8 
> encoded string.  However, there is no way to quickly get the length of that 
> string.  One can get the number of bytes in the array, but to determine 
> the length of the String it must be decoded first.  In the simple 
> example below, which sorts {{Text}} objects by String length, the String is 
> decoded from the byte array repeatedly.  This was brought to my attention 
> based on [HIVE-23870].
> {code:java}
>   public static void main(String[] args) {
>     List<Text> list = Arrays.asList(new Text("1"), new Text("22"), new Text("333"));
>     list.sort((Text t1, Text t2) -> t1.toString().length() - t2.toString().length());
>   }
> {code}
> Also helpful if I want to check the last letter in the {{Text}} object 
> repeatedly:
> {code:java}
> Text t = new Text("");
> System.out.println(t.charAt(t.toString().length() - 1));
> {code}
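One way to get a length without a full decode is to count UTF-8 code points directly over the encoded bytes. The sketch below shows that general technique; it is not the API actually added to {{Text}} by this issue:

```java
import java.nio.charset.StandardCharsets;

/**
 * Illustrative sketch: the length of a UTF-8 string counted directly from
 * the encoded bytes, without materializing a java.lang.String.
 */
public class Utf8Length {
  /**
   * Count code points by counting non-continuation bytes (bytes not of the
   * form 10xxxxxx). Characters outside the BMP count as one code point here,
   * so this matches String#codePointCount, not String#length.
   */
  public static int codePointCount(byte[] utf8, int len) {
    int n = 0;
    for (int i = 0; i < len; i++) {
      if ((utf8[i] & 0xC0) != 0x80) {  // not a continuation byte
        n++;
      }
    }
    return n;
  }

  public static void main(String[] args) {
    byte[] b = "héllo".getBytes(StandardCharsets.UTF_8);  // 6 bytes, 5 code points
    System.out.println(codePointCount(b, b.length));  // 5
  }
}
```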






[jira] [Commented] (HADOOP-17113) Adding ReadAhead Counters in ABFS

2020-07-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162969#comment-17162969
 ] 

Hudson commented on HADOOP-17113:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18465 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18465/])
HADOOP-17113. Adding ReadAhead Counters in ABFS (#2154) (github: rev 
48a7c5b6baf3cbf5ef85433c348753842eb8ec7d)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamStatisticsImpl.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsInputStreamStatistics.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamStatistics.java


> Adding ReadAhead Counters in ABFS
> -
>
> Key: HADOOP-17113
> URL: https://issues.apache.org/jira/browse/HADOOP-17113
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
> Fix For: 3.4.0
>
>
> Adding ReadAhead counters in ABFS to track the behavior of the readAhead 
> feature. This would include two counters:
> |READ_AHEAD_BYTES_READ|number of bytes read by readAhead|
> |READ_AHEAD_REMOTE_BYTES_READ|number of bytes not used after readAhead was 
> used|
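Functionally, the two counters amount to a pair of thread-safe longs on the input-stream statistics. A minimal sketch follows; the class, field, and method names are illustrative, not the actual AbfsInputStreamStatistics API:

```java
import java.util.concurrent.atomic.AtomicLong;

/** Minimal sketch of the two counters described above (names illustrative). */
public class ReadAheadCounters {
  private final AtomicLong readAheadBytesRead = new AtomicLong();
  private final AtomicLong remoteBytesRead = new AtomicLong();

  /** Bytes served from the readAhead buffer. */
  public void readAheadBytesRead(long bytes) {
    readAheadBytesRead.addAndGet(bytes);
  }

  /** Bytes that had to be fetched remotely despite readAhead. */
  public void remoteBytesRead(long bytes) {
    remoteBytesRead.addAndGet(bytes);
  }

  public long getReadAheadBytesRead() { return readAheadBytesRead.get(); }
  public long getRemoteBytesRead() { return remoteBytesRead.get(); }

  public static void main(String[] args) {
    ReadAheadCounters c = new ReadAheadCounters();
    c.readAheadBytesRead(4096);
    c.remoteBytesRead(1024);
    System.out.println(c.getReadAheadBytesRead() + " " + c.getRemoteBytesRead());  // 4096 1024
  }
}
```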






[jira] [Commented] (HADOOP-17147) Dead link in hadoop-kms/index.md.vm

2020-07-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162909#comment-17162909
 ] 

Hudson commented on HADOOP-17147:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18464 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18464/])
HADOOP-17147. Dead link in hadoop-kms/index.md.vm. Contributed by (aajisaka: 
rev d5b476615820a7fa75b41e323db5deb5c2ed3bd5)
* (edit) hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm


> Dead link in hadoop-kms/index.md.vm
> ---
>
> Key: HADOOP-17147
> URL: https://issues.apache.org/jira/browse/HADOOP-17147
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Minor
>  Labels: newbie
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HADOOP-17147.000.patch
>
>
> There is a dead link 
> (https://hadoop.apache.org/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html)
>  in 
> https://hadoop.apache.org/docs/r3.3.0/hadoop-kms/index.html#KMS_over_HTTPS_.28SSL.29
> The link should be 
> https://hadoop.apache.org/docs/r3.3.0/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html






[jira] [Commented] (HADOOP-17138) Fix spotbugs warnings surfaced after upgrade to 4.0.6

2020-07-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162474#comment-17162474
 ] 

Hudson commented on HADOOP-17138:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18462 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18462/])
HADOOP-17138. Fix spotbugs warnings surfaced after upgrade to 4.0.6. (github: 
rev 1b29c9bfeee0035dd042357038b963843169d44c)
* (edit) 
hadoop-cloud-storage-project/hadoop-cos/dev-support/findbugs-exclude.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/ThrottledAsyncChecker.java
* (edit) hadoop-mapreduce-project/dev-support/findbugs-exclude.xml
* (edit) hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestTimelineReaderHBaseDown.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/DatasetVolumeChecker.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java


> Fix spotbugs warnings surfaced after upgrade to 4.0.6
> -
>
> Key: HADOOP-17138
> URL: https://issues.apache.org/jira/browse/HADOOP-17138
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.4.0
>
>
> Spotbugs 4.0.6 generated additional warnings.
> {noformat}
> $ find . -name findbugsXml.xml | xargs -n 1 
> /opt/spotbugs-4.0.6/bin/convertXmlToText -longBugCodes
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L5 in 
> org.apache.hadoop.ipc.Server$ConnectionManager.decrUserConnections(String)  
> At Server.java:[line 3729]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L5 in 
> org.apache.hadoop.ipc.Server$ConnectionManager.incrUserConnections(String)  
> At Server.java:[line 3717]
> H D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(Object)
>  overrides the nullness annotation of parameter $L1 in an incompatible way  
> At DatasetVolumeChecker.java:[line 322]
> H D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(VolumeCheckResult)
>  overrides the nullness annotation of parameter result in an incompatible way 
>  At DatasetVolumeChecker.java:[lines 358-376]
> M D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$2.onSuccess(Object)
>  overrides the nullness annotation of parameter result in an incompatible way 
>  At ThrottledAsyncChecker.java:[lines 170-175]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L8 in 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.incrOpCount(FSEditLogOpCodes,
>  EnumMap, Step, StartupProgress$Counter)  At FSEditLogLoader.java:[line 1241]
> M D NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE NP: result must be 
> non-null but is marked as nullable  At LocatedFileStatusFetcher.java:[lines 
> 380-397]
> M D NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE NP: result must be 
> non-null but is marked as nullable  At LocatedFileStatusFetcher.java:[lines 
> 291-309]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L6 in 
> org.apache.hadoop.yarn.sls.SLSRunner.increaseQueueAppNum(String)  At 
> SLSRunner.java:[line 816]
> H C UMAC_UNCALLABLE_METHOD_OF_ANONYMOUS_CLASS UMAC: Uncallable method 
> org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
>  defined in anonymous class  At 
> TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to entities in 
> org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
>   At TestTimelineReaderHBaseDown.java:[line 190]
> M V EI_EXPOSE_REP EI: 
> org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose 
> internal representation by returning CosNInputStream$ReadBuffer.buffer  At 
> CosNInputStream.java:[line 87]
> {noformat}
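Several of these warnings were resolved by editing the findbugs-exclude.xml files in the file list above. For reference, a suppression entry for one of the dead-store warnings would look roughly like this (the entries actually committed may differ):

```xml
<!-- Illustrative SpotBugs suppression; check the committed
     findbugs-exclude.xml files for the real entries. -->
<Match>
  <Class name="org.apache.hadoop.yarn.sls.SLSRunner"/>
  <Method name="increaseQueueAppNum"/>
  <Bug pattern="DLS_DEAD_LOCAL_STORE"/>
</Match>
```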






[jira] [Commented] (HADOOP-17092) ABFS: Long waits and unintended retries when multiple threads try to fetch token using ClientCreds

2020-07-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162166#comment-17162166
 ] 

Hudson commented on HADOOP-17092:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18461 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18461/])
HADOOP-17092. ABFS: Making AzureADAuthenticator.getToken() throw (github: rev 
b4b23ef0d1a0afe6251370a61f922ecdb1624165)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
* (edit) hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ExponentialRetryPolicy.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAzureADAuthenticator.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java


> ABFS: Long waits and unintended retries when multiple threads try to fetch 
> token using ClientCreds
> --
>
> Key: HADOOP-17092
> URL: https://issues.apache.org/jira/browse/HADOOP-17092
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Bilahari T H
>Priority: Major
>  Labels: abfsactive
> Fix For: 3.4.0
>
>
> Issue reported by DB:
> we recently experienced some problems with ABFS driver that highlighted a 
> possible issue with long hangs following synchronized retries when using the 
> _ClientCredsTokenProvider_ and calling _AbfsClient.getAccessToken_. We have 
> seen 
> [https://github.com/apache/hadoop/pull/1923],
>  but it does not directly apply since we are not using a custom token 
> provider, but instead _ClientCredsTokenProvider_ that ultimately relies on 
> _AzureADAuthenticator_. 
>  
> The problem was that the critical section of getAccessToken, combined with a 
> possibly redundant retry policy, made jobs hang for a very long time, 
> since only one thread at a time could make progress, and this progress 
> amounted to basically retrying on a failing connection for 30-60 minutes.
>  
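One standard way to avoid this pattern is to keep the fast path lock-free and re-check inside the lock, so at most one thread performs the remote fetch while the others reuse its result. A minimal sketch, with illustrative names rather than the actual AzureADAuthenticator API:

```java
/**
 * Sketch (illustrative names, not the AzureADAuthenticator API): check token
 * validity outside the lock, and re-check inside it so only one thread
 * performs the remote fetch when the token actually needs refreshing.
 */
public class CachedTokenProvider {
  private volatile String token;
  private volatile long expiryMillis;
  private final Object refreshLock = new Object();

  public String getToken() {
    if (isValid()) {           // fast path: no lock while the token is fresh
      return token;
    }
    synchronized (refreshLock) {
      if (isValid()) {         // another thread may have refreshed already
        return token;
      }
      token = fetchFromIdp();  // only one thread makes the remote call
      expiryMillis = System.currentTimeMillis() + 60 * 60 * 1000L;
      return token;
    }
  }

  private boolean isValid() {
    return token != null && System.currentTimeMillis() < expiryMillis;
  }

  /** Stand-in for the remote AAD call. */
  protected String fetchFromIdp() {
    return "token-" + System.nanoTime();
  }

  public static void main(String[] args) {
    CachedTokenProvider p = new CachedTokenProvider();
    String first = p.getToken();
    System.out.println(first.equals(p.getToken()));  // cached while valid
  }
}
```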






[jira] [Commented] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException

2020-07-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161415#comment-17161415
 ] 

Hudson commented on HADOOP-17119:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18455 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18455/])
HADOOP-17119. Jetty upgrade to 9.4.x causes MR app fail with (ayushsaxena: rev 
f2033de2342d20d5f540775dfe4848d452c68957)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java


> Jetty upgrade to 9.4.x causes MR app fail with IOException
> --
>
> Key: HADOOP-17119
> URL: https://issues.apache.org/jira/browse/HADOOP-17119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-17119.001.patch, HADOOP-17119.002.patch
>
>
> I think we should catch IOException here instead of BindException in 
> HttpServer2#bindForPortRange
> {code:java}
>  for(Integer port : portRanges) {
>   if (port == startPort) {
> continue;
>   }
>   Thread.sleep(100);
>   listener.setPort(port);
>   try {
> bindListener(listener);
> return;
>   } catch (BindException ex) {
> // Ignore exception. Move to next port.
> ioException = ex;
>   }
> }
> {code}
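The proposed change is to widen the catch, since Jetty 9.4 throws an IOException (wrapping the underlying BindException) from openAcceptChannel, which escapes the `catch (BindException)` above and aborts the port scan. The sketch below shows the revised loop shape with a stand-in listener, not Jetty's ServerConnector:

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

/**
 * Sketch of the proposed fix: catch IOException rather than BindException so
 * port-range scanning keeps moving to the next port when Jetty 9.4 wraps the
 * BindException. The Listener interface is a stand-in for the real connector.
 */
public class PortRangeBind {
  interface Listener {
    void bind(int port) throws IOException;
  }

  /** Returns the first port that binds successfully, or -1 if none do. */
  static int bindForPortRange(List<Integer> ports, Listener listener) {
    for (int port : ports) {
      try {
        listener.bind(port);
        return port;
      } catch (IOException ex) {  // was: catch (BindException ex)
        // Ignore exception. Move to next port, as the original loop intends.
      }
    }
    return -1;
  }

  public static void main(String[] args) {
    // Ports below 9000 "fail" with a wrapped BindException, as Jetty 9.4 does.
    int bound = bindForPortRange(Arrays.asList(8080, 8081, 9000), port -> {
      if (port < 9000) {
        throw new IOException("Failed to bind to x:" + port,
            new java.net.BindException("Address already in use"));
      }
    });
    System.out.println(bound);  // 9000
  }
}
```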
> Stacktrace:
> {code:java}
>  HttpServer.start() threw a non Bind IOException | HttpServer2.java:1142
> java.io.IOException: Failed to bind to x/xxx.xx.xx.xx:27101
>   at 
> org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:346)
>   at 
> org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307)
>   at 
> org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:1190)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForPortRange(HttpServer2.java:1258)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1282)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:451)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:440)
>   at 
> org.apache.hadoop.mapreduce.v2.app.client.MRClientService.serviceStart(MRClientService.java:148)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1378)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$7.run(MRAppMaster.java:1998)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1994)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1890)
> Caused by: java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:220)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:85)
>   at 
> org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342)
>   ... 17 more
> {code}






[jira] [Commented] (HADOOP-17136) ITestS3ADirectoryPerformance.testListOperations failing

2020-07-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161356#comment-17161356
 ] 

Hudson commented on HADOOP-17136:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18453 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18453/])
HADOOP-17136. ITestS3ADirectoryPerformance.testListOperations failing (github: 
rev bb459d4dd607d3e4d259e3c8cc47b93062d78e4d)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3ADirectoryPerformance.java


> ITestS3ADirectoryPerformance.testListOperations failing
> ---
>
> Key: HADOOP-17136
> URL: https://issues.apache.org/jira/browse/HADOOP-17136
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Minor
> Fix For: 3.4.0
>
>
> Because of HADOOP-17022
> [INFO] Running org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
> [ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 670.029 s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
> [ERROR] 
> testListOperations(org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance)
>   Time elapsed: 44.089 s  <<< FAILURE!
> java.lang.AssertionError: object_list_requests starting=166 current=167 
> diff=1 expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance.testListOperations(ITestS3ADirectoryPerformance.java:117)






[jira] [Commented] (HADOOP-17107) hadoop-azure parallel tests not working on recent JDKs

2020-07-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161156#comment-17161156
 ] 

Hudson commented on HADOOP-17107:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #18452 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18452/])
HADOOP-17107. hadoop-azure parallel tests not working on recent JDKs (github: 
rev 9f407bcc88a315dd72ba4c2e9935f3a94d2e0174)
* (edit) hadoop-tools/hadoop-azure/pom.xml


> hadoop-azure parallel tests not working on recent JDKs
> --
>
> Key: HADOOP-17107
> URL: https://issues.apache.org/jira/browse/HADOOP-17107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, fs/azure
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Recent JDKs fail to run the wasb or abfs parallel test runs: they are unable 
> to instantiate the JavaScript engine.
> Perhaps it has been removed from the JVM (Nashorn was removed in JDK 15), or 
> the Ant script task cannot bind to it.
> The fix is the same as in HADOOP-14696: use our own plugin to set up the test dirs.






[jira] [Commented] (HADOOP-16682) Remove unnecessary toString() invocations

2020-07-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17160507#comment-17160507
 ] 

Hudson commented on HADOOP-16682:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18449 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18449/])
HADOOP-16682. ABFS: Removing unnecessary toString() invocations (github: rev 
99655167f308b9c59e66b1b5d0d1fd5741cd75de)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java


> Remove unnecessary toString() invocations
> -
>
> Key: HADOOP-16682
> URL: https://issues.apache.org/jira/browse/HADOOP-16682
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Jeetesh Mangwani
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: abfsactive
>
> Remove unnecessary toString() invocations from the hadoop-azure module
> For example:
> permission.toString() in the line here: 
> https://github.com/apache/hadoop/blob/04a6c095cf6d09b6ad417f1f7b7c64fbfdc9d5e4/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java#L386
> path.toString() in the line here: 
> https://github.com/apache/hadoop/blob/04a6c095cf6d09b6ad417f1f7b7c64fbfdc9d5e4/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java#L795
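The reason such calls are unnecessary: string concatenation (and parameterized logging) invokes toString() implicitly via String.valueOf, which additionally tolerates null, unlike an explicit call. A small illustration:

```java
/**
 * Illustration of why explicit toString() in concatenation is redundant:
 * the + operator goes through String.valueOf, which calls toString() and
 * also handles null safely, which an explicit toString() call does not.
 */
public class ToStringExample {
  public static String describe(Object path) {
    return "path=" + path;  // implicit toString(); yields "path=null" for null
  }

  public static void main(String[] args) {
    System.out.println(describe("/tmp/x"));  // path=/tmp/x
    System.out.println(describe(null));      // path=null, no NPE
  }
}
```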






[jira] [Commented] (HADOOP-17100) Replace Guava Supplier with Java8+ Supplier in Hadoop

2020-07-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17160383#comment-17160383
 ] 

Hudson commented on HADOOP-17100:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18447 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18447/])
HADOOP-17100. Replace Guava Supplier with Java8+ Supplier in Hadoop. 
(ayushsaxena: rev 6bcb24d26930b3a2abfdd533f4aea0ce670c78a1)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/checker/TestThrottledAsyncChecker.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/UtilsForTests.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNodeSync.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FCStatisticsBaseTest.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestTaskHeartbeatHandler.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMContainerAllocator.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/tracing/SetSpanReceiver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyIsHot.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplaceDatanodeOnFailure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferKeepalive.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/HATestUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/amfilter/TestAmFilter.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestZKFailoverController.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptListenerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCorruptMetadataFile.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestEnhancedByteBufferAccess.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingInvalidateBlock.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRandomOpsWithSnapshots.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCorruption.java
* (edit) 

[jira] [Commented] (HADOOP-16866) Upgrade spotbugs to 4.0.6

2020-07-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17159869#comment-17159869
 ] 

Hudson commented on HADOOP-16866:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18446 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18446/])
HADOOP-16866. Upgrade spotbugs to 4.0.6. (#2146) (github: rev 
2ba44a73bf2bb7ef33a2259bd19ee62ef9bb5659)
* (edit) hadoop-project/pom.xml


> Upgrade spotbugs to 4.0.6
> -
>
> Key: HADOOP-16866
> URL: https://issues.apache.org/jira/browse/HADOOP-16866
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.4.0
>
>
> [https://github.com/spotbugs/spotbugs/releases]
> spotbugs 4.0.0 is now released. 
>  
> We can upgrade the spotbugs version to:
> 1. 3.1.12  (conservative option)
> 2. 4.0.0 (which might include incompatible changes, according to the 
> migration guide: [https://spotbugs.readthedocs.io/en/stable/migration.html])
>  
> A step-by-step approach is also acceptable.






[jira] [Commented] (HADOOP-17130) Configuration.getValByRegex() shouldn't update the results while fetching.

2020-07-16 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17159366#comment-17159366
 ] 

Hudson commented on HADOOP-17130:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18444 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18444/])
HADOOP-17130. Configuration.getValByRegex() shouldn't be updating the (github: 
rev b21cb91c7f766b4d2920b893f756a0431f925a18)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java


> Configuration.getValByRegex() shouldn't update the results while fetching.
> --
>
> Key: HADOOP-17130
> URL: https://issues.apache.org/jira/browse/HADOOP-17130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.1.3
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>
> We have seen this stack trace while using the ABFS file system. Analysing it 
> shows that getValByRegex() reads the properties and substitutes values in the 
> same call. This may cause a 
> ConcurrentModificationException. 
> {code:java}
> Caused by: java.util.concurrent.ExecutionException: 
> java.util.ConcurrentModificationException at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122) at 
> java.util.concurrent.FutureTask.get(FutureTask.java:192) at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1877)
>  ... 18 more Caused by: java.util.ConcurrentModificationException at 
> java.util.Hashtable$Enumerator.next(Hashtable.java:1387) at 
> org.apache.hadoop.conf.Configuration.getValByRegex(Configuration.java:3855) 
> at 
> org.apache.hadoop.fs.azurebfs.AbfsConfiguration.validateStorageAccountKeys(AbfsConfiguration.java:689)
>  at 
> org.apache.hadoop.fs.azurebfs.AbfsConfiguration.(AbfsConfiguration.java:237)
>  at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.(AzureBlobFileSystemStore.java:154)
>  at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:113)
>  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3396) at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:158) at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3456) at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3424) at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:518) at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>  
> {code}
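The failure mode above can be reproduced in miniature. This sketch is hypothetical (the property names are made up and the map type is simplified to HashMap, versus the Hashtable in the stack trace), but the mechanism is the same: writing a new key into a map while iterating over it is a structural modification that the fail-fast iterator rejects.

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class CmeDemo {
    public static void main(String[] args) {
        // Hypothetical config properties; the key names are made up
        Map<String, String> props = new HashMap<>();
        props.put("fs.azure.account.key.one", "${raw}");
        props.put("fs.azure.account.key.two", "value");
        boolean caught = false;
        try {
            for (Map.Entry<String, String> e : props.entrySet()) {
                // Writing a NEW key while iterating is a structural change,
                // which the fail-fast iterator detects on the next step
                props.put("resolved." + e.getKey(), e.getValue());
            }
        } catch (ConcurrentModificationException ex) {
            caught = true;
        }
        System.out.println(caught); // prints true
    }
}
```

One common remedy, in the spirit of this fix, is to accumulate the substituted values in a separate map while iterating and only merge them in afterwards.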






[jira] [Commented] (HADOOP-17129) Validating storage keys in ABFS correctly

2020-07-16 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17159346#comment-17159346
 ] 

Hudson commented on HADOOP-17129:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18443 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18443/])
HADOOP-17129. Validating storage keys in ABFS correctly (#2141) (github: rev 
4083fd57b5e0465a06bab82f6f6e09faa0c0388c)
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsConfigurationFieldsValidation.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/SimpleKeyProvider.java


> Validating storage keys in ABFS correctly
> -
>
> Key: HADOOP-17129
> URL: https://issues.apache.org/jira/browse/HADOOP-17129
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>
> Storage keys in ABFS should be validated after the keys have been loaded.
> Work:
>  - Remove the previous validation of storage keys.
>  - Validate at the correct place, after the keys are loaded.






[jira] [Commented] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate

2020-07-15 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158532#comment-17158532
 ] 

Hudson commented on HADOOP-17099:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18442 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18442/])
HADOOP-17099. Replace Guava Predicate with Java8+ Predicate (jeagles: rev 
1f71c4ae71427a8a7476eaef64187a5643596552)
* (edit) hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/LogAggregationFileController.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestMetricsSystemImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeResourceChecker.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CombinedHostFileManager.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/MetricsRecords.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/Snapshot.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/LogAggregationIndexedFileController.java


> Replace Guava Predicate with Java8+ Predicate
> -
>
> Key: HADOOP-17099
> URL: https://issues.apache.org/jira/browse/HADOOP-17099
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Minor
> Attachments: HADOOP-17099.004.patch, HADOOP-17099.005.patch, 
> HADOOP-17099.006.patch, HADOOP-17099.007.patch
>
>
> {{com.google.common.base.Predicate}} can be replaced with 
> {{java.util.function.Predicate}}. 
> The change involving 9 occurrences is straightforward:
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Predicate' in project with mask 
> '*.java'
> Found Occurrences  (9 usages found)
> org.apache.hadoop.hdfs.server.blockmanagement  (1 usage found)
> CombinedHostFileManager.java  (1 usage found)
> 43 import com.google.common.base.Predicate;
> org.apache.hadoop.hdfs.server.namenode  (1 usage found)
> NameNodeResourceChecker.java  (1 usage found)
> 38 import com.google.common.base.Predicate;
> org.apache.hadoop.hdfs.server.namenode.snapshot  (1 usage found)
> Snapshot.java  (1 usage found)
> 41 import com.google.common.base.Predicate;
> org.apache.hadoop.metrics2.impl  (2 usages found)
> MetricsRecords.java  (1 usage found)
> 21 import com.google.common.base.Predicate;
> TestMetricsSystemImpl.java  (1 usage found)
> 41 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation  (1 usage found)
> AggregatedLogFormat.java  (1 usage found)
> 77 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation.filecontroller  (1 usage found)
> LogAggregationFileController.java  (1 usage found)
> 22 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation.filecontroller.ifile  (1 usage 
> found)
> LogAggregationIndexedFileController.java  (1 usage found)
> 22 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation 
>  (1 usage found)
> AppLogAggregatorImpl.java  (1 usage found)
> 75 import com.google.common.base.Predicate;
> {code}
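The migration is mostly mechanical: Guava's {{Predicate#apply}} becomes {{java.util.function.Predicate#test}}, and the JDK type plugs straight into streams. A minimal sketch (the names and filter condition are illustrative, not from the patch):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class PredicateMigration {
    public static void main(String[] args) {
        // Before: com.google.common.base.Predicate<String> isLog = ...;
        //         invoked via isLog.apply(name)
        // After: the JDK type, invoked via isLog.test(name)
        Predicate<String> isLog = name -> name.endsWith(".log");
        List<String> files = Arrays.asList("app.log", "data.csv", "gc.log");
        List<String> logs = files.stream()
                .filter(isLog)               // Predicate works directly with streams
                .collect(Collectors.toList());
        System.out.println(logs); // prints [app.log, gc.log]
    }
}
```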






[jira] [Commented] (HADOOP-17101) Replace Guava Function with Java8+ Function

2020-07-15 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158362#comment-17158362
 ] 

Hudson commented on HADOOP-17101:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18441 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18441/])
HADOOP-17101. Replace Guava Function with Java8+ Function (jeagles: rev 
98fcffe93f9ef910654574f69591fcdc621523af)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/JournalSet.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/HATestUtil.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestFileInputFormat.java
* (edit) hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostSet.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequestPBImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/RemoteEditLog.java


> Replace Guava Function with Java8+ Function
> ---
>
> Key: HADOOP-17101
> URL: https://issues.apache.org/jira/browse/HADOOP-17101
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17101.005.patch, HADOOP-17101.006.patch, 
> HADOOP-17101.008.patch
>
>
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Function'
> Found Occurrences  (7 usages found)
> hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff  (1 usage found)
> Apache_Hadoop_HDFS_2.6.0.xml  (1 usage found)
> 13603  type="com.google.common.base.Function"
> org.apache.hadoop.hdfs.server.blockmanagement  (1 usage found)
> HostSet.java  (1 usage found)
> 20 import com.google.common.base.Function;
> org.apache.hadoop.hdfs.server.datanode.checker  (1 usage found)
> AbstractFuture.java  (1 usage found)
> 58 * (ListenableFuture, com.google.common.base.Function) 
> Futures.transform}
> org.apache.hadoop.hdfs.server.namenode.ha  (1 usage found)
> HATestUtil.java  (1 usage found)
> 40 import com.google.common.base.Function;
> org.apache.hadoop.hdfs.server.protocol  (1 usage found)
> RemoteEditLog.java  (1 usage found)
> 20 import com.google.common.base.Function;
> org.apache.hadoop.mapreduce.lib.input  (1 usage found)
> TestFileInputFormat.java  (1 usage found)
> 58 import com.google.common.base.Function;
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb  (1 usage found)
> GetApplicationsRequestPBImpl.java  (1 usage found)
> 38 import com.google.common.base.Function;
> {code}
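For {{Function}} the swap is even smaller than for {{Predicate}}: both Guava's and the JDK's interface expose {{apply()}}, so the change is usually just the import plus the JDK extras such as composition. A hedged sketch with made-up names:

```java
import java.util.function.Function;

public class FunctionMigration {
    public static void main(String[] args) {
        // Before: import com.google.common.base.Function;
        // After:  import java.util.function.Function;  (same apply() method)
        Function<String, Integer> length = String::length;
        // The JDK type additionally supports composition out of the box
        Function<String, Integer> doubled = length.andThen(n -> n * 2);
        System.out.println(length.apply("hadoop"));  // prints 6
        System.out.println(doubled.apply("hadoop")); // prints 12
    }
}
```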






[jira] [Commented] (HADOOP-17127) Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime

2020-07-14 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17157567#comment-17157567
 ] 

Hudson commented on HADOOP-17127:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18435 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18435/])
HADOOP-17127. Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and (xkrogen: 
rev 317fe4584a51cfe553e4098d48170cd2898b9732)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcScheduler.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java


> Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime
> --
>
> Key: HADOOP-17127
> URL: https://issues.apache.org/jira/browse/HADOOP-17127
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: HADOOP-17127.001.patch, HADOOP-17127.002.patch
>
>
> While making an internal change to use {{TimeUnit.MICROSECONDS}} instead of 
> {{TimeUnit.MILLISECONDS}} for rpc details, we found that we also had to 
> modify this code in DecayRpcScheduler.addResponseTime() to initialize 
> {{queueTime}} and {{processingTime}} with the correct units.
> {noformat}
> long queueTime = details.get(Timing.QUEUE, TimeUnit.MILLISECONDS);
> long processingTime = details.get(Timing.PROCESSING, 
> TimeUnit.MILLISECONDS);
> {noformat}
> Changing these to use {{RpcMetrics.TIMEUNIT}} is simpler.
> We also found one test case in TestRPC that was assuming the units were 
> milliseconds.
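The idea can be sketched outside Hadoop (the classes below are stand-ins, not the real {{RpcMetrics}} or {{ProcessingDetails}}): durations are read through one shared unit constant, so changing the unit in one place keeps every call site and metric consistent.

```java
import java.util.concurrent.TimeUnit;

public class TimingDemo {
    // Analogue of RpcMetrics.TIMEUNIT: change it once, all call sites follow
    static final TimeUnit TIMEUNIT = TimeUnit.MILLISECONDS;

    // Stand-in for ProcessingDetails: stores timings internally in nanoseconds
    static long queueNanos = 5_000_000L; // 5 ms

    static long get(TimeUnit unit) {
        return unit.convert(queueNanos, TimeUnit.NANOSECONDS);
    }

    public static void main(String[] args) {
        // Before: long queueTime = details.get(Timing.QUEUE, TimeUnit.MILLISECONDS);
        // After:  long queueTime = details.get(Timing.QUEUE, RpcMetrics.TIMEUNIT);
        long queueTime = get(TIMEUNIT);
        System.out.println(queueTime); // prints 5
    }
}
```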






[jira] [Commented] (HADOOP-17022) Tune listFiles() api.

2020-07-14 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17157433#comment-17157433
 ] 

Hudson commented on HADOOP-17022:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18434 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18434/])
HADOOP-17022. Tune S3AFileSystem.listFiles() API. (stevel: rev 
4647a60430136aa4abc18d5112b93a8b927dbd1f)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileOperationCost.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardListConsistency.java


> Tune listFiles() api.
> -
>
> Key: HADOOP-17022
> URL: https://issues.apache.org/jira/browse/HADOOP-17022
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>
> The optimisation done for listLocatedStatus() in 
> https://issues.apache.org/jira/browse/HADOOP-16465 can be done for the 
> listFiles() and listStatus() APIs as well. 
> This will reduce the number of remote calls when listing 
> directories.
>  
> CC [~ste...@apache.org] [~shwethags]






[jira] [Commented] (HADOOP-16998) WASB : NativeAzureFsOutputStream#close() throwing IllegalArgumentException

2020-07-14 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17157359#comment-17157359
 ] 

Hudson commented on HADOOP-16998:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18432 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18432/])
HADOOP-16998. WASB : NativeAzureFsOutputStream#close() throwing (github: rev 
380e0f4506a818d6337271ae6d996927f70b601b)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/SyncableDataOutputStream.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestSyncableDataOutputStream.java


> WASB : NativeAzureFsOutputStream#close() throwing IllegalArgumentException
> --
>
> Key: HADOOP-16998
> URL: https://issues.apache.org/jira/browse/HADOOP-16998
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Major
> Attachments: HADOOP-16998.patch
>
>
> During HFile creation, when close() is finally called on the OutputStream, 
> there is some pending data to be flushed. When this flush happens, an 
> Exception is thrown back from Storage, and the Azure-storage SDK layer throws 
> back an IOE. (Even if a StorageException is thrown from Storage, the SDK 
> converts it to an IOE.) But at HBase we end up getting an 
> IllegalArgumentException, which causes the RS to abort. If we got back an 
> IOE, the flush would be retried instead of aborting the RS.
> The reason is this:
> NativeAzureFsOutputStream uses the Azure-storage SDK's 
> BlobOutputStreamInternal, but BlobOutputStreamInternal is wrapped within a 
> SyncableDataOutputStream, which is a FilterOutputStream. During the close op, 
> NativeAzureFsOutputStream calls close on SyncableDataOutputStream, which uses 
> the following method from FilterOutputStream:
> {code}
> public void close() throws IOException {
>   try (OutputStream ostream = out) {
>   flush();
>   }
> }
> {code}
> Here the flush call throws an IOE. The try-with-resources then issues a 
> close call on ostream (which is an instance of BlobOutputStreamInternal).
> When BlobOutputStreamInternal#close() is called, if an exception has already 
> occurred on that stream, it throws back the same 
> Exception:
> {code}
> public synchronized void close() throws IOException {
>   try {
>   // if the user has already closed the stream, this will throw a 
> STREAM_CLOSED exception
>   // if an exception was thrown by any thread in the 
> threadExecutor, realize it now
>   this.checkStreamState();
>   ...
> }
> private void checkStreamState() throws IOException {
>   if (this.lastError != null) {
>   throw this.lastError;
>   }
> }
> {code}
> So here both the try block and the implicit close throw Exceptions, and Java 
> uses Throwable#addSuppressed(). 
> Within this method, if both Exceptions are the same object, it throws an 
> IllegalArgumentException:
> {code}
> public final synchronized void addSuppressed(Throwable exception) {
>   if (exception == this)
>  throw new 
> IllegalArgumentException(SELF_SUPPRESSION_MESSAGE, exception);
>   
> }
> {code}
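The failure mode can be reproduced with a tiny stand-in stream. This is a hypothetical sketch (the class is made up; the real players are BlobOutputStreamInternal and SyncableDataOutputStream), but it exercises the same path: flush() and close() throw the same exception object, and try-with-resources then attempts addSuppressed() on itself.

```java
import java.io.IOException;
import java.io.OutputStream;

public class SelfSuppressionDemo {
    public static void main(String[] args) {
        final IOException lastError = new IOException("flush failed");
        OutputStream failing = new OutputStream() {
            @Override public void write(int b) {}
            @Override public void flush() throws IOException { throw lastError; }
            // close() rethrows the SAME object the flush raised, like
            // BlobOutputStreamInternal#checkStreamState() does
            @Override public void close() throws IOException { throw lastError; }
        };
        try {
            // Mirrors FilterOutputStream#close(): flush inside try-with-resources
            try (OutputStream ostream = failing) {
                ostream.flush();
            }
        } catch (IOException e) {
            System.out.println("IOE surfaced; flush would be retried");
        } catch (IllegalArgumentException e) {
            // addSuppressed(self) rejects suppressing an exception with itself,
            // so the IOE never surfaces and the caller sees IAE instead
            System.out.println(e.getClass().getSimpleName()); // prints IllegalArgumentException
        }
    }
}
```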






[jira] [Commented] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy

2020-07-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17156956#comment-17156956
 ] 

Hudson commented on HADOOP-17116:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18430 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18430/])
HADOOP-17116. Skip Retry INFO logging on first failover from a proxy 
(hanishakoneru: rev e62d8f841275ee47a0ba911415aac9e39af291c6)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryInvocationHandler.java


> Skip Retry INFO logging on first failover from a proxy
> --
>
> Key: HADOOP-17116
> URL: https://issues.apache.org/jira/browse/HADOOP-17116
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HADOOP-17116.001.patch, HADOOP-17116.002.patch, 
> HADOOP-17116.003.patch
>
>
> RetryInvocationHandler logs an INFO-level message on every failover except 
> the first. This was ideal when there were only 2 proxies in the 
> FailoverProxyProvider, but if there are more than 2 proxies (as is possible 
> with 3 or more NNs in HA), there can be more than one failover before the 
> currently active proxy is found.
> To avoid creating noise in client logs/consoles, RetryInvocationHandler 
> should skip logging once for each proxy.






[jira] [Commented] (HADOOP-17105) S3AFS globStatus attempts to resolve symlinks

2020-07-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17156913#comment-17156913
 ] 

Hudson commented on HADOOP-17105:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18428 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18428/])
HADOOP-17105. S3AFS - Do not attempt to resolve symlinks in globStatus (github: 
rev 806d84b79c97cd0bbed324f6a324d7c110a6fd87)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileOperationCost.java


> S3AFS globStatus attempts to resolve symlinks
> -
>
> Key: HADOOP-17105
> URL: https://issues.apache.org/jira/browse/HADOOP-17105
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Jimmy Zuber
>Assignee: Jimmy Zuber
>Priority: Minor
> Fix For: 3.3.1
>
>
> The S3AFileSystem implementation of the globStatus API has a setting 
> configured to resolve symlinks. Under certain circumstances, this will cause 
> additional file existence checks to be performed in order to determine if a 
> FileStatus signifies a symlink. As symlinks are not supported in 
> S3AFileSystem, these calls are unnecessary.
> Code snapshot (permalink): 
> [https://github.com/apache/hadoop/blob/2a67e2b1a0e3a5f91056f5b977ef9c4c07ba6718/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L4002]
> Causes additional getFileStatus call here (permalink): 
> [https://github.com/apache/hadoop/blob/1921e94292f0820985a0cfbf8922a2a1a67fe921/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java#L308]
> Current code snippet:
> {code:java}
> /**
>* Override superclass so as to disable symlink resolution and so avoid
>* some calls to the FS which may have problems when the store is being
>* inconsistent.
>* {@inheritDoc}
>*/
>   @Override
>   public FileStatus[] globStatus(
>   final Path pathPattern,
>   final PathFilter filter)
>   throws IOException {
> entryPoint(INVOCATION_GLOB_STATUS);
> return Globber.createGlobber(this)
> .withPathPattern(pathPattern)
> .withPathFiltern(filter)
> .withResolveSymlinks(true)
> .build()
> .glob();
>   }
> {code}
>  
> The fix should be simple: just flip "withResolveSymlinks" to false.






[jira] [Commented] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-07-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17154940#comment-17154940
 ] 

Hudson commented on HADOOP-17079:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18423 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18423/])
HADOOP-17079. Optimize UGI#getGroups by adding UGI#getGroupsSet. (#2085) 
(github: rev f91a8ad88b00b50231f1ae3f8820a25c963bb561)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/NetworkTagMappingJsonManager.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterUserMappings.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/security/GroupsService.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestRuleBasedLdapGroupsMapping.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/JavaSandboxLinuxContainerRuntime.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/NullGroupsMapping.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/SecondaryGroupExistingPlacementRule.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsMappingWithFallback.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/PeriodGroupsMapping.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMappingWithFallback.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsMapping.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/service/security/DummyGroupMapping.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterPermissionChecker.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestCompositeGroupMapping.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMAdminService.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/SimpleGroupsMapping.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/PrimaryGroupPlacementRule.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Groups.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestGroupsCaching.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/Groups.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/RuleBasedLdapGroupsMapping.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRefreshSuperUserGroupsConfiguration.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedUnixGroupsMapping.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/AccessControlList.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesAcls.java
* (edit) 

[jira] [Commented] (HADOOP-17117) Fix typos in hadoop-aws documentation

2020-07-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17153684#comment-17153684
 ] 

Hudson commented on HADOOP-17117:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18419 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18419/])
HADOOP-17117 Fix typos in hadoop-aws documentation (#2127) (github: rev 
5b1ed2113b8e938ab2ff0fef7948148cb07e0457)
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* (edit) 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committer_architecture.md


> Fix typos in hadoop-aws documentation
> -
>
> Key: HADOOP-17117
> URL: https://issues.apache.org/jira/browse/HADOOP-17117
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs/s3
>Reporter: Sebastian Nagel
>Assignee: Sebastian Nagel
>Priority: Trivial
> Fix For: 3.3.1, 3.4.0
>
>
> There are a couple of typos in the hadoop-aws documentation (markdown). I'll 
> open a PR.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17081) MetricsSystem doesn't start the sink adapters on restart

2020-07-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17152107#comment-17152107
 ] 

Hudson commented on HADOOP-17081:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18411 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18411/])
HADOOP-17081. MetricsSystem doesn't start the sink adapters on restart (github: 
rev 2f500e4635ea4347a55693b1a10a4a4465fe5fac)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestMetricsSystemImpl.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java


> MetricsSystem doesn't start the sink adapters on restart
> 
>
> Key: HADOOP-17081
> URL: https://issues.apache.org/jira/browse/HADOOP-17081
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
> Environment: NA
>Reporter: Madhusoodan
>Assignee: Madhusoodan
>Priority: Minor
> Fix For: 3.2.2, 3.3.1
>
>
> In HBase we use dynamic metrics, and when a metric is removed we have to 
> refresh the JMX beans. Since Java provides no API to do this, a hack of 
> stopping the metrics system and restarting it was used (see the comment on 
> the class 
> [https://github.com/mmpataki/hbase/blob/master/hbase-hadoop-compat/src/main/java/org/apache/hadoop/metrics2/impl/JmxCacheBuster.java])
>  
> It calls the APIs below, in this order:
>  MetricsSystem.stop
>  MetricsSystem.start
>  
> MetricsSystem.stop stops all the SinkAdapters, *but doesn't remove them from 
> the sink list* (the allSinks variable). When the metrics system is started 
> again, *it is assumed that the SinkAdapters are restarted, but they are not*, 
> due to the check at the beginning of the register function.
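The stop/start mismatch described in this issue can be reproduced with a minimal sketch. Everything here (MiniMetricsSystem, MiniSinkAdapter, the method names) is a hypothetical simplification of the pattern, not the actual Hadoop classes: stop() halts the adapters but leaves them in the sink map, and the registration guard then skips restarting them.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-ins for MetricsSystemImpl and MetricsSinkAdapter,
// reduced to the bug pattern only.
public class MiniMetricsSystem {
    static class MiniSinkAdapter {
        boolean running;
        void start() { running = true; }
        void stop()  { running = false; }
    }

    // Sinks survive stop() because stop() only halts the adapters;
    // it never removes them from this map (mirrors 'allSinks').
    private final Map<String, MiniSinkAdapter> allSinks = new LinkedHashMap<>();

    void registerSink(String name) {
        // The guard that causes the bug: an already-known name is skipped,
        // so a stopped adapter is never restarted.
        if (allSinks.containsKey(name)) {
            return;
        }
        MiniSinkAdapter sink = new MiniSinkAdapter();
        sink.start();
        allSinks.put(name, sink);
    }

    void stop() {
        // Adapters stopped, map left untouched.
        allSinks.values().forEach(MiniSinkAdapter::stop);
    }

    void start() {
        // Restart re-registers known sinks, but registerSink() short-circuits.
        allSinks.keySet().forEach(this::registerSink);
    }

    boolean isSinkRunning(String name) {
        return allSinks.get(name).running;
    }

    public static void main(String[] args) {
        MiniMetricsSystem ms = new MiniMetricsSystem();
        ms.registerSink("jmx");
        ms.stop();
        ms.start();
        // After a stop/start cycle the sink is still stopped.
        System.out.println("running after restart = " + ms.isSinkRunning("jmx"));
        // prints "running after restart = false"
    }
}
```

The fix the issue implies is either clearing the sink map in stop() or having start() restart stopped adapters regardless of the registration guard.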






[jira] [Commented] (HADOOP-17111) Replace Guava Optional with Java8+ Optional

2020-07-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17151839#comment-17151839
 ] 

Hudson commented on HADOOP-17111:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18409 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18409/])
HADOOP-17111. Replace Guava Optional with Java8+ Optional. Contributed 
(aajisaka: rev 639acb6d8921127cde3174a302f2e3d71b44f052)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStorePerf.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
* (edit) hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml


> Replace Guava Optional with Java8+ Optional
> ---
>
> Key: HADOOP-17111
> URL: https://issues.apache.org/jira/browse/HADOOP-17111
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17111.001.patch, HADOOP-17111.002.patch
>
>
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Optional' in project with mask 
> '*.java'
> Found Occurrences  (3 usages found)
> org.apache.hadoop.yarn.server.nodemanager  (2 usages found)
> DefaultContainerExecutor.java  (1 usage found)
> 71 import com.google.common.base.Optional;
> LinuxContainerExecutor.java  (1 usage found)
> 22 import com.google.common.base.Optional;
> org.apache.hadoop.yarn.server.resourcemanager.recovery  (1 usage found)
> TestZKRMStateStorePerf.java  (1 usage found)
> 21 import com.google.common.base.Optional;
> {code}
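The replacement of the imports listed above is mostly mechanical. Below is a sketch of the common Guava-to-JDK equivalents, compiled against java.util.Optional only; the Guava calls appear only in comments, and HYPOTHETICAL_VAR is a made-up environment variable for illustration.

```java
import java.util.Optional;

public class OptionalMigration {
    public static void main(String[] args) {
        // Guava: Optional.of(v)           -> JDK: Optional.of(v)
        Optional<String> present = Optional.of("value");

        // Guava: Optional.fromNullable(v) -> JDK: Optional.ofNullable(v)
        Optional<String> maybe = Optional.ofNullable(System.getenv("HYPOTHETICAL_VAR"));

        // Guava: Optional.absent()        -> JDK: Optional.empty()
        Optional<String> absent = Optional.empty();

        // Guava: opt.orNull()             -> JDK: opt.orElse(null)
        String orNull = absent.orElse(null);

        // Guava: opt.or(fallback)         -> JDK: opt.orElse(fallback)
        String orDefault = maybe.orElse("fallback");

        System.out.println(present.get() + " / " + orNull + " / " + orDefault);
    }
}
```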






[jira] [Commented] (HADOOP-17058) Support for Appendblob in abfs driver

2020-07-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17151419#comment-17151419
 ] 

Hudson commented on HADOOP-17058:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18407 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18407/])
HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver (github: rev 
d20109c171460f3312a760c1309f95b2bf61e0d3)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemE2E.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/AbfsHttpConstants.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsNetworkStatistics.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStreamContext.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsOutputStream.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/HttpQueryParams.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStreamStatistics.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFlush.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsConfigurationFieldsValidation.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/constants/TestConfigurationKeys.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsReadWriteAndSeek.java


> Support for Appendblob in abfs driver
> -
>
> Key: HADOOP-17058
> URL: https://issues.apache.org/jira/browse/HADOOP-17058
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Ishani
>Assignee: Ishani
>Priority: Major
>
> Add changes to support AppendBlob in the hadoop-azure ABFS driver.






[jira] [Commented] (HADOOP-16961) ABFS: Adding metrics to AbfsInputStream (AbfsInputStreamStatistics)

2020-07-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17150942#comment-17150942
 ] 

Hudson commented on HADOOP-16961:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18403 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18403/])
HADOOP-16961. ABFS: Adding metrics to AbfsInputStream (#2076) (github: rev 
3b5c9a90c07e6360007f3f4aa357aa665b47ca3a)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsInputStreamStatistics.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamContext.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamStatistics.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamStatisticsImpl.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsInputStreamStatistics.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java


> ABFS: Adding metrics to AbfsInputStream (AbfsInputStreamStatistics)
> ---
>
> Key: HADOOP-16961
> URL: https://issues.apache.org/jira/browse/HADOOP-16961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Mehakmeet Singh
>Priority: Major
> Fix For: 3.4.0
>
>
> Adding metrics to AbfsInputStream (AbfsInputStreamStatistics) can improve the 
> testing and diagnostics of the connector.
> Also adds some logging.






[jira] [Commented] (HADOOP-17086) ABFS: Fix the parsing errors in ABFS Driver with creation Time (being returned in ListPath)

2020-07-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17151077#comment-17151077
 ] 

Hudson commented on HADOOP-17086:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18404 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18404/])
HADOOP-17086. ABFS: Making the ListStatus response ignore unknown (github: rev 
e0cededfbd2f11919102f01f9bf3ce540ffd6e94)
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/contract/ListResultSchemaTest.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/services/ListResultEntrySchema.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/services/ListResultSchema.java


> ABFS: Fix the parsing errors in ABFS Driver with creation Time (being 
> returned in ListPath)
> ---
>
> Key: HADOOP-17086
> URL: https://issues.apache.org/jira/browse/HADOOP-17086
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Ishani
>Assignee: Bilahari T H
>Priority: Major
> Fix For: 3.3.1
>
>
> I am seeing errors while running the ABFS driver against the stg75 build in 
> canary. These are parsing errors caused by receiving creationTime in the 
> ListPath API. Here are the errors:
> RestVersion: 2020-02-10
>  mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify 
> -Dit.test=ITestAzureBlobFileSystemRenameUnicode
> [ERROR] 
> testRenameFileUsingUnicode[0](org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode)
>   Time elapsed: 852.083 s  <<< ERROR!
> Status code: -1 error code: null error message: 
> InvalidAbfsRestOperationExceptionorg.codehaus.jackson.map.exc.UnrecognizedPropertyException:
>  Unrecognized field "creationTime" (Class 
> org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema), not marked as ignorable
>  at [Source: 
> sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796; line: 1, column: 48]
>  (through reference chain: 
> org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["paths"]
> ->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"])
>     at 
> org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:273)
>     at 
> org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:188)
>     at 
> org.apache.hadoop.fs.azurebfs.services.AbfsClient.listPath(AbfsClient.java:237)
>     at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:773)
>     at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:735)
>     at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.listStatus(AzureBlobFileSystem.java:373)
>     at 
> org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode.testRenameFileUsingUnicode(ITestAzureBlobFileSystemRenameUnicode.java:92)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>     at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: 
> Unrecognized field "creationTime" (Class 
> 

[jira] [Commented] (HADOOP-17084) Update Dockerfile_aarch64 to use Bionic

2020-07-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149289#comment-17149289
 ] 

Hudson commented on HADOOP-17084:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18400 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18400/])
HADOOP-17084 Update Dockerfile_aarch64 to use Bionic (#2103). (github: rev 
6c57be48973e182f8141d166102bdc513b944900)
* (edit) dev-support/docker/Dockerfile_aarch64


> Update Dockerfile_aarch64 to use Bionic
> ---
>
> Key: HADOOP-17084
> URL: https://issues.apache.org/jira/browse/HADOOP-17084
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: RuiChen
>Assignee: zhaorenhai
>Priority: Major
> Fix For: 3.4.0
>
>
> The Dockerfile for x86 has been updated to use Ubuntu Bionic, JDK 11, and 
> other changes; we should update the Dockerfile for aarch64 to follow these 
> changes and keep the same behavior.






[jira] [Commented] (HADOOP-17090) Increase precommit job timeout from 5 hours to 20 hours

2020-07-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149269#comment-17149269
 ] 

Hudson commented on HADOOP-17090:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18399 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18399/])
HADOOP-17090. Increase precommit job timeout from 5 hours to 20 hours. (github: 
rev 4e37ad59b865d95b63d72523a083fed9beabc72b)
* (edit) Jenkinsfile


> Increase precommit job timeout from 5 hours to 20 hours
> ---
>
> Key: HADOOP-17090
> URL: https://issues.apache.org/jira/browse/HADOOP-17090
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.2.2, 2.10.1, 3.3.1, 3.4.0, 3.1.5
>
>
> Now we frequently increase the timeout for testing and undo the change before 
> committing.
> * https://github.com/apache/hadoop/pull/2026
> * https://github.com/apache/hadoop/pull/2051
> * https://github.com/apache/hadoop/pull/2012
> * https://github.com/apache/hadoop/pull/2098
> * and more...
> I'd like to increase the timeout by default to reduce the work.






[jira] [Commented] (HADOOP-17032) Handle an internal dir in viewfs having multiple children mount points pointing to different filesystems

2020-07-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149193#comment-17149193
 ] 

Hudson commented on HADOOP-17032:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18398 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18398/])
HADOOP-17032. Fix getContentSummary in ViewFileSystem to handle multiple 
(github: rev 3b8d0f803f1c6277f2c17a73cf4803ab0bd9954b)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java


> Handle an internal dir in viewfs having multiple children mount points 
> pointing to different filesystems
> 
>
> Key: HADOOP-17032
> URL: https://issues.apache.org/jira/browse/HADOOP-17032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Reporter: Abhishek Das
>Assignee: Abhishek Das
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
>
> When the viewfs mount table is configured so that multiple child 
> mount points point to different file systems, getContentSummary and 
> getStatus do not return the expected result.
> {code:java}
> mount link /a/b/ → hdfs://nn1/a/b
>  mount link /a/d/ → file:///nn2/c/d{code}
> b has two files and d has 1 file. So getContentSummary on / should return 3 
> files.
> It also fails for the following scenario:
> {code:java}
> mount link  /internalDir -> /internalDir/linternalDir2
> mount link  /internalDir -> /internalDir/linkToDir2 -> hdfs://nn1/dir2{code}
> Exception:
> {code:java}
> java.io.IOException: Internal implementation error: expected file name to be /
>  at 
> org.apache.hadoop.fs.viewfs.InternalDirOfViewFs.checkPathIsSlash(InternalDirOfViewFs.java:88)
>  at 
> org.apache.hadoop.fs.viewfs.InternalDirOfViewFs.getFileStatus(InternalDirOfViewFs.java:154)
>  at org.apache.hadoop.fs.FileSystem.getContentSummary(FileSystem.java:1684) 
> at org.apache.hadoop.fs.FileSystem.getContentSummary(FileSystem.java:1695) at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.getContentSummary(ViewFileSystem.java:918)
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystemBaseTest.testGetContentSummary(ViewFileSystemBaseTest.java:1106){code}
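The expected aggregation behavior for the first scenario can be sketched by summing per-mount summaries rather than assuming a single target filesystem. MountSummary and Summary below are hypothetical stand-ins, not the ViewFileSystem API:

```java
import java.util.Map;

// Hypothetical sketch: an internal viewfs dir aggregates the content
// summaries of all child mount points, whatever filesystem each targets.
public class MountSummary {
    record Summary(long fileCount, long dirCount) {
        Summary plus(Summary o) {
            return new Summary(fileCount + o.fileCount, dirCount + o.dirCount);
        }
    }

    static Summary summarize(Map<String, Summary> childMounts) {
        // Sum over every child mount instead of delegating to one filesystem.
        return childMounts.values().stream()
                .reduce(new Summary(0, 0), Summary::plus);
    }

    public static void main(String[] args) {
        // /a/b -> hdfs://nn1/a/b (2 files), /a/d -> file:///nn2/c/d (1 file)
        Summary total = summarize(Map.of(
                "/a/b", new Summary(2, 1),
                "/a/d", new Summary(1, 1)));
        // 2 + 1 = 3 files, matching the expectation in the description.
        System.out.println("files = " + total.fileCount());
    }
}
```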






[jira] [Commented] (HADOOP-16798) job commit failure in S3A MR magic committer test

2020-06-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148490#comment-17148490
 ] 

Hudson commented on HADOOP-16798:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18392 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18392/])
HADOOP-16798. S3A Committer thread pool shutdown problems. (#1963) (github: rev 
4249c04d454ca82aadeed152ab777e93474754ab)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/staging/StagingCommitter.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/Tasks.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/TestTasks.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/staging/PartitionedStagingCommitter.java


> job commit failure in S3A MR magic committer test
> -
>
> Key: HADOOP-16798
> URL: https://issues.apache.org/jira/browse/HADOOP-16798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.1
>
> Attachments: stdout
>
>
> failure in 
> {code}
> ITestS3ACommitterMRJob.test_200_execute:304->Assert.fail:88 Job 
> job_1578669113137_0003 failed in state FAILED with cause Job commit failed: 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@6e894de2 rejected from 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor@225eed53[Terminated,
>  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
> {code}
> The stack implies the thread pool rejected it, but toString says 
> "Terminated". A race condition?
> *update 2020-04-22*: it's caused when a task is aborted in the AM: the 
> threadpool is disposed of, and while that is shutting down in one thread, 
> task commit is initiated using the same thread pool. When the task 
> committer's destroy operation times out, it kills all the active uploads.
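The failure mode itself is easy to demonstrate: submitting work to an ExecutorService after shutdown() raises RejectedExecutionException, matching the terminated-pool rejection in the log above. This is a minimal JDK-only sketch, not the committer code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class ShutdownRace {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.shutdown(); // one thread disposes of the pool...

        try {
            // ...while another initiates commit work on the same pool.
            pool.submit(() -> System.out.println("commit task"));
        } catch (RejectedExecutionException e) {
            // Matches the failure: the task is rejected by a shut-down pool.
            System.out.println("rejected: " + e.getClass().getSimpleName());
        }
    }
}
```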






[jira] [Commented] (HADOOP-17089) WASB: Update azure-storage-java SDK

2020-06-24 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144661#comment-17144661
 ] 

Hudson commented on HADOOP-17089:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18378 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18378/])
HADOOP-17089: WASB: Update azure-storage-java SDK Contributed by Thomas (tmarq: 
rev 4b5b54c73f2fd9146237087a59453e2b5d70f9ed)
* (edit) hadoop-project/pom.xml
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java


> WASB: Update azure-storage-java SDK
> ---
>
> Key: HADOOP-17089
> URL: https://issues.apache.org/jira/browse/HADOOP-17089
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0, 3.2.0
>Reporter: Thomas Marqardt
>Assignee: Thomas Marqardt
>Priority: Major
> Fix For: 3.3.1
>
>
> WASB depends on the Azure Storage Java SDK.  There is a concurrency bug in 
> the Azure Storage Java SDK that can cause the results of a list blobs 
> operation to appear empty.  This causes the Filesystem listStatus and similar 
> APIs to return empty results.  This has been seen in Spark workloads when 
> jobs use more than one executor core. 
> See [https://github.com/Azure/azure-storage-java/pull/546] for details on the 
> bug in the Azure Storage SDK.






[jira] [Commented] (HADOOP-17068) client fails forever when namenode ipaddr changed

2020-06-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142781#comment-17142781
 ] 

Hudson commented on HADOOP-17068:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18375 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18375/])
HADOOP-17068. Client fails forever when namenode ipaddr changed. (hexiaoqiao: 
rev fa14e4bc001e28d9912e8d985d09bab75aedb87c)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java


> client fails forever when namenode ipaddr changed
> -
>
> Key: HADOOP-17068
> URL: https://issues.apache.org/jira/browse/HADOOP-17068
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Sean Chow
>Assignee: Sean Chow
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-17068.001.patch, HDFS-15390.01.patch
>
>
> For a machine replacement, I replace my standby namenode with a new ipaddr 
> and keep the same hostname, and also update the client's hosts file so the 
> name resolves correctly.
> When I run a failover to transition to the new namenode (let's say nn2), the 
> client fails to read or write forever until it is restarted.
> That puts the YARN NodeManager in a sick state; even new tasks encounter 
> this exception, until every NodeManager is restarted.
>  
> {code:java}
> 20/06/02 15:12:25 WARN ipc.Client: Address change detected. Old: 
> nn2-192-168-1-100/192.168.1.100:9000 New: nn2-192-168-1-100/192.168.1.200:9000
> 20/06/02 15:12:25 DEBUG ipc.Client: closing ipc connection to 
> nn2-192-168-1-100/192.168.1.200:9000: Connection refused
> java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:608)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1517)
> at org.apache.hadoop.ipc.Client.call(Client.java:1440)
> at org.apache.hadoop.ipc.Client.call(Client.java:1401)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
> at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
> at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:193)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> {code}
>  
> We can see the client has {{Address change detected}}, but it still fails. I 
> found out that's because when {{updateAddress()}} returns true, 
> {{handleConnectionFailure()}} throws an exception that breaks the next retry 
> with the right ipaddr.
> Client.java: setupConnection()
> {code:java}
> } catch (ConnectTimeoutException toe) {
>   /* Check for an address change and update the local reference.
>* Reset the failure counter if the address was changed
>*/
>   if (updateAddress()) {
> timeoutFailures = ioFailures = 0;
>   }
>   handleConnectionTimeout(timeoutFailures++,
>   maxRetriesOnSocketTimeouts, toe);
> } catch (IOException ie) {
>   if (updateAddress()) {
> timeoutFailures = ioFailures = 0;
>   }
> // because the namenode ip changed in updateAddress(), the old namenode
> // ipaddress cannot be accessed now.
> // handleConnectionFailure will throw an exception, so the next retry never
> // has a chance to use the right server updated in updateAddress()
>   handleConnectionFailure(ioFailures++, ie);
> }
> {code}
>  
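The retry break described above can be sketched as follows. This is a hedged, self-contained illustration — the class, fields, and simulated `connect()` are invented for the example and are much simpler than Hadoop's real `ipc.Client` — showing the idea of letting a detected address change trigger an immediate retry instead of falling through to the failure handler:

```java
// Minimal sketch (invented names, not Hadoop's actual code): when
// updateAddress() detects a DNS change, retry at once with the refreshed
// address rather than letting the failure handler throw.
import java.io.IOException;

public class AddressChangeRetrySketch {
    private String currentAddress = "192.168.1.100:9000"; // stale cached address
    private final String dnsAddress = "192.168.1.200:9000"; // what DNS resolves now
    int attempts = 0;

    // Simulates Client.updateAddress(): refresh from DNS, report if it changed.
    private boolean updateAddress() {
        if (!currentAddress.equals(dnsAddress)) {
            currentAddress = dnsAddress;
            return true; // failure counters would be reset here
        }
        return false;
    }

    // Simulates a connect attempt: only the current DNS address accepts.
    private void connect(String address) throws IOException {
        attempts++;
        if (!address.equals(dnsAddress)) {
            throw new IOException("Connection refused: " + address);
        }
    }

    String setupConnection() throws IOException {
        while (true) {
            try {
                connect(currentAddress);
                return currentAddress;
            } catch (IOException ie) {
                if (updateAddress()) {
                    continue; // address changed: retry with the new one immediately
                }
                throw ie; // no address change: surface the failure as before
            }
        }
    }

    public static void main(String[] args) throws IOException {
        AddressChangeRetrySketch client = new AddressChangeRetrySketch();
        String addr = client.setupConnection();
        System.out.println("connected to " + addr + " after " + client.attempts + " attempts");
    }
}
```

With this shape, the second attempt reaches the refreshed address instead of aborting inside the failure handler.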



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17009) Embrace Immutability of Java Collections

2020-06-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17140724#comment-17140724
 ] 

Hudson commented on HADOOP-17009:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18367 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18367/])
HADOOP-17009: Embrace Immutability of Java Collections (github: rev 
100ec8e8709e79a6729aab0dac15e080dd747ee5)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/service/AbstractService.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/CompositeGroupsMapping.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAAdmin.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/lib/StaticUserWebFilter.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/CachedDNSToSwitchMapping.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/service/launcher/ServiceLauncher.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/HttpExceptionUtils.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/TableMapping.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Stat.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/MBeans.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/service/CompositeService.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/NetgroupCache.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java


> Embrace Immutability of Java Collections
> 
>
> Key: HADOOP-17009
> URL: https://issues.apache.org/jira/browse/HADOOP-17009
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.4.0
>
>







[jira] [Commented] (HADOOP-17065) Adding Network Counters in ABFS

2020-06-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17140543#comment-17140543
 ] 

Hudson commented on HADOOP-17065:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18366 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18366/])
HADOOP-17065. Add Network Counters to ABFS (#2056) (github: rev 
3472c3efc0014237d0cc4d9a989393b8513d2ab6)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingIntercept.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsStatistic.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClient.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsNetworkStatistics.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsStatistics.java
* (delete) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsInstrumentation.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientThrottlingAnalyzer.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStatistics.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsCountersImpl.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsNetworkStatistics.java


> Adding Network Counters in ABFS
> ---
>
> Key: HADOOP-17065
> URL: https://issues.apache.org/jira/browse/HADOOP-17065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>
> Network Counters to be added in ABFS:
> |CONNECTIONS_MADE|Number of times a connection was made to Azure Data Lake|
> |SEND_REQUESTS|Number of send requests|
> |GET_RESPONSE|Number of responses received|
> |BYTES_SEND|Number of bytes sent|
> |BYTES_RECEIVED|Number of bytes received|
> |READ_THROTTLE|Number of times a read operation was throttled|
> |WRITE_THROTTLE|Number of times a write operation was throttled|
> Proposal:
> * Add these counters as part of the AbfsStatistic enum already introduced in
> HADOOP-17016.
> * Increment the counters across ABFS network services.
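One way the counters above could be declared is sketched below. This is illustrative only — the enum name `NetworkStatisticSketch` and the stat-name strings are invented here, and the real AbfsStatistic enum from HADOOP-17016 has a different shape:

```java
// Hypothetical enum of the proposed network counters, in the spirit of
// AbfsStatistic (names and strings invented for illustration).
public enum NetworkStatisticSketch {
    CONNECTIONS_MADE("connections_made"),
    SEND_REQUESTS("send_requests"),
    GET_RESPONSE("get_response"),
    BYTES_SEND("bytes_send"),
    BYTES_RECEIVED("bytes_received"),
    READ_THROTTLE("read_throttle"),
    WRITE_THROTTLE("write_throttle");

    private final String statName;

    NetworkStatisticSketch(String statName) {
        this.statName = statName;
    }

    public String getStatName() {
        return statName;
    }
}
```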






[jira] [Commented] (HADOOP-16888) [JDK11] Support JDK11 in the precommit job

2020-06-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17140211#comment-17140211
 ] 

Hudson commented on HADOOP-16888:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18364 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18364/])
HADOOP-16888. [JDK11] Support JDK11 in the precommit job (#2012) (github: rev 
9821b94c946b5102f34e39f58493d31a0bb93547)
* (edit) dev-support/docker/Dockerfile
* (edit) Jenkinsfile


> [JDK11] Support JDK11 in the precommit job
> --
>
> Key: HADOOP-16888
> URL: https://issues.apache.org/jira/browse/HADOOP-16888
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.4.0
>
>
> Install openjdk-11 in the Dockerfile and use the Yetus multijdk plugin to run
> the precommit job on both JDK 8 and JDK 11.






[jira] [Commented] (HADOOP-17076) ABFS: Delegation SAS Generator Updates

2020-06-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138955#comment-17138955
 ] 

Hudson commented on HADOOP-17076:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18360 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18360/])
HADOOP-17076: ABFS: Delegation SAS Generator Updates Contributed by (tmarq: rev 
caf3995ac2bbc3241896babb9a607272462f70ca)
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/SASGenerator.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/ServiceSASGenerator.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/MockDelegationSASTokenProvider.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/DelegationSASGenerator.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/extensions/SASTokenProvider.java


> ABFS: Delegation SAS Generator Updates
> --
>
> Key: HADOOP-17076
> URL: https://issues.apache.org/jira/browse/HADOOP-17076
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Thomas Marqardt
>Assignee: Thomas Marqardt
>Priority: Minor
> Fix For: 3.3.0
>
>
> # The authentication version in the service has been updated from Dec19 to
> Feb20, so the client needs to be updated.
>  # Add support and test cases for getXattr and setXAttr.
>  # Update DelegationSASGenerator and related to use Duration instead of int 
> for time periods.
>  # Cleanup DelegationSASGenerator switch/case statement that maps operations 
> to permissions.
>  # Cleanup SASGenerator classes to use String.equals instead of ==.






[jira] [Commented] (HADOOP-17020) Improve RawFileSystem Performance

2020-06-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138570#comment-17138570
 ] 

Hudson commented on HADOOP-17020:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18358 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18358/])
HADOOP-17020. Improve RawFileSystem Performance (#2063) (github: rev 
2bfb22840acc9f96a8bdec1ef82da37d06937da8)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java


> Improve RawFileSystem Performance
> -
>
> Key: HADOOP-17020
> URL: https://issues.apache.org/jira/browse/HADOOP-17020
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Rajesh Balamohan
>Assignee: Mehakmeet Singh
>Priority: Minor
> Fix For: 3.3.1
>
> Attachments: HADOOP-17020.1.patch, Screenshot 2020-04-29 at 5.24.53 
> PM.png, Screenshot 2020-05-01 at 7.12.06 AM.png
>
>
> Improving RawFileSystem performance.
> Changes:
> * RawLocalFileSystem could cache the default block size locally to avoid a
> synchronization bottleneck on the Configuration object.
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java#L666]
> * Override exists() as an optimization.
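The two optimizations above can be sketched as follows. This is a hedged illustration under invented names (`LocalFsSketch` is not the actual RawLocalFileSystem patch): cache the block size once at construction, and answer `exists()` with a direct file probe.

```java
// Sketch (invented names): avoid repeated Configuration lookups and avoid
// building a full FileStatus just to answer a boolean existence check.
import java.io.File;

public class LocalFsSketch {
    private final long defaultBlockSize;

    public LocalFsSketch(long configuredBlockSize) {
        // Read the (synchronized) Configuration once at construction time
        // rather than on every getDefaultBlockSize() call.
        this.defaultBlockSize = configuredBlockSize;
    }

    public long getDefaultBlockSize() {
        return defaultBlockSize; // hot path: no Configuration lookup
    }

    // A direct probe avoids constructing a FileStatus (and throwing
    // FileNotFoundException) just to answer a yes/no question.
    public boolean exists(File f) {
        return f.exists();
    }

    public static void main(String[] args) {
        LocalFsSketch fs = new LocalFsSketch(32 * 1024 * 1024L);
        System.out.println(fs.getDefaultBlockSize());
        System.out.println(fs.exists(new File(".")));
    }
}
```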






[jira] [Commented] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name

2020-06-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138258#comment-17138258
 ] 

Hudson commented on HADOOP-9851:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18356 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18356/])
HADOOP-9851. dfs -chown does not like "+" plus sign in user name. (ayushsaxena: 
rev c8ed33cd2a4b92618ba2bd7d2cd6cc7961690e44)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsShellPermissions.java


> dfs -chown does not like "+" plus sign in user name
> ---
>
> Key: HADOOP-9851
> URL: https://issues.apache.org/jira/browse/HADOOP-9851
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.5-alpha
>Reporter: Marc Villacorta
>Assignee: Andras Bokor
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: HADOOP-9851.01.patch, HADOOP-9851.02.patch
>
>
> I intend to set user and group:
> *User:* _MYCOMPANY+marc.villacorta_
> *Group:* hadoop
> where _'+'_ is what we use as a winbind separator.
> And this is what I get:
> {code:none}
> sudo -u hdfs hadoop fs -touchz /tmp/test.txt
> sudo -u hdfs hadoop fs -chown MYCOMPANY+marc.villacorta:hadoop /tmp/test.txt
> -chown: 'MYCOMPANY+marc.villacorta:hadoop' does not match expected pattern 
> for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> {code}
> I am using version: 2.0.0-cdh4.3.0
> Quote 
> [source|http://h30097.www3.hp.com/docs/iass/OSIS_62/MAN/MAN8/0044.HTM]:
> {quote}
> winbind separator
>The winbind separator option allows you to specify how NT domain names
>and user names are combined into unix user names when presented to
>users. By default, winbindd will use the traditional '\' separator so
>that the unix user names look like DOMAIN\username. In some cases this
>separator character may cause problems as the '\' character has
>special meaning in unix shells. In that case you can use the winbind
>separator option to specify an alternative separator character. Good
>alternatives may be '/' (although that conflicts with the unix
>directory separator) or a '+' character. The '+' character appears to
>be the best choice for 100% compatibility with existing unix
>utilities, but may be an aesthetically bad choice depending on your
>taste.
>Default: winbind separator = \
>Example: winbind separator = +
> {quote}
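The fix implied by the report is a pattern that admits '+' in owner and group names. The sketch below is illustrative only — Hadoop's actual pattern lives in FsShellPermissions and may differ — but it shows that adding '+' to the allowed character class accepts winbind-style names:

```java
// Illustrative owner[:group] pattern (not Hadoop's actual regex):
// word characters plus '.', '@', '+', '-' for both owner and group.
import java.util.regex.Pattern;

public class ChownPatternSketch {
    static final Pattern OWNER_GROUP =
        Pattern.compile("^\\s*(?<owner>[\\w.@+-]+)?(:(?<group>[\\w.@+-]*))?\\s*$");

    public static void main(String[] args) {
        // Both winbind-style and plain names match.
        System.out.println(OWNER_GROUP.matcher("MYCOMPANY+marc.villacorta:hadoop").matches());
        System.out.println(OWNER_GROUP.matcher("hdfs:hadoop").matches());
    }
}
```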






[jira] [Commented] (HADOOP-17046) Support downstreams' existing Hadoop-rpc implementations using non-shaded protobuf classes.

2020-06-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17134441#comment-17134441
 ] 

Hudson commented on HADOOP-17046:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18348 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18348/])
HADOOP-17046. Support downstreams' existing Hadoop-rpc implementations (github: 
rev e15408477017753ea1a0896c8f54daeadee40d10)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestProtoBufRpc.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcWritable.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/impl/pb/client/SCMAdminProtocolPBClientImpl.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
* (add) 
hadoop-common-project/hadoop-common/src/main/proto/ProtobufRpcEngine2.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/impl/pb/client/ResourceManagerAdministrationProtocolPBClientImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/factories/impl/pb/RpcServerFactoryPBImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MockNamenode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/TestRPC.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotTestHelper.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestOpportunisticContainerAllocatorAMService.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/server/HSAdminServer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/client/ClientSCMProtocolPBClientImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClient.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestProtoBufRpcServerHandoff.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRpcBase.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/client/ApplicationClientProtocolPBClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/impl/pb/client/DistributedSchedulingAMProtocolPBClientImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeLifelineProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/client/ApplicationMasterProtocolPBClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/impl/pb/client/ResourceTrackerPBClientImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* (edit) 

[jira] [Commented] (HADOOP-17060) listStatus and getFileStatus behave inconsistent in the case of ViewFs implementation for isDirectory

2020-06-10 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17132761#comment-17132761
 ] 

Hudson commented on HADOOP-17060:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18345 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18345/])
HADOOP-17060. Clarify listStatus and getFileStatus behaviors (github: rev 
93b121a9717bb4ef5240fda877ebb5275f6446b4)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/fs/Hdfs.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java


> listStatus and getFileStatus behave inconsistent in the case of ViewFs 
> implementation for isDirectory
> -
>
> Key: HADOOP-17060
> URL: https://issues.apache.org/jira/browse/HADOOP-17060
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Srinivasu Majeti
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: viewfs
> Fix For: 3.4.0
>
>
> The ViewFs implementations of listStatus and getFileStatus return inconsistent
> isDirectory values for the same element: listStatus reports isDirectory as
> false for all softlinks, while getFileStatus reports it as true.
> {code:java}
> [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop 
> classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus "/"
> FileStatus of viewfs://c3121/testme21may isDirectory:false
> FileStatus of viewfs://c3121/tmp isDirectory:false
> FileStatus of viewfs://c3121/foo isDirectory:false
> FileStatus of viewfs://c3121/tmp21may isDirectory:false
> FileStatus of viewfs://c3121/testme isDirectory:false
> FileStatus of viewfs://c3121/testme2 isDirectory:false <--- returns false
> FileStatus of / isDirectory:true
> [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop 
> classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus /testme2
> FileStatus of viewfs://c3121/testme2/dist-copynativelibs.sh isDirectory:false
> FileStatus of viewfs://c3121/testme2/newfolder isDirectory:true
> FileStatus of /testme2 isDirectory:true <--- returns true
> [hdfs@c3121-node2 ~]$ {code}






[jira] [Commented] (HADOOP-17050) S3A to support additional token issuers

2020-06-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129302#comment-17129302
 ] 

Hudson commented on HADOOP-17050:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18341 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18341/])
HADOOP-17050 S3A to support additional token issuers (github: rev 
ac5d899d40d7b50ba73c400a708f59fb128e6e30)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/S3ADelegationTokens.java


> S3A to support additional token issuers
> ---
>
> Key: HADOOP-17050
> URL: https://issues.apache.org/jira/browse/HADOOP-17050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.1
>
>
> In 
> {{org.apache.hadoop.fs.s3a.auth.delegation.AbstractDelegationTokenBinding}} 
> the {{createDelegationToken}} should return a list of tokens.
> With this functionality, the {{AbstractDelegationTokenBinding}} can get two 
> different tokens at the same time.
> {{AbstractDelegationTokenBinding.TokenSecretManager}} should be extended to
> retrieve secrets and look up delegation tokens (using Hadoop's public
> SecretManager API).






[jira] [Commented] (HADOOP-17047) TODO comments exist in trunk while the related issues are already fixed.

2020-06-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17128545#comment-17128545
 ] 

Hudson commented on HADOOP-17047:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18338 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18338/])
HADOOP-17047. TODO comment exist in trunk while related issue (liuml07: rev 
0c25131ca430fcd6bf0f2c77dc01f027b92a9f4f)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java


> TODO comments exist in trunk while the related issues are already fixed.
> 
>
> Key: HADOOP-17047
> URL: https://issues.apache.org/jira/browse/HADOOP-17047
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Rungroj Maipradit
>Assignee: Rungroj Maipradit
>Priority: Trivial
> Fix For: 2.9.3, 3.2.2, 2.10.1, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HADOOP-17047.001.patch, HADOOP-17047.001.patch, 
> HADOOP-17047.002.patch, HADOOP-17047.003.patch
>
>
> In a research project, we analyzed the Hadoop source code looking for comments
> with on-hold SATDs (self-admitted technical debt) that could already be fixed.
> An on-hold SATD is a TODO/FIXME comment blocked by an issue. If the blocking
> issue is already resolved, the related TODO can be implemented (or sometimes
> it is already implemented, but the comment is left in the code, causing
> confusion). As we found a few instances of these in Hadoop, we decided to
> collect them in a ticket, so they are documented and can be addressed sooner
> or later.
> A list of code comments that mention already closed issues.
>  * A code comment suggests deprecating the setJobConf method along with the
> mapred package (HADOOP-1230). HADOOP-1230 was closed long ago, but the method
> is still not annotated as deprecated.
> {code:java}
>  /**
>* This code is to support backward compatibility and break the compile  
>* time dependency of core on mapred.
>* This should be made deprecated along with the mapred package 
> HADOOP-1230. 
>* Should be removed when mapred package is removed.
>*/ {code}
> Comment location: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java#L88]
>  * A comment mentions that the return type of the getDefaultFileSystem method 
> should be changed to AFS when HADOOP-6223 is completed.
>  Indeed, this change was done in the related commit of HADOOP-6223: 
> ([https://github.com/apache/hadoop/commit/3f371a0a644181b204111ee4e12c995fc7b5e5f5#diff-cd86a2b9ce3efd2232c2ace0e9084508L395)]
>  Thus, the comment could be removed.
> {code:java}
> @InterfaceStability.Unstable /* return type will change to AFS once
> HADOOP-6223 is completed */
> {code}
> Comment location: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java#L512]






[jira] [Commented] (HADOOP-17059) ArrayIndexOfboundsException in ViewFileSystem#listStatus

2020-06-08 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17128495#comment-17128495
 ] 

Hudson commented on HADOOP-17059:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18337 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18337/])
HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. 
(liuml07: rev 9f242c215e1969ffec2fa2e24e65edc712097641)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java


> ArrayIndexOfboundsException in ViewFileSystem#listStatus
> 
>
> Key: HADOOP-17059
> URL: https://issues.apache.org/jira/browse/HADOOP-17059
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HADOOP-17059.001.patch
>
>
> In ViewFileSystem#listStatus we read the UGI's group names; if no group names
> exist, this throws an ArrayIndexOutOfBoundsException:
> {code:java}
> else {
>   result[i++] = new FileStatus(0, true, 0, 0,
> creationTime, creationTime, PERMISSION_555,
> ugi.getShortUserName(), ugi.getGroupNames()[0],
> new Path(inode.fullPath).makeQualified(
> myUri, null));
> } {code}
>  
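A defensive version of the group lookup can be sketched as below. The helper name and fallback choice are invented for illustration — the actual patch may differ — but the guard shows how to avoid indexing `getGroupNames()[0]` unconditionally:

```java
// Hypothetical fix sketch: fall back to the user name when the UGI reports
// no groups, instead of blindly indexing [0].
public class GroupNameSketch {
    static String primaryGroup(String userName, String[] groupNames) {
        // Guard against a null or empty group list to avoid the AIOBE.
        return (groupNames != null && groupNames.length > 0) ? groupNames[0] : userName;
    }

    public static void main(String[] args) {
        System.out.println(primaryGroup("hdfs", new String[] {"hadoop"})); // hadoop
        System.out.println(primaryGroup("hdfs", new String[0]));           // hdfs
    }
}
```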






[jira] [Commented] (HADOOP-17029) ViewFS does not return correct user/group and ACL

2020-06-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17127129#comment-17127129
 ] 

Hudson commented on HADOOP-17029:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18333 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18333/])
HADOOP-17029. Return correct permission and owner for listing on (github: rev 
e7dd02768b658b2a1f216fbedc65938d9b6ca6e9)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewfsFileStatus.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java


> ViewFS does not return correct user/group and ACL
> -
>
> Key: HADOOP-17029
> URL: https://issues.apache.org/jira/browse/HADOOP-17029
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Reporter: Abhishek Das
>Assignee: Abhishek Das
>Priority: Major
>
> When doing ls on a mount point parent, the returned user/group and ACL are
> incorrect: the user and group are always shown as the current user, with an
> arbitrary ACL. This could mislead any application depending on this API.
> cc [~cliang] [~virajith] 






[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17125644#comment-17125644
 ] 

Hudson commented on HADOOP-17056:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18326 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18326/])
HADOOP-17056. Addendum patch: Fix typo (aajisaka: rev 
5157118bd7f3448949da885e323c163828c35aee)
* (edit) dev-support/docker/Dockerfile
* (edit) dev-support/docker/Dockerfile_aarch64


> shelldoc fails in hadoop-common
> ---
>
> Key: HADOOP-17056
> URL: https://issues.apache.org/jira/browse/HADOOP-17056
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: 2040.02.patch, 2040.03.patch, 2040.patch, 
> HADOOP-17056-addendum.01.patch, HADOOP-17056-test-01.patch, 
> HADOOP-17056-test-02.patch, HADOOP-17056-test-03.patch, HADOOP-17056.01.patch
>
>
> {noformat}
> [INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
> > ERROR: yetus-dl: gpg unable to import
> > /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/KEYS_YETUS
> > [INFO]
> > 
> > [INFO] BUILD FAILURE
> > [INFO]
> > 
> > [INFO] Total time:  9.377 s
> > [INFO] Finished at: 2020-05-28T17:37:41Z
> > [INFO]
> > 
> > [ERROR] Failed to execute goal
> > org.codehaus.mojo:exec-maven-plugin:1.3.1:exec (shelldocs) on project
> > hadoop-common: Command execution failed. Process exited with an error: 1
> > (Exit value: 1) -> [Help 1]
> > [ERROR]
> > [ERROR] To see the full stack trace of the errors, re-run Maven with the
> > -e switch.
> > [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> > [ERROR]
> > [ERROR] For more information about the errors and possible solutions,
> > please read the following articles:
> > [ERROR] [Help 1]
> > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {noformat}
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/16957/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt
> * 
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/155/artifact/out/patch-mvnsite-root.txt
> * 
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/157/artifact/out/patch-mvnsite-root.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17062) Fix shelldocs path in Jenkinsfile

2020-06-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17125336#comment-17125336
 ] 

Hudson commented on HADOOP-17062:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18325 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18325/])
HADOOP-17062. Fix shelldocs path in Jenkinsfile (#2049) (github: rev 
704409d53bf7ebf717a3c2e988ede80f623bbad3)
* (edit) Jenkinsfile


> Fix shelldocs path in Jenkinsfile
> -
>
> Key: HADOOP-17062
> URL: https://issues.apache.org/jira/browse/HADOOP-17062
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 2.9.3, 3.2.2, 2.10.1, 3.3.1, 3.4.0, 3.1.5
>
>
> Shelldocs check is not enabled in the precommit jobs.
> |0|shelldocs|0m 1s|Shelldocs was not available.|
> Console log 
> https://builds.apache.org/job/hadoop-multibranch/job/PR-2045/1/console
> {noformat}
> WARNING: shellcheck needs UTF-8 locale support. Forcing C.UTF-8.
> executable '/testptch/hadoop/dev-support/bin/shelldocs' for 'shelldocs' does 
> not exist.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16568) S3A FullCredentialsTokenBinding fails if local credentials are unset

2020-06-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17125100#comment-17125100
 ] 

Hudson commented on HADOOP-16568:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18324 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18324/])
HADOOP-16568. S3A FullCredentialsTokenBinding fails if local credentials 
(github: rev 40d63e02f04fb7477e25dd8ef4533da27a4229e3)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/FullCredentialsTokenBinding.java


> S3A FullCredentialsTokenBinding fails if local credentials are unset
> 
>
> Key: HADOOP-16568
> URL: https://issues.apache.org/jira/browse/HADOOP-16568
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.1
>
>
> Not sure how this slipped by the automated tests, but it is happening on my 
> CLI.
> # FullCredentialsTokenBinding fails on startup if there are no AWS keys in 
> the auth chain
> # because it tries to load them in serviceStart, not deployUnbonded
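The direction of the fix can be sketched with a lazy-loading pattern. The class and method names below are hypothetical, not the real S3A delegation token API: the idea is that the binding should not touch the credential chain at service start (where missing keys would fail even for clients that only receive tokens), but only when a token actually has to be issued.

```java
import java.util.Optional;
import java.util.function.Supplier;

public class LazyCredentialBinding {
    private final Supplier<Optional<String>> credentialSource;
    private Optional<String> credentials;   // resolved lazily

    LazyCredentialBinding(Supplier<Optional<String>> source) {
        this.credentialSource = source;
    }

    // Service start no longer looks up credentials eagerly.
    void serviceStart() { /* nothing credential-related here any more */ }

    // Credentials are resolved only when a token must be created.
    String createToken() {
        if (credentials == null) {
            credentials = credentialSource.get();
        }
        return credentials
            .map(c -> "token-from:" + c)
            .orElseThrow(() -> new IllegalStateException("no AWS credentials"));
    }

    public static void main(String[] args) {
        // With no local credentials, startup still succeeds.
        LazyCredentialBinding b = new LazyCredentialBinding(Optional::empty);
        b.serviceStart();
        System.out.println("started without credentials");
    }
}
```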



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14566) Add seek support for SFTP FileSystem

2020-06-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124854#comment-17124854
 ] 

Hudson commented on HADOOP-14566:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18323 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18323/])
HADOOP-14566. Add seek support for SFTP FileSystem. (#1999) (github: rev 
97c98ce531ccb27581cbb10260d7307b0ccd199c)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractFSContractTestBase.java
* (add) hadoop-common-project/hadoop-common/src/test/resources/contract/sftp.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/sftp/SFTPFileSystem.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/sftp/SFTPContract.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractFSContract.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/sftp/SFTPInputStream.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/sftp/TestSFTPContractSeek.java


> Add seek support for SFTP FileSystem
> 
>
> Key: HADOOP-14566
> URL: https://issues.apache.org/jira/browse/HADOOP-14566
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Azhagu Selvan SP
>Assignee: Mikhail Pryakhin
>Priority: Minor
> Fix For: 3.3.1
>
> Attachments: HADOOP-14566.001.patch, HADOOP-14566.patch
>
>
> This patch adds a seek() implementation for the SFTP FileSystem and a unit 
> test for it
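A common way to retrofit seek() onto a stream-oriented protocol like SFTP is to close the current stream and reopen the remote file at the requested offset. The sketch below illustrates that pattern only; the types are hypothetical stand-ins (a byte array plays the remote file), not the real SFTPInputStream.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class SeekableRemoteStream {
    private final byte[] remoteFile;   // stands in for the server-side file
    private InputStream in;
    private long pos;

    SeekableRemoteStream(byte[] remoteFile) {
        this.remoteFile = remoteFile;
        this.in = open(0);
    }

    // "Reopen" the remote file at an offset, as an SFTP client would.
    private InputStream open(long offset) {
        ByteArrayInputStream s = new ByteArrayInputStream(remoteFile);
        s.skip(offset);
        return s;
    }

    void seek(long target) throws IOException {
        if (target < 0 || target > remoteFile.length) {
            throw new IOException("seek out of range: " + target);
        }
        in.close();
        in = open(target);   // discard the old stream, reopen at new position
        pos = target;
    }

    int read() throws IOException {
        int b = in.read();
        if (b >= 0) pos++;
        return b;
    }

    long getPos() { return pos; }

    public static void main(String[] args) throws IOException {
        SeekableRemoteStream s = new SeekableRemoteStream("hello".getBytes());
        s.seek(3);
        System.out.println((char) s.read()); // 'l'
    }
}
```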



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124790#comment-17124790
 ] 

Hudson commented on HADOOP-17056:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18322 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18322/])
HADOOP-17056. shelldoc fails in hadoop-common. (#2045) (github: rev 
9c290c08db4361de29f392b0569312c2623b8321)
* (edit) dev-support/docker/Dockerfile_aarch64
* (edit) dev-support/bin/yetus-wrapper
* (edit) dev-support/docker/Dockerfile


> shelldoc fails in hadoop-common
> ---
>
> Key: HADOOP-17056
> URL: https://issues.apache.org/jira/browse/HADOOP-17056
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: 2040.02.patch, 2040.03.patch, 2040.patch, 
> HADOOP-17056-test-01.patch, HADOOP-17056-test-02.patch, 
> HADOOP-17056-test-03.patch, HADOOP-17056.01.patch
>
>
> {noformat}
> [INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
> > ERROR: yetus-dl: gpg unable to import
> > /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/KEYS_YETUS
> > [INFO]
> > 
> > [INFO] BUILD FAILURE
> > [INFO]
> > 
> > [INFO] Total time:  9.377 s
> > [INFO] Finished at: 2020-05-28T17:37:41Z
> > [INFO]
> > 
> > [ERROR] Failed to execute goal
> > org.codehaus.mojo:exec-maven-plugin:1.3.1:exec (shelldocs) on project
> > hadoop-common: Command execution failed. Process exited with an error: 1
> > (Exit value: 1) -> [Help 1]
> > [ERROR]
> > [ERROR] To see the full stack trace of the errors, re-run Maven with the
> > -e switch.
> > [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> > [ERROR]
> > [ERROR] For more information about the errors and possible solutions,
> > please read the following articles:
> > [ERROR] [Help 1]
> > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {noformat}
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/16957/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt
> * 
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/155/artifact/out/patch-mvnsite-root.txt
> * 
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/157/artifact/out/patch-mvnsite-root.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16828) Zookeeper Delegation Token Manager fetch sequence number by batch

2020-06-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124241#comment-17124241
 ] 

Hudson commented on HADOOP-16828:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18319 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18319/])
HADOOP-16828. Zookeeper Delegation Token Manager fetch sequence number (xyao: 
rev 6288e15118fab65a9a1452898e639313c6996769)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/TestZKDelegationTokenSecretManager.java


> Zookeeper Delegation Token Manager fetch sequence number by batch
> -
>
> Key: HADOOP-16828
> URL: https://issues.apache.org/jira/browse/HADOOP-16828
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-16828.001.patch, HADOOP-16828.002.patch, Screen 
> Shot 2020-01-25 at 2.25.06 PM.png, Screen Shot 2020-01-25 at 2.25.16 PM.png, 
> Screen Shot 2020-01-25 at 2.25.24 PM.png
>
>
> Currently in ZKDelegationTokenSecretManager.java the sequence number is 
> incremented by 1 each time a new token is requested, and every increment 
> sends traffic to the Zookeeper server. With multiple managers running, this 
> causes data contention, since the incrementing logic uses tryAndSet, an 
> optimistic concurrency control scheme without locking. The contention 
> degrades performance when the secret managers are under a high volume of 
> traffic.
> The change here is to fetch the sequence number in batches instead of one at 
> a time, which reduces the traffic sent to ZK and keeps many operations in 
> the ZK secret manager's memory.
> After putting this into production we saw a huge improvement in the RPC 
> processing latency of get-delegation-token calls. Since ZK takes less 
> traffic this way, other write calls, like renewing and cancelling delegation 
> tokens, benefit from this change as well.
>  
>  
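The batching idea can be sketched in a few lines. Everything below is a simplified illustration with hypothetical names: an `AtomicLong` stands in for the shared ZooKeeper counter, and one "remote" round trip reserves a whole range of sequence numbers that are then handed out from local memory.

```java
import java.util.concurrent.atomic.AtomicLong;

public class BatchSeqAllocator {
    private final AtomicLong zkCounter;   // stand-in for the ZK znode counter
    private final int batchSize;
    private long next;                    // next local number to hand out
    private long limit;                   // exclusive end of the reserved range

    BatchSeqAllocator(AtomicLong zkCounter, int batchSize) {
        this.zkCounter = zkCounter;
        this.batchSize = batchSize;
    }

    long nextSequenceNumber() {
        if (next >= limit) {
            // One "remote" round trip reserves batchSize numbers at once.
            long start = zkCounter.getAndAdd(batchSize);
            next = start;
            limit = start + batchSize;
        }
        return next++;   // all other calls are served from memory
    }

    public static void main(String[] args) {
        AtomicLong zk = new AtomicLong();
        BatchSeqAllocator a = new BatchSeqAllocator(zk, 1000);
        for (int i = 0; i < 5; i++) a.nextSequenceNumber();
        // 5 tokens issued, but only one trip to "ZooKeeper".
        System.out.println("zk counter: " + zk.get()); // 1000
    }
}
```

A real implementation would use a conditional update against ZooKeeper (e.g. Curator's SharedCount with trySetCount) instead of `getAndAdd`, but the contention win comes from the same range reservation.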



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17016) Adding Common Counters in ABFS

2020-06-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124137#comment-17124137
 ] 

Hudson commented on HADOOP-17016:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18316 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18316/])
HADOOP-17016. Adding Common Counters in ABFS (#1991). (stevel: rev 
7f486f0258943f1dbda7fe5c08be4391e284df28)
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsStatistic.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsStatistics.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsInstrumentation.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsCounters.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStatistics.java


> Adding Common Counters in ABFS
> --
>
> Key: HADOOP-17016
> URL: https://issues.apache.org/jira/browse/HADOOP-17016
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
> Fix For: 3.4.0
>
>
> Common Counters to be added to ABFS:
> |OP_CREATE|
> |OP_OPEN|
> |OP_GET_FILE_STATUS|
> |OP_APPEND|
> |OP_CREATE_NON_RECURSIVE|
> |OP_DELETE|
> |OP_EXISTS|
> |OP_GET_DELEGATION_TOKEN|
> |OP_LIST_STATUS|
> |OP_MKDIRS|
> |OP_RENAME|
> |DIRECTORIES_CREATED|
> |DIRECTORIES_DELETED|
> |FILES_CREATED|
> |FILES_DELETED|
> |ERROR_IGNORED|
>  propose:
>  * Have an enum class to define all the counters.
>  * Have an Instrumentation class for making a MetricRegistry and adding all 
> the counters.
>  * Incrementing the counters in AzureBlobFileSystem.
>  * Integration and Unit tests to validate the counters.
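The proposed structure can be sketched as an enum of counters plus an instrumentation class holding one counter per enum constant. The names below are hypothetical simplifications, not the real AbfsStatistic/AbfsInstrumentation classes (which are backed by a MetricRegistry rather than a plain map).

```java
import java.util.EnumMap;
import java.util.concurrent.atomic.AtomicLong;

public class AbfsCountersSketch {
    // Enum defines the counters in one place.
    enum Statistic { OP_CREATE, OP_OPEN, OP_DELETE, FILES_CREATED }

    static class Instrumentation {
        private final EnumMap<Statistic, AtomicLong> counters =
            new EnumMap<>(Statistic.class);

        Instrumentation() {
            for (Statistic s : Statistic.values()) {
                counters.put(s, new AtomicLong());   // register every counter
            }
        }

        void increment(Statistic s) { counters.get(s).incrementAndGet(); }
        long get(Statistic s) { return counters.get(s).get(); }
    }

    public static void main(String[] args) {
        Instrumentation metrics = new Instrumentation();
        // The filesystem would call these from create(), open(), etc.
        metrics.increment(Statistic.OP_CREATE);
        metrics.increment(Statistic.FILES_CREATED);
        System.out.println(metrics.get(Statistic.OP_CREATE)); // 1
    }
}
```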



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17052) NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort

2020-06-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121215#comment-17121215
 ] 

Hudson commented on HADOOP-17052:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18312 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18312/])
HADOOP-17052. NetUtils.connect() throws unchecked exception (github: rev 
9fe4c37c25b256d31202854066eb7e15c6335b9f)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java


> NetUtils.connect() throws unchecked exception (UnresolvedAddressException) 
> causing clients to abort
> ---
>
> Key: HADOOP-17052
> URL: https://issues.apache.org/jira/browse/HADOOP-17052
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.10.0, 2.9.2, 3.2.1, 3.1.3
>Reporter: Dhiraj Hegde
>Assignee: Dhiraj Hegde
>Priority: Major
> Attachments: read_failure.log, write_failure1.log, write_failure2.log
>
>
> Hadoop components are increasingly being deployed on VMs and containers. One 
> aspect of this environment is that DNS is dynamic. Hostname records get 
> modified (or deleted/recreated) as a container in Kubernetes (or even VM) is 
> being created/recreated. In such dynamic environments, the initial DNS 
> resolution request might return resolution failure briefly as DNS client 
> doesn't always get the latest records. This has been observed in Kubernetes 
> in particular. In such cases NetUtils.connect() appears to throw 
> java.nio.channels.UnresolvedAddressException.  In much of Hadoop code (like 
> DFSInputStream and DFSOutputStream), the code is designed to retry 
> IOException. However, since UnresolvedAddressException is not a child of 
> IOException, no retry happens and the code aborts immediately. It is much 
> better if NetUtils.connect() throws java.net.UnknownHostException as that is 
> derived from IOException and the code will treat this as a retry-able error.
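The exception translation described above can be sketched directly with the JDK's channel API. `connect()` below is a simplified stand-in for NetUtils.connect(), not the real Hadoop method: it catches the unchecked `UnresolvedAddressException` and rethrows it as the checked, IOException-derived `UnknownHostException` so retry loops handle it.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import java.nio.channels.SocketChannel;
import java.nio.channels.UnresolvedAddressException;

public class ConnectTranslation {
    static void connect(SocketChannel ch, InetSocketAddress addr)
            throws IOException {
        try {
            ch.connect(addr);
        } catch (UnresolvedAddressException e) {
            // Rethrow as an IOException subclass so retry loops can handle it.
            throw new UnknownHostException(addr.getHostString());
        }
    }

    public static void main(String[] args) throws IOException {
        // An address that never resolved: connecting to it would otherwise
        // throw the unchecked UnresolvedAddressException.
        InetSocketAddress bad =
            InetSocketAddress.createUnresolved("no-such-host.invalid", 8020);
        try (SocketChannel ch = SocketChannel.open()) {
            connect(ch, bad);
        } catch (UnknownHostException e) {
            System.out.println("retryable: " + e.getMessage());
        }
    }
}
```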



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-7002) Wrong description of copyFromLocal and copyToLocal in documentation

2020-05-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-7002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119637#comment-17119637
 ] 

Hudson commented on HADOOP-7002:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18310 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18310/])
HADOOP-7002. Wrong description of copyFromLocal and copyToLocal in (sodonnell: 
rev 19f26a020e2e5cec2ceb28d796c63c83bc8ac506)
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md


> Wrong description of copyFromLocal and copyToLocal in documentation
> ---
>
> Key: HADOOP-7002
> URL: https://issues.apache.org/jira/browse/HADOOP-7002
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jingguo Yao
>Assignee: Andras Bokor
>Priority: Minor
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HADOOP-7002.01.patch
>
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>
> The descriptions of copyFromLocal and copyToLocal are wrong. 
> For copyFromLocal, the documentation says "Similar to put command, except 
> that the source is restricted to a local file reference." But from the source 
> code of FsShell.java, I can see that copyFromLocal is the same as put. 
> For copyToLocal, the documentation says "Similar to get command, except that 
> the destination is restricted to a local file reference.". But from the 
> source code of FsShell.java, I can see that copyToLocal is the same as get.
> And this problem exists in both the English and Chinese documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2020-05-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119478#comment-17119478
 ] 

Hudson commented on HADOOP-14698:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18309 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18309/])
HADOOP-14698. Make copyFromLocals -t option available for put as well. 
(sodonnell: rev d9e8046a1a15ab295b642b2a5e86f436c1965254)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java
* (edit) hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestMove.java


> Make copyFromLocal's -t option available for put as well
> 
>
> Key: HADOOP-14698
> URL: https://issues.apache.org/jira/browse/HADOOP-14698
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-14698.01.patch, HADOOP-14698.02.patch, 
> HADOOP-14698.03.patch, HADOOP-14698.04.patch, HADOOP-14698.05.patch, 
> HADOOP-14698.06.patch, HADOOP-14698.07.patch, HADOOP-14698.08.patch, 
> HADOOP-14698.09.patch, HADOOP-14698.10.patch
>
>
> After HDFS-11786, copyFromLocal and put are no longer identical.
> I do not see any reason not to add the new feature to put as well.
> Being non-identical makes the command harder to understand and use from the 
> user's point of view.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17055) Remove residual code of Ozone

2020-05-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119381#comment-17119381
 ] 

Hudson commented on HADOOP-17055:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18307 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18307/])
HADOOP-17055. Remove residual code of Ozone (#2039) (github: rev 
d9838f2d42eaadd0769167847af4e8f2963817fb)
* (edit) .gitignore
* (edit) hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/datanode.html
* (edit) dev-support/bin/dist-layout-stitching
* (edit) hadoop-common-project/hadoop-common/src/main/conf/log4j.properties
* (edit) hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties
* (edit) dev-support/docker/Dockerfile


> Remove residual code of Ozone
> -
>
> Key: HADOOP-17055
> URL: https://issues.apache.org/jira/browse/HADOOP-17055
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17053) ABFS: FS initialize fails for incompatible account-agnostic Token Provider setting

2020-05-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118109#comment-17118109
 ] 

Hudson commented on HADOOP-17053:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18302 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18302/])
HADOOP-17053. ABFS: Fix Account-specific OAuth config setting parsing (github: 
rev 4c5cd751e3911e350c7437dcb28c0ed67735f635)
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java


> ABFS: FS initialize fails for incompatible account-agnostic Token Provider 
> setting 
> ---
>
> Key: HADOOP-17053
> URL: https://issues.apache.org/jira/browse/HADOOP-17053
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.4.0
>
>
> When the AuthType and auth token provider configs are set both generically 
> and account-specifically, as below:
> // account agnostic
> fs.azure.account.auth.type=CUSTOM
> fs.azure.account.oauth.provider.type=ClassExtendingCustomTokenProviderAdapter
> // account specific
> fs.azure.account.auth.type.account_name=OAuth
> fs.azure.account.oauth.provider.type.account_name=ClassExtendingAccessTokenProvider
>  For account_name, OAuth with provider as ClassExtendingAccessTokenProvider 
> is expected to be in effect.
> When the token provider class is read from the config, the account-agnostic 
> setting is read first, on the assumption that it can serve as a default if 
> the account-specific setting is absent. But this logic fails when the 
> account-specific and account-agnostic AuthTypes differ, because each Auth 
> Type expects a different token provider interface. The result is a runtime 
> exception when trying to create the OAuth access token provider.
> This Jira is to track the fix for it.
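The intended resolution order can be sketched with a flat key/value lookup. This is a simplified model, not the real AbfsConfiguration: the account-specific key wins, and the account-agnostic key is consulted only when the specific one is absent, so a Custom/OAuth mismatch between the two can no longer occur.

```java
import java.util.Map;

public class AccountConfigLookup {
    // Hypothetical flat config store; keys modeled on the fs.azure.* settings.
    static final Map<String, String> CONF = Map.of(
        "fs.azure.account.auth.type", "Custom",
        "fs.azure.account.oauth.provider.type", "CustomTokenProvider",
        "fs.azure.account.auth.type.myaccount", "OAuth",
        "fs.azure.account.oauth.provider.type.myaccount", "AccessTokenProvider");

    // Account-specific value wins; the agnostic value is only a fallback.
    static String get(String key, String account) {
        String specific = CONF.get(key + "." + account);
        return specific != null ? specific : CONF.get(key);
    }

    public static void main(String[] args) {
        System.out.println(get("fs.azure.account.auth.type", "myaccount"));    // OAuth
        System.out.println(get("fs.azure.account.auth.type", "otheraccount")); // Custom
    }
}
```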



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16852) ABFS: Send error back to client for Read Ahead request failure

2020-05-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118095#comment-17118095
 ] 

Hudson commented on HADOOP-16852:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18301 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18301/])
HADOOP-16852: Report read-ahead error back (github: rev 
53b993e6048ffaaf98e460690211fc08efb20cf2)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBuffer.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferWorker.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/TestCachedSASToken.java


> ABFS: Send error back to client for Read Ahead request failure
> --
>
> Key: HADOOP-16852
> URL: https://issues.apache.org/jira/browse/HADOOP-16852
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>
> Issue seen by a customer:
> The failed requests we were seeing in the AbfsClient logging actually never 
> made it out over the wire. We have found that there’s an issue with ADLS 
> passthrough and the 8 read ahead threads that ADLSv2 spawns in 
> ReadBufferManager.java. We depend on thread local storage in order to get the 
> right JWT token and those threads do not have the right information in their 
> thread local storage. Thus, when they pick up a task from the read ahead 
> queue they fail by throwing an AzureCredentialNotFoundException exception in 
> AbfsRestOperation.executeHttpOperation() where it calls 
> client.getAccessToken(). This exception is silently swallowed by the read 
> ahead threads in ReadBufferWorker.run(). As a result, every read ahead 
> attempt results in a failed executeHttpOperation(), but still calls 
> AbfsClientThrottlingIntercept.updateMetrics() and contributes to throttling 
> (despite not making it out over the wire). After the read aheads fail, the 
> main task thread performs the read with the right thread local storage 
> information and succeeds, but first sleeps for up to 10 seconds due to the 
> throttling.
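The direction of the fix in the commit title ("report read-ahead error back") can be sketched as follows. The classes are hypothetical stand-ins, not the real ReadBufferManager: instead of the worker thread silently swallowing a read-ahead failure, it records the error on the buffer, and the main read path surfaces it to the caller.

```java
import java.io.IOException;

public class ReadAheadErrorSketch {
    static class ReadBuffer {
        volatile IOException error;   // failure recorded by the worker
        volatile byte[] data;         // filled on success
    }

    // Worker thread body: record the failure instead of swallowing it.
    static void workerFill(ReadBuffer buf, boolean failAuth) {
        try {
            if (failAuth) {
                throw new IOException("AzureCredentialNotFound (simulated)");
            }
            buf.data = new byte[]{1, 2, 3};
        } catch (IOException e) {
            buf.error = e;            // previously: silently dropped
        }
    }

    // Main read path: rethrow a recorded read-ahead error to the client.
    static byte[] read(ReadBuffer buf) throws IOException {
        if (buf.error != null) {
            throw buf.error;
        }
        return buf.data;
    }

    public static void main(String[] args) {
        ReadBuffer buf = new ReadBuffer();
        workerFill(buf, true);
        try {
            read(buf);
        } catch (IOException e) {
            System.out.println("surfaced to client: " + e.getMessage());
        }
    }
}
```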



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17054) ABFS: Fix idempotency test failures when SharedKey is set as AuthType

2020-05-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17117159#comment-17117159
 ] 

Hudson commented on HADOOP-17054:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18296 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18296/])
HADOOP-17054. ABFS: Fix test AbfsClient authentication instance (github: rev 
37b1b4799db680d4a8bd4cb389e00d044f1e4a37)
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClient.java


> ABFS: Fix idempotency test failures when SharedKey is set as AuthType
> -
>
> Key: HADOOP-17054
> URL: https://issues.apache.org/jira/browse/HADOOP-17054
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.4.0
>
>
> Idempotency related tests added as part of 
> https://issues.apache.org/jira/browse/HADOOP-17015
> create a test AbfsClient instance. This mock instance wrongly accepts both a 
> valid SharedKey and an OAuth token provider instance, which leads to test 
> failures such as:
> [ERROR] 
> testRenameRetryFailureAsHTTP404(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRename)
>   Time elapsed: 9.133 s  <<< ERROR!
>  Invalid auth type: SharedKey is being used, expecting OAuth
>  at 
> org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getTokenProvider(AbfsConfiguration.java:643)
> This Jira is to fix these tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17040) Fix intermittent failure of ITestBlockingThreadPoolExecutorService

2020-05-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113908#comment-17113908
 ] 

Hudson commented on HADOOP-17040:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18288 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18288/])
HADOOP-17040. Fix intermittent failure of (github: rev 
968531463375ebf29ba3186c13b5f8685df10d25)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestBlockingThreadPoolExecutorService.java


> Fix intermittent failure of ITestBlockingThreadPoolExecutorService
> --
>
> Key: HADOOP-17040
> URL: https://issues.apache.org/jira/browse/HADOOP-17040
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.4.0
>
>
> ITestBlockingThreadPoolExecutorService intermittently fails due to load on 
> test node.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17049) javax.activation-api and jakarta.activation-api define overlapping classes

2020-05-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113697#comment-17113697
 ] 

Hudson commented on HADOOP-17049:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18286 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18286/])
HADOOP-17049. javax.activation-api and jakarta.activation-api define (github: 
rev 52b21de1d8857151461341fbcf2fdb38c64485b1)
* (edit) hadoop-common-project/hadoop-common/pom.xml
* (edit) LICENSE-binary
* (edit) hadoop-project/pom.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/resources/org.apache.hadoop.application-classloader.properties


> javax.activation-api and jakarta.activation-api define overlapping classes
> --
>
> Key: HADOOP-17049
> URL: https://issues.apache.org/jira/browse/HADOOP-17049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
>
> There are some warnings in the hadoop-client-runtime module.
> {noformat}
> [WARNING] javax.activation-api-1.2.0.jar, jakarta.activation-api-1.2.1.jar 
> define 31 overlapping classes: 
> [WARNING]   - javax.activation.CommandInfo$Beans$1
> [WARNING]   - javax.activation.ObjectDataContentHandler
> [WARNING]   - javax.activation.DataContentHandlerFactory
> [WARNING]   - javax.activation.DataContentHandler
> [WARNING]   - javax.activation.CommandObject
> [WARNING]   - javax.activation.SecuritySupport$2
> [WARNING]   - javax.activation.FileTypeMap
> [WARNING]   - javax.activation.CommandInfo
> [WARNING]   - javax.activation.MailcapCommandMap
> [WARNING]   - javax.activation.DataHandler$1
> [WARNING]   - 21 more...
> {noformat}
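The usual remedy for shaded-jar "overlapping classes" warnings like the one above is to keep only one of the duplicate artifacts on the classpath via a Maven exclusion. A hedged sketch of what such an exclusion looks like in a pom — the enclosing dependency (`jaxb-impl`) is purely illustrative here; the actual fix touched hadoop-project/pom.xml and hadoop-common's pom:

```xml
<!-- Illustrative only: exclude the javax.activation-api artifact wherever a
     transitive dependency drags it in, so that only jakarta.activation-api
     (which contains the same 31 classes) remains on the classpath. -->
<dependency>
  <groupId>com.sun.xml.bind</groupId>
  <artifactId>jaxb-impl</artifactId>
  <exclusions>
    <exclusion>
      <groupId>javax.activation</groupId>
      <artifactId>javax.activation-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

`mvn dependency:tree -Dincludes=javax.activation` is the standard way to find which dependencies pull the duplicate in.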






[jira] [Commented] (HADOOP-17004) ABFS: Improve the ABFS driver documentation

2020-05-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112545#comment-17112545
 ] 

Hudson commented on HADOOP-17004:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18282 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18282/])
HADOOP-17004. Fixing a formatting issue (github: rev 
d2f7133c6220ab886dc838f3ebc8d89f077c8acc)
* (edit) hadoop-tools/hadoop-azure/src/site/markdown/abfs.md


> ABFS: Improve the ABFS driver documentation
> ---
>
> Key: HADOOP-17004
> URL: https://issues.apache.org/jira/browse/HADOOP-17004
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.4.0
>
>
> * Add the missing configuration/settings details
> * Mention the default values






[jira] [Commented] (HADOOP-16900) Very large files can be truncated when written through S3AFileSystem

2020-05-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112168#comment-17112168
 ] 

Hudson commented on HADOOP-16900:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18280 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18280/])
HADOOP-16900. Very large files can be truncated when written through the 
(stevel: rev 29b19cd59245c8809b697b3d7d7445813a685aad)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/InternalConstants.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3ABlockOutputStream.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/CommitOperations.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AMultipartUploadSizeLimits.java
* (edit) 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/WriteOperationHelper.java


> Very large files can be truncated when written through S3AFileSystem
> 
>
> Key: HADOOP-16900
> URL: https://issues.apache.org/jira/browse/HADOOP-16900
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Andrew Olson
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: s3
> Fix For: 3.4.0
>
>
> If a written file size exceeds 10,000 * {{fs.s3a.multipart.size}}, a corrupt 
> truncation of the S3 object will occur, as the maximum number of parts in a 
> multipart upload is 10,000, as 
> [specified|https://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html] by 
> the S3 API. There is an apparent bug where this failure is not fatal, 
> allowing the multipart upload operation to be marked as successfully 
> completed without actually being complete.
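The core guard the fix needs — failing fast instead of silently truncating — can be sketched in plain Java. The class, constant, and method names here are illustrative, not the actual S3A code:

```java
public final class MultipartLimitCheck {
    // S3 caps a multipart upload at 10,000 parts (per the AWS S3 limits).
    static final int MAX_MULTIPART_COUNT = 10_000;

    /**
     * Computes the number of parts needed to upload a file of the given size,
     * and fails fast when the S3 part limit would be exceeded.
     */
    static long partCount(long fileSize, long partSize) {
        long parts = (fileSize + partSize - 1) / partSize; // ceiling division
        if (parts > MAX_MULTIPART_COUNT) {
            throw new IllegalArgumentException(
                "File of " + fileSize + " bytes needs " + parts
                + " parts of " + partSize + " bytes; S3 allows at most "
                + MAX_MULTIPART_COUNT);
        }
        return parts;
    }

    public static void main(String[] args) {
        long partSize = 8L * 1024 * 1024; // e.g. fs.s3a.multipart.size = 8M
        System.out.println(partCount(80L * 1024 * 1024, partSize)); // 10
        try {
            partCount(10_001L * partSize, partSize);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```

With an 8 MB part size the hard ceiling is about 78 GB, which is why raising `fs.s3a.multipart.size` is the documented workaround for very large files.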






[jira] [Commented] (HADOOP-16586) ITestS3GuardFsck, others fails when run using a local metastore

2020-05-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111626#comment-17111626
 ] 

Hudson commented on HADOOP-16586:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18277 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18277/])
HADOOP-16586. ITestS3GuardFsck, others fails when run using a local (github: 
rev 0b7799bf6ed8e44a64aac87631069f2354e7c58d)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardToolLocal.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardFsck.java


> ITestS3GuardFsck, others fails when run using a local metastore
> ---
>
> Key: HADOOP-16586
> URL: https://issues.apache.org/jira/browse/HADOOP-16586
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Siddharth Seth
>Assignee: Masatake Iwasaki
>Priority: Major
> Fix For: 3.4.0
>
>
> Most of these tests fail with a ClassCastException when run against a local 
> metastore.
> Not sure if these tests are intended to work with DynamoDB only. The fix 
> (either skip in the case of other metastores, or fix the test) would depend 
> on the original intent.
> {code}
> ---
> Test set: org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck
> ---
> Tests run: 12, Failures: 0, Errors: 11, Skipped: 1, Time elapsed: 34.653 s 
> <<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck
> testIDetectParentTombstoned(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)
>   Time elapsed: 3.237 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectParentTombstoned(ITestS3GuardFsck.java:190)
> testIDetectDirInS3FileInMs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck) 
>  Time elapsed: 1.827 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectDirInS3FileInMs(ITestS3GuardFsck.java:214)
> testIDetectLengthMismatch(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
> Time elapsed: 2.819 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectLengthMismatch(ITestS3GuardFsck.java:311)
> testIEtagMismatch(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  Time 
> elapsed: 2.832 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIEtagMismatch(ITestS3GuardFsck.java:373)
> testIDetectFileInS3DirInMs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck) 
>  Time elapsed: 2.752 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectFileInS3DirInMs(ITestS3GuardFsck.java:238)
> testIDetectModTimeMismatch(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck) 
>  Time elapsed: 4.103 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectModTimeMismatch(ITestS3GuardFsck.java:346)
> testIDetectNoMetadataEntry(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck) 
>  Time elapsed: 3.017 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectNoMetadataEntry(ITestS3GuardFsck.java:113)
> testIDetectNoParentEntry(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
> Time elapsed: 2.821 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectNoParentEntry(ITestS3GuardFsck.java:136)
> 

[jira] [Commented] (HADOOP-17015) ABFS: Make PUT and POST operations idempotent

2020-05-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111481#comment-17111481
 ] 

Hudson commented on HADOOP-17015:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18276 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18276/])
Hadoop-17015. ABFS: Handling Rename and Delete idempotency (github: rev 
8f78aeb2500011e568929b585ed5b0987355f88d)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/DateTimeUtils.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClient.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemRename.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsConfigurationFieldsValidation.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
* (edit) hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelete.java


> ABFS: Make PUT and POST operations idempotent
> -
>
> Key: HADOOP-17015
> URL: https://issues.apache.org/jira/browse/HADOOP-17015
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.4.0
>
>
> Currently, when a PUT or POST operation times out after the server has 
> already executed it successfully, there is no check in the driver to see 
> whether the operation succeeded; the driver just retries the same operation 
> again. This can cause the driver to throw invalid user errors.
>  
> Sample scenario:
>  # A rename request times out, though the server has successfully executed 
> the operation.
>  # The driver retries the rename and gets a source-not-found error.
> In this scenario, the driver needs to check whether the rename is being 
> retried, and succeed if the source is not found but the destination is 
> present.
>  
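The retry-handling decision described in the scenario above reduces to a small piece of pure logic. This is a sketch under stated assumptions — the names are illustrative, and the real ABFS fix also consults timestamps (the DateTimeUtils class added by the commit) to decide whether the destination was created recently enough to attribute to the first attempt:

```java
public final class RenameIdempotency {

    /**
     * Decides whether a "source not found" error on a rename should be
     * treated as success: if this request is a retry and the destination
     * already exists, the first attempt most likely succeeded on the server.
     */
    static boolean treatSourceNotFoundAsSuccess(boolean isRetriedRequest,
                                                boolean destinationExists) {
        return isRetriedRequest && destinationExists;
    }

    public static void main(String[] args) {
        // First attempt failing with source-not-found is a genuine error.
        System.out.println(treatSourceNotFoundAsSuccess(false, true));  // false
        // A retry that finds the destination present is treated as success.
        System.out.println(treatSourceNotFoundAsSuccess(true, true));   // true
    }
}
```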






[jira] [Commented] (HADOOP-17024) ListStatus on ViewFS root (ls "/") should list the linkFallBack root (configured target root).

2020-05-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110869#comment-17110869
 ] 

Hudson commented on HADOOP-17024:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18274 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18274/])
HADOOP-17024. ListStatus on ViewFS root (ls "/") should list the (github: rev 
ce4ec7445345eb94c6741d416814a4eac319f0a6)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java


> ListStatus on ViewFS root (ls "/") should list the linkFallBack root 
> (configured target root).
> --
>
> Key: HADOOP-17024
> URL: https://issues.apache.org/jira/browse/HADOOP-17024
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 3.2.2
>Reporter: Uma Maheswara Rao G
>Assignee: Abhishek Das
>Priority: Major
> Fix For: 3.4.0
>
>
> As part of the design doc HDFS-15289, [~sanjay.radia] and I discussed the 
> following scenarios when fallback is enabled.
> *Behavior when fallback enabled:*
>    Assume FS trees and mount mappings like below:
>    mount link /a/b/c/d  → hdfs://nn1/a/b
>    mount link /a/p/q/r  → hdfs://nn2/a/b
>    fallback → hdfs://nn3/  $  /a/c
>                                                  /x/z
>  # Open(/x/y) then it goes to nn3 (fallback)      - WORKS
>  # Create(/x/foo) then foo is created in nn3 in dir /x   - WORKS
>  # ls / should list /a /x. Today this does not work and IT IS A BUG!!! 
> because it conflicts with the open(/x/y) case
>  # Create /y  : fails  - also fails when not using  fallback  - WORKS
>  # Create /a/z : fails - also fails when not using  fallback - WORKS
>  # ls /a should list /b /p  as expected and will not show fallback in nn3 - 
> WORKS
>  
> This Jira will fix issue #3: when fallback is enabled, ls should show a 
> merged view of the mount links plus the fallback root. (This will only apply 
> at the root level.)
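The intended root-listing behaviour amounts to a set union of the mount links' first path components with the children of the configured fallback root. A minimal, self-contained sketch with hypothetical inputs (the real fix lives in InodeTree/ViewFileSystem and merges full FileStatus entries, not names):

```java
import java.util.Set;
import java.util.TreeSet;

public final class MergedRootListing {

    /**
     * Merges the top-level names contributed by mount links with the
     * children of the fallback root, as "ls /" should show them.
     */
    static Set<String> listRoot(Set<String> mountLinkRoots,
                                Set<String> fallbackRootChildren) {
        Set<String> merged = new TreeSet<>(mountLinkRoots);
        merged.addAll(fallbackRootChildren); // mount links win on name clashes
        return merged;
    }

    public static void main(String[] args) {
        // Mount links /a/b/c/d and /a/p/q/r both contribute "a" at the root;
        // the fallback fs hdfs://nn3/ contributes its root children "a", "x".
        System.out.println(listRoot(Set.of("a"), Set.of("a", "x"))); // [a, x]
    }
}
```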






[jira] [Commented] (HADOOP-17004) ABFS: Improve the ABFS driver documentation

2020-05-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110831#comment-17110831
 ] 

Hudson commented on HADOOP-17004:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18273 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18273/])
HADOOP-17004. ABFS: Improve the ABFS driver documentation (github: rev 
bdbd59cfa0904860fc4ce7a2afef1e84f35b8b82)
* (edit) hadoop-tools/hadoop-azure/src/site/markdown/abfs.md


> ABFS: Improve the ABFS driver documentation
> ---
>
> Key: HADOOP-17004
> URL: https://issues.apache.org/jira/browse/HADOOP-17004
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.4.0
>
>
> * Add the missing configuration/settings details
> * Mention the default values






[jira] [Commented] (HADOOP-17042) Hadoop distcp throws "ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found"

2020-05-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17109941#comment-17109941
 ] 

Hudson commented on HADOOP-17042:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #18264 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18264/])
HADOOP-17042. Hadoop distcp throws 'ERROR: Tools helper (aajisaka: rev 
27601fc79ed053ce978ac18a2c5706d32e58019f)
* (edit) hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh


> Hadoop distcp throws "ERROR: Tools helper 
> ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found"
> -
>
> Key: HADOOP-17042
> URL: https://issues.apache.org/jira/browse/HADOOP-17042
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.2.1, 3.1.3
>Reporter: Aki Tanaka
>Assignee: Aki Tanaka
>Priority: Minor
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HADOOP-17042.patch
>
>
> On Hadoop 3.x, we see following "ERROR: Tools helper 
> ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found." message on 
> the first line of the command output when running Hadoop DistCp.
> {code:java}
> $ hadoop distcp /path/to/src /user/hadoop/
> ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not 
> found.
> 2020-05-14 17:11:53,173 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> useRdiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=false, 
> blocking=true
> ..
> {code}
> This message was added by HADOOP-12857 and it would be an expected behavior.
>  DistCp calls 'hadoop_add_to_classpath_tools hadoop-distcp' when [it 
> starts|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/shellprofile.d/hadoop-distcp.sh],
>  and the error is returned because the hadoop-distcp.sh does not exist in the 
> tools directory.
> However, that error message is confusing. Since this is not a user-side 
> configuration issue, I think it's better to change the log level to debug 
> (hadoop_debug).






[jira] [Commented] (HADOOP-8143) Change distcp to have -pb on by default

2020-05-14 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-8143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107540#comment-17107540
 ] 

Hudson commented on HADOOP-8143:


FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #18259 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18259/])
Revert "HADOOP-8143. Change distcp to have -pb on by default." (stevel: rev 
4486220bb2f6ba670cea0dbce314d816ba4c4c7f)
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/OptionsParser.java
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
Revert "HADOOP-14557. Document HADOOP-8143 (Change distcp to have -pb on 
(stevel: rev d08b9e94e36efc7e853bf982d6a93b7d5921c579)
* (edit) hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm


> Change distcp to have -pb on by default
> ---
>
> Key: HADOOP-8143
> URL: https://issues.apache.org/jira/browse/HADOOP-8143
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Dave Thompson
>Assignee: Mithun Radhakrishnan
>Priority: Minor
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-8143.1.patch, HADOOP-8143.2.patch, 
> HADOOP-8143.3.patch
>
>
> We should have the preserve-blocksize option (-pb) on by default in distcp.
> The checksum comparison, which is on by default, will always fail if the 
> blocksize is not the same.






[jira] [Commented] (HADOOP-14557) Document HADOOP-8143 (Change distcp to have -pb on by default)

2020-05-14 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107541#comment-17107541
 ] 

Hudson commented on HADOOP-14557:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #18259 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18259/])
Revert "HADOOP-14557. Document HADOOP-8143 (Change distcp to have -pb on 
(stevel: rev d08b9e94e36efc7e853bf982d6a93b7d5921c579)
* (edit) hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm


> Document  HADOOP-8143  (Change distcp to have -pb on by default)
> 
>
> Key: HADOOP-14557
> URL: https://issues.apache.org/jira/browse/HADOOP-14557
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: Bharat Viswanadham
>Priority: Trivial
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14557.02.patch, HADOOP-14557.patch
>
>
> HADOOP-8143 is an incompatible change. I think it deserves an update in the 
> distcp doc.






[jira] [Commented] (HADOOP-17036) TestFTPFileSystem failing as ftp server dir already exists

2020-05-14 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107499#comment-17107499
 ] 

Hudson commented on HADOOP-17036:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #18258 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18258/])
HADOOP-17036. TestFTPFileSystem failing as ftp server dir already (github: rev 
017d24e9703e9447f88ba94df3a8aa0800de770b)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/ftp/TestFTPFileSystem.java


> TestFTPFileSystem failing as ftp server dir already exists
> --
>
> Key: HADOOP-17036
> URL: https://issues.apache.org/jira/browse/HADOOP-17036
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Mikhail Pryakhin
>Priority: Minor
> Fix For: 3.4.0
>
>
> TestFTPFileSystem is failing because the test dir already exists.
> We need to delete it in the setup/teardown of each test case.
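The cleanup the report asks for boils down to recursively deleting the server's test directory before and after each case. A self-contained sketch using java.nio.file — the JUnit @Before/@After wiring and the FTP-server specifics of the real test are omitted, and the directory names are made up:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public final class TestDirCleanup {

    /** Recursively deletes a directory tree if it exists, children first. */
    static void deleteRecursively(Path dir) throws IOException {
        if (!Files.exists(dir)) {
            return; // nothing to do; safe to call from both setup and teardown
        }
        try (Stream<Path> walk = Files.walk(dir)) {
            // Reverse order so files are deleted before their parent dirs.
            walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("ftp-test");
        Files.createDirectories(root.resolve("server/home"));
        Files.writeString(root.resolve("server/home/file.txt"), "data");

        deleteRecursively(root);                // what @Before/@After would run
        System.out.println(Files.exists(root)); // false
        deleteRecursively(root);                // idempotent: safe to repeat
    }
}
```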






[jira] [Commented] (HADOOP-14254) Add a Distcp option to preserve Erasure Coding attributes

2020-05-13 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106602#comment-17106602
 ] 

Hudson commented on HADOOP-14254:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18253 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18253/])
HADOOP-14254. Add a Distcp option to preserve Erasure Coding attributes. 
(ayushsaxena: rev c757cb61ebc9e69d9f6f143da91189b9f0517ee9)
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableDirectoryCreateCommand.java
* (edit) hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListingFileStatus.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithRawXAttrs.java
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java


> Add a Distcp option to preserve Erasure Coding attributes
> -
>
> Key: HADOOP-14254
> URL: https://issues.apache.org/jira/browse/HADOOP-14254
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-14254-01.patch, HADOOP-14254-02.patch, 
> HADOOP-14254-03.patch, HADOOP-14254-04.patch, HADOOP-14254.test.patch, 
> HDFS-11472.001.patch
>
>
> Currently Distcp does not preserve the erasure coding attributes properly. I 
> propose we add a "-pe" switch to ensure erasure coded files at source are 
> copied as erasure coded files at destination.
> For example, if the src cluster has the following directories and files that 
> are copied to dest cluster
> hdfs://src/ root directory is replicated
> hdfs://src/foo erasure code enabled directory
> hdfs://src/foo/bar erasure coded file
> after distcp, hdfs://dest/foo and hdfs://dest/foo/bar will not be erasure 
> coded. 
> It may be useful to add such capability. One potential use is for disaster 
> recovery. The other use is for out-of-place cluster upgrade.






[jira] [Commented] (HADOOP-16916) ABFS: Delegation SAS generator for integration with Ranger

2020-05-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105679#comment-17105679
 ] 

Hudson commented on HADOOP-16916:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18245 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18245/])
HADOOP-16916: ABFS: Delegation SAS generator for integration with Ranger 
(tmarq: rev b214bbd2d92a0c02b71d352dba85f3b87317933c)
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/MockDelegationSASTokenProvider.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/constants/TestConfigurationKeys.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/MockSASTokenProvider.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java
* (edit) hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamContext.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsStreamContext.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/ServiceSASGenerator.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/DelegationSASGenerator.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemAuthorization.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/SASGenerator.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java
* (edit) hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCheckAccess.java
* (edit) hadoop-tools/hadoop-azure/dev-support/findbugs-exclude.xml
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/extensions/SASTokenProvider.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStreamContext.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/CachedSASToken.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/TestCachedSASToken.java


> ABFS: Delegation SAS generator for integration with Ranger
> --
>
> Key: HADOOP-16916
> URL: https://issues.apache.org/jira/browse/HADOOP-16916
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Thomas Marqardt
>Assignee: Thomas Marqardt
>Priority: Minor
> Fix For: 3.3.1
>
> Attachments: HADOOP-16916.001.patch
>
>
> HADOOP-16730 added support for Shared Access Signatures (SAS).  Azure Data 
> Lake Storage Gen2 supports a new SAS type known as User Delegation SAS.  This 
> Jira tracks an update to the ABFS driver that will include a Delegation SAS 
> generator and tests to validate that this SAS type is working correctly with 
> the driver.






[jira] [Commented] (HADOOP-17035) Trivial typo(s) which are 'timout', 'interruped' in comment, LOG and documents

2020-05-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105560#comment-17105560
 ] 

Hudson commented on HADOOP-17035:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18241 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18241/])
HADOOP-17035. fixed typos (timeout, interruped) (#2007) (github: rev 
a3f945fb8466d461d42ce60f0bc12c96fbb2db23)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/GracefulDecommission.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestSocketIOWithTimeout.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java


> Trivial typo(s) which are 'timout', 'interruped' in comment, LOG and documents
> --
>
> Key: HADOOP-17035
> URL: https://issues.apache.org/jira/browse/HADOOP-17035
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sungpeo Kook
>Assignee: Sungpeo Kook
>Priority: Trivial
> Fix For: 3.4.0
>
>
> There are typos 'Interruped' and 'timout'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17033) Update commons-codec from 1.11 to 1.14

2020-05-11 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104624#comment-17104624
 ] 

Hudson commented on HADOOP-17033:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18234 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18234/])
HADOOP-17033. Update commons-codec from 1.11 to 1.14. (#2000) (github: rev 
bd342bef64e5b7219c6b08e585e2b122d06793e0)
* (edit) hadoop-project/pom.xml


> Update commons-codec from 1.11 to 1.14
> --
>
> Key: HADOOP-17033
> URL: https://issues.apache.org/jira/browse/HADOOP-17033
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.4.0
>
>
> We are on commons-codec 1.11, which is slightly outdated. The latest is 1.14. 
> We should update it if it's not too much of a hassle.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16768) SnappyCompressor test cases wrongly assume that the compressed data is always smaller than the input data

2020-05-11 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104105#comment-17104105
 ] 

Hudson commented on HADOOP-16768:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18232 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18232/])
HADOOP-16768. SnappyCompressor test cases wrongly assume that the (github: rev 
328eae9a146b2dd9857a17a0db6fcddb1de23a0d)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/snappy/TestSnappyCompressorDecompressor.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/CompressDecompressTester.java


> SnappyCompressor test cases wrongly assume that the compressed data is always 
> smaller than the input data
> -
>
> Key: HADOOP-16768
> URL: https://issues.apache.org/jira/browse/HADOOP-16768
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, test
> Environment: X86/Aarch64
> OS: ubuntu 1804
> JAVA 8
>Reporter: zhao bo
>Assignee: Akira Ajisaka
>Priority: Major
>
> * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests will fail on both the X86 and ARM platforms.
> Trace back
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR]   
> TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  
> Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' 
> but got unexpected exception: java.lang.NullPointerException
>   
>     at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
>     at com.google.common.base.Joiner.toString(Joiner.java:452)
>  
>     at com.google.common.base.Joiner.appendTo(Joiner.java:109)
> 
>     at com.google.common.base.Joiner.appendTo(Joiner.java:152)
> 
>     at com.google.common.base.Joiner.join(Joiner.java:195)
> 
>     at com.google.common.base.Joiner.join(Joiner.java:185)
>     at com.google.common.base.Joiner.join(Joiner.java:211)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
>     at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
> 
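The faulty assumption the tests make can be demonstrated with plain `java.util.zip` (a sketch using DEFLATE rather than Snappy, but the effect is the same for any general-purpose compressor): essentially incompressible input grows slightly after compression, so a test asserting `compressed < original` will fail on such data.

```java
import java.util.Random;
import java.util.zip.Deflater;

public class IncompressibleDemo {
    // Compress a buffer and return the compressed size in bytes.
    static int compressedSize(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] out = new byte[input.length * 2 + 64]; // generous output buffer
        int total = 0;
        while (!deflater.finished()) {
            // We only sum the sizes, so reusing the buffer from offset 0 is fine.
            total += deflater.deflate(out, 0, out.length);
        }
        deflater.end();
        return total;
    }

    public static void main(String[] args) {
        // Random bytes are essentially incompressible: the compressor adds
        // framing/header overhead, so the output exceeds the input.
        byte[] random = new byte[64 * 1024];
        new Random(42).nextBytes(random);
        // Highly repetitive bytes compress very well.
        byte[] zeros = new byte[64 * 1024];

        System.out.println("random input compresses to " + compressedSize(random)
            + " bytes (more than " + random.length + ")");
        System.out.println("all-zero input compresses to " + compressedSize(zeros)
            + " bytes (far less than " + zeros.length + ")");
    }
}
```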

[jira] [Commented] (HADOOP-17027) Add tests for reading fair call queue capacity weight configs

2020-05-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102124#comment-17102124
 ] 

Hudson commented on HADOOP-17027:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18228 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18228/])
HADOOP-17027. Add tests for reading fair call queue capacity weight (liuml07: 
rev e9e1ead089c0b9f5f1788361329a64fec6561352)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestCallQueueManager.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestFairCallQueue.java


> Add tests for reading fair call queue capacity weight configs
> -
>
> Key: HADOOP-17027
> URL: https://issues.apache.org/jira/browse/HADOOP-17027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-17027.001.patch
>
>
> This is to add more tests for the changes introduced in 
> https://issues.apache.org/jira/browse/HADOOP-17010, 
> specifically tests covering the more comprehensive flow of CallQueueManager 
> reading the configuration and constructing the right FairCallQueue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17018) Intermittent failing of ITestAbfsStreamStatistics in ABFS

2020-05-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101576#comment-17101576
 ] 

Hudson commented on HADOOP-17018:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18226 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18226/])
HADOOP-17018. Intermittent failing of ITestAbfsStreamStatistics in ABFS 
(github: rev 192cad9ee24779cbd7735fdf9da0fba90255d546)
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java


> Intermittent failing of ITestAbfsStreamStatistics in ABFS
> -
>
> Key: HADOOP-17018
> URL: https://issues.apache.org/jira/browse/HADOOP-17018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Minor
> Fix For: 3.3.1
>
>
> There are intermittent failures of a test inside ITestAbfsStreamStatistics in 
> ABFS.
> Consecutive runs of the test showed seemingly random failures. Stack trace in 
> the comments.
> Proposal:
> - Change the assertion of the test so that it passes, since the production 
> code seems fine.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17025) Fix invalid metastore configuration in S3GuardTool tests

2020-05-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101337#comment-17101337
 ] 

Hudson commented on HADOOP-17025:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18225 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18225/])
HADOOP-17025. Fix invalid metastore configuration in S3GuardTool tests. 
(github: rev 99840aaba662e7bb6187206b74e35132102a3b38)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java


> Fix invalid metastore configuration in S3GuardTool tests
> 
>
> Key: HADOOP-17025
> URL: https://issues.apache.org/jira/browse/HADOOP-17025
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
>
> The WARN message shown in the S3GuardTool tests implies a mismatch between the 
> property name and its value.
> {noformat}
> 2020-05-02 11:57:44,266 [setup] WARN  conf.Configuration 
> (Configuration.java:getBoolean(1694)) - Invalid value for boolean: 
> org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore, choose default value: 
> false for fs.s3a.metadatastore.authoritative
> {noformat}
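The WARN above reflects the lenient boolean lookup in Hadoop's `Configuration`: when a key holds a value that is neither "true" nor "false" (here, a metastore class name assigned to the wrong property), the default wins. A minimal sketch of that behavior (illustrative, not the actual `Configuration.getBoolean` source):

```java
public class BooleanConfDemo {
    // Sketch of a lenient boolean lookup: a value that is neither "true"
    // nor "false" logs a warning and falls back to the default.
    static boolean getBoolean(String rawValue, boolean defaultValue) {
        if (rawValue == null) {
            return defaultValue;
        }
        String value = rawValue.trim().toLowerCase();
        if ("true".equals(value)) {
            return true;
        }
        if ("false".equals(value)) {
            return false;
        }
        System.err.println("Invalid value for boolean: " + rawValue
            + ", choose default value: " + defaultValue);
        return defaultValue;
    }

    public static void main(String[] args) {
        // A class name accidentally assigned to a boolean key: default wins.
        System.out.println(getBoolean(
            "org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore", false));
    }
}
```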



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17011) Tolerate leading and trailing spaces in fs.defaultFS

2020-04-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17097003#comment-17097003
 ] 

Hudson commented on HADOOP-17011:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18204 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18204/])
HADOOP-17011. Tolerate leading and trailing spaces in fs.defaultFS. (liuml07: 
rev 263c76b678275dfff867415c71ba9dc00a9235ef)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsCommand.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/FrameworkUploader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
* (edit) 
hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/ClusterSummarizer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServerWebApp.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeUtils.java


> Tolerate leading and trailing spaces in fs.defaultFS
> 
>
> Key: HADOOP-17011
> URL: https://issues.apache.org/jira/browse/HADOOP-17011
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Ctest
>Assignee: Ctest
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-17011-001.patch, HADOOP-17011-002.patch, 
> HADOOP-17011-003.patch, HADOOP-17011-004.patch, HADOOP-17011-005.patch
>
>
> *Problem:*
> Currently, `getDefaultUri` is using `conf.get` to get the value of 
> `fs.defaultFS`, which means that the trailing whitespace after a valid URI 
> won’t be removed and could stop namenode and datanode from starting up.
>  
> *How to reproduce (Hadoop-2.8.5):*
> Set the configuration
> {code:java}
> <property>
>  <name>fs.defaultFS</name>
>  <value>hdfs://localhost:9000 </value>
> </property>
> {code}
> in core-site.xml (there is a whitespace after 9000) and start HDFS.
> Namenode and datanode won’t start and the log message is:
> {code:java}
> 2020-04-23 11:09:48,198 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> java.lang.IllegalArgumentException: Illegal character in authority at index 
> 7: hdfs://localhost:9000 
> at java.net.URI.create(URI.java:852)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.setClientNamenodeAddress(NameNode.java:440)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:897)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:885)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1626)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1694)
> Caused by: java.net.URISyntaxException: Illegal character in authority at 
> index 7: hdfs://localhost:9000 
> at java.net.URI$Parser.fail(URI.java:2848)
> at java.net.URI$Parser.parseAuthority(URI.java:3186)
> at java.net.URI$Parser.parseHierarchical(URI.java:3097)
> at java.net.URI$Parser.parse(URI.java:3053)
> at java.net.URI.<init>(URI.java:588)
> at java.net.URI.create(URI.java:850)
> ... 5 more
> {code}
>  
> *Solution:*
> Use `getTrimmed` instead of `get` for `fs.defaultFS`:
> {code:java}
> public static URI getDefaultUri(Configuration conf) {
>   URI uri =
> URI.create(fixName(conf.getTrimmed(FS_DEFAULT_NAME_KEY, DEFAULT_FS)));
>   if (uri.getScheme() == null) {
> throw new IllegalArgumentException("No scheme in default FS: " + uri);
>   }
>   return uri;
> }
> {code}
> I have submitted a patch for trunk about this.
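The failure mode and the fix are easy to reproduce with plain `java.net.URI`: the untrimmed value throws exactly the `IllegalArgumentException` quoted above, while trimming first (the effect of `conf.getTrimmed`) parses cleanly.

```java
import java.net.URI;

public class DefaultFsTrimDemo {
    public static void main(String[] args) {
        String raw = "hdfs://localhost:9000 "; // trailing space, as in the report

        // Untrimmed value: URI.create rejects the space in the authority.
        try {
            URI.create(raw);
            System.out.println("unexpected: parsed untrimmed value");
        } catch (IllegalArgumentException e) {
            System.out.println("untrimmed fails: " + e.getMessage());
        }

        // Trimmed value (what conf.getTrimmed would return) parses fine.
        URI uri = URI.create(raw.trim());
        System.out.println("trimmed ok, scheme = " + uri.getScheme());
    }
}
```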



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16957) NodeBase.normalize doesn't removing all trailing slashes.

2020-04-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17096604#comment-17096604
 ] 

Hudson commented on HADOOP-16957:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18203 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18203/])
HADOOP-16957. NodeBase.normalize doesn't removing all trailing slashes. 
(ayushsaxena: rev 6bdab3723eff78c79aa48c24aad87373b983fe6c)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NodeBase.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestClusterTopology.java


> NodeBase.normalize doesn't removing all trailing slashes.
> -
>
> Key: HADOOP-16957
> URL: https://issues.apache.org/jira/browse/HADOOP-16957
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-16957-01.patch
>
>
> As per javadoc 
> /** Normalize a path by stripping off any trailing {@link #PATH_SEPARATOR}
> But it removes only one.
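A normalize that honors the javadoc would strip every trailing separator, not just the last one. A minimal sketch (hypothetical helper, not the actual NodeBase code or necessarily the committed fix):

```java
public class NormalizeDemo {
    // Strip ALL trailing path separators, matching the javadoc quoted above.
    static String normalize(String path) {
        if (path == null || path.isEmpty()) {
            return path;
        }
        // "/rack1//" -> "/rack1"; a path with no trailing slash is unchanged.
        return path.replaceAll("/+$", "");
    }

    public static void main(String[] args) {
        System.out.println(normalize("/rack1//"));
        System.out.println(normalize("/rack1/"));
        System.out.println(normalize("/rack1"));
    }
}
```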



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17010) Add queue capacity weights support in FairCallQueue

2020-04-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094926#comment-17094926
 ] 

Hudson commented on HADOOP-17010:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18195 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18195/])
HADOOP-17010. Add queue capacity support for FairCallQueue (#1977) (github: rev 
4202750040f91f8dcc218ecc7d3ccf81a8e68b2a)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* (edit) hadoop-common-project/hadoop-common/src/site/markdown/FairCallQueue.md
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestFairCallQueue.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/FairCallQueue.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java


> Add queue capacity weights support in FairCallQueue
> ---
>
> Key: HADOOP-17010
> URL: https://issues.apache.org/jira/browse/HADOOP-17010
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HADOOP-17010.001.patch, HADOOP-17010.002.patch
>
>
> Right now in FairCallQueue all subqueues have the same capacity, since the 
> total capacity is distributed evenly. This feature makes it possible to give 
> subqueues different capacities, so that more important queues can get more 
> capacity and thus suffer less queue overflow and fewer client backoffs.
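The core idea can be sketched as splitting a total capacity across subqueues in proportion to integer weights (an illustrative sketch, not the actual FairCallQueue implementation; the remainder-handling policy here is an assumption):

```java
import java.util.Arrays;

public class WeightedCapacityDemo {
    // Split totalCapacity across subqueues proportionally to their weights;
    // any rounding remainder is handed out one slot at a time from the front.
    static int[] splitByWeight(int totalCapacity, int[] weights) {
        int weightSum = Arrays.stream(weights).sum();
        int[] capacities = new int[weights.length];
        int assigned = 0;
        for (int i = 0; i < weights.length; i++) {
            capacities[i] = totalCapacity * weights[i] / weightSum;
            assigned += capacities[i];
        }
        for (int i = 0; assigned < totalCapacity; i++, assigned++) {
            capacities[i % weights.length]++; // distribute rounding remainder
        }
        return capacities;
    }

    public static void main(String[] args) {
        // The high-priority queue gets 3x the capacity of the low-priority one.
        System.out.println(Arrays.toString(splitByWeight(100, new int[]{3, 1})));
        // → [75, 25]
    }
}
```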



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17007) hadoop-cos fails to build

2020-04-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17092572#comment-17092572
 ] 

Hudson commented on HADOOP-17007:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18185 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18185/])
HADOOP-17007. hadoop-cos fails to build. Contributed by Yang Yu. (ayushsaxena: 
rev 85516a8af7ee990e7eb00fab6d9f050df2a8170e)
* (edit) hadoop-cloud-storage-project/hadoop-cos/pom.xml


> hadoop-cos fails to build
> -
>
> Key: HADOOP-17007
> URL: https://issues.apache.org/jira/browse/HADOOP-17007
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/cos
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Yang Yu
>Priority: Major
>  Labels: release-blocker
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-17007.001.patch
>
>
> Found the following compilation error in a PR precommit. The failure doesn't 
> seem related to the PR itself. Can't reproduce it locally, though.
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1972/1/artifact/out/patch-compile-root.txt
> {noformat}
> [INFO] Apache Hadoop Tencent COS Support .. FAILURE [  0.074 
> s]
> [INFO] Apache Hadoop Cloud Storage  SKIPPED
> [INFO] Apache Hadoop Cloud Storage Project  SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 17:31 min
> [INFO] Finished at: 2020-04-22T07:37:51+00:00
> [INFO] Final Memory: 192M/1714M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-dependency-plugin:3.0.2:copy-dependencies 
> (package) on project hadoop-cos: Artifact has not been packaged yet. When 
> used on reactor artifact, copy should be executed after packaging: see 
> MDEP-187. -> [Help 1]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16054) Update Dockerfile to use Bionic

2020-04-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17092336#comment-17092336
 ] 

Hudson commented on HADOOP-16054:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18182 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18182/])
HADOOP-16054. Update Dockerfile to use Bionic (#1966) (github: rev 
81d8b71534645a2109a037115fb955351edfbf64)
* (edit) dev-support/docker/Dockerfile


> Update Dockerfile to use Bionic
> ---
>
> Key: HADOOP-16054
> URL: https://issues.apache.org/jira/browse/HADOOP-16054
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.4.0
>
>
> Ubuntu Xenial goes EoL in April 2021. Let's upgrade before that date.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16886) Add hadoop.http.idle_timeout.ms to core-default.xml

2020-04-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17092124#comment-17092124
 ] 

Hudson commented on HADOOP-16886:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18181 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18181/])
HADOOP-16886. Add hadoop.http.idle_timeout.ms to core-default.xml. 
(ayushsaxena: rev ef9a6e775c136b2a591a76d5e34d07974a356e0d)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> Add hadoop.http.idle_timeout.ms to core-default.xml
> ---
>
> Key: HADOOP-16886
> URL: https://issues.apache.org/jira/browse/HADOOP-16886
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0, 3.0.4, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16886-001.patch, HADOOP-16886.002.patch
>
>
> HADOOP-15696 made the http server connection idle time configurable  
> (hadoop.http.idle_timeout.ms).
> This configuration key was added to kms-default.xml and httpfs-default.xml, but 
> we missed it in core-default.xml. We should add it there because NNs/JNs/DNs 
> use it too.
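The missing core-default.xml entry would look roughly like the following (the value and description here are assumptions for illustration; the committed entry should mirror the defaults already shipped in kms-default.xml/httpfs-default.xml):

```xml
<property>
  <name>hadoop.http.idle_timeout.ms</name>
  <value>60000</value>
  <description>
    HTTP server connection idle timeout in milliseconds, used by the
    embedded web servers of daemons such as NameNodes, JournalNodes and
    DataNodes.
  </description>
</property>
```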



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17002) ABFS: Avoid storage calls to check if the account is HNS enabled or not

2020-04-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091068#comment-17091068
 ] 

Hudson commented on HADOOP-17002:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18177 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18177/])
HADOOP-17002. ABFS: Adding config to determine if the account is HNS (github: 
rev 30ef8d0f1a1463931fe581a46c739dad4c8260e4)
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/enums/Trilean.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/enums/package-info.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TrileanTests.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/TrileanConversionException.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
* (edit) hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestGetNameSpaceEnabled.java


> ABFS: Avoid storage calls to check if the account is HNS enabled or not
> ---
>
> Key: HADOOP-17002
> URL: https://issues.apache.org/jira/browse/HADOOP-17002
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.4.0
>
>
> Each time an FS instance is created, a getAcl call is made. If the call fails 
> with 400 Bad Request, the account is determined to be a non-HNS account. 
> The recommendation is to add a config so that store calls to determine the 
> account's HNS status can be avoided.
> If the config is set, use it to determine the account's HNS status; if it 
> is not present in core-site, the default behaviour remains calling getAcl. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16914) Adding Output Stream Counters in ABFS

2020-04-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090593#comment-17090593
 ] 

Hudson commented on HADOOP-16914:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18175 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18175/])
HADOOP-16914 Adding Output Stream Counters in ABFS (#1899) (github: rev 
459eb2ad6d5bc6b21462e728fb334c6e30e14c39)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStreamContext.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStreamStatisticsImpl.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsOutputStreamStatistics.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStreamStatistics.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStreamStatistics.java


> Adding Output Stream Counters in ABFS
> -
>
> Key: HADOOP-16914
> URL: https://issues.apache.org/jira/browse/HADOOP-16914
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>
> AbfsOutputStream does not have any counters that can be populated or referred 
> to when needed for finding bottlenecks in that area.
> Purpose:
>  * Create an interface and implementation class for all the AbfsOutputStream 
> counters.
>  * Populate the counters in AbfsOutputStream in appropriate places.
>  * Override the toString() to see counters in logs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17001) The suffix name of the unified compression class

2020-04-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17089997#comment-17089997
 ] 

Hudson commented on HADOOP-17001:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18173 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18173/])
HADOOP-17001. The suffix name of the unified compression class. (liuml07: rev 
af85971a5842e47cf94b6e48de3091a8723b0eb3)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/PassthroughCodec.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/BZip2Codec.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/DefaultCodec.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CodecConstants.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/Lz4Codec.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/SnappyCodec.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/ZStandardCodec.java


> The suffix name of the unified compression class
> 
>
> Key: HADOOP-17001
> URL: https://issues.apache.org/jira/browse/HADOOP-17001
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.2.1
>Reporter: bianqi
>Assignee: bianqi
>Priority: Major
> Attachments: HADOOP-17001.003.patch, HADOOP-17001.004.patch, 
> HADOOP-17001.005.patch
>
>
> I think the suffix (extension) names used by the compression codec classes 
> should be extracted into a constants class, which would help developers 
> understand the structure of the compression classes as a whole.
> {quote}public static final String OPT_EXTENSION =
>  "io.compress.passthrough.extension";
> /**
>  * This default extension is here so that if no extension has been defined,
>  * some value is still returned: \{@value}..
>  */
> public static final String DEFAULT_EXTENSION = ".passthrough";
> private Configuration conf;
> private String extension = DEFAULT_EXTENSION;
> public PassthroughCodec() {
> }
> {quote}
> In the above code, the use of constants is a bit messy.
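The refactor the issue proposes amounts to gathering all default codec extensions in one place. A sketch of such a constants class (the names and values here are illustrative and not necessarily those committed in CodecConstants):

```java
// All codec default extensions in one place, instead of being scattered
// across the individual codec classes.
public final class CodecExtensionConstants {
    private CodecExtensionConstants() {
        // constants holder, never instantiated
    }

    public static final String DEFAULT_CODEC_EXTENSION = ".deflate";
    public static final String GZIP_CODEC_EXTENSION = ".gz";
    public static final String BZIP2_CODEC_EXTENSION = ".bz2";
    public static final String SNAPPY_CODEC_EXTENSION = ".snappy";
    public static final String LZ4_CODEC_EXTENSION = ".lz4";
    public static final String ZSTANDARD_CODEC_EXTENSION = ".zst";
    public static final String PASSTHROUGH_CODEC_EXTENSION = ".passthrough";
}
```

Each codec class would then reference its constant instead of declaring its own extension string.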



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16922) ABFS: Change in User-Agent header

2020-04-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17089015#comment-17089015
 ] 

Hudson commented on HADOOP-16922:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18171 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18171/])
HADOOP-16922. ABFS: Change User-Agent header (#1938) (github: rev 
264e49c8f2cfd15826655bbc1847f378f60ad8c7)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/AbfsHttpConstants.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClient.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java


> ABFS: Change in User-Agent header
> -
>
> Key: HADOOP-16922
> URL: https://issues.apache.org/jira/browse/HADOOP-16922
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.4.0
>
>
> * Add more information to the User-Agent header, such as cluster name, 
> cluster type, Java vendor, etc.
> * Add APN/1.0 at the beginning
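A minimal sketch of assembling such a User-Agent string, with the APN/1.0 prefix followed by JVM and cluster details. The method name, parameters, and exact format are illustrative assumptions, not the actual ABFS code:

```java
public final class UserAgentExample {
    // Hypothetical helper: builds a User-Agent value of the shape the
    // issue describes. All field ordering here is an assumption.
    static String buildUserAgent(String sdkVersion, String clusterName,
                                 String clusterType) {
        String javaVendor = System.getProperty("java.vendor");
        String javaVersion = System.getProperty("java.version");
        return String.format("APN/1.0 azsdk-java-azure-storage/%s (%s %s; %s; %s)",
                sdkVersion, javaVendor, javaVersion, clusterName, clusterType);
    }

    public static void main(String[] args) {
        System.out.println(buildUserAgent("1.0.0", "mycluster", "spark"));
    }
}
```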






[jira] [Commented] (HADOOP-16965) Introduce StreamContext for Abfs Input and Output streams.

2020-04-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17089014#comment-17089014
 ] 

Hudson commented on HADOOP-16965:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18171 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18171/])
HADOOP-16965. Refactor abfs stream configuration. (#1956) (github: rev 
8031c66295b530dcaae9e00d4f656330bc3b3952)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStreamContext.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStreamContext.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsStreamContext.java


> Introduce StreamContext for Abfs Input and Output streams.
> --
>
> Key: HADOOP-16965
> URL: https://issues.apache.org/jira/browse/HADOOP-16965
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>
> The number of configuration options keeps growing in AbfsOutputStream and 
> AbfsInputStream as we keep adding new features. It is time to refactor the 
> configurations into a separate class such as StreamContext and pass that 
> around. This will improve the readability of the code and reduce 
> cherry-pick/backport pain. 
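The refactoring described above can be sketched as a context object that is built once and handed to the stream, instead of an ever-growing constructor parameter list. Class, method, and field names here are illustrative, not the actual ABFS classes:

```java
// Hypothetical stream context with a fluent builder-style API; adding a
// new setting later only touches this class, not every stream constructor.
final class OutputStreamContext {
    private int writeBufferSize;
    private boolean flushEnabled;

    OutputStreamContext withWriteBufferSize(int size) {
        this.writeBufferSize = size;
        return this;
    }

    OutputStreamContext enableFlush(boolean enabled) {
        this.flushEnabled = enabled;
        return this;
    }

    int getWriteBufferSize() { return writeBufferSize; }

    boolean isFlushEnabled() { return flushEnabled; }
}

public final class StreamContextExample {
    public static void main(String[] args) {
        OutputStreamContext ctx = new OutputStreamContext()
            .withWriteBufferSize(8 * 1024 * 1024)
            .enableFlush(true);
        // A stream constructor taking (store, path, ctx) then stays stable
        // as settings accumulate in the context.
        System.out.println(ctx.getWriteBufferSize());
    }
}
```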






[jira] [Commented] (HADOOP-16953) HADOOP-16953. tune s3guard disabled warnings

2020-04-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087781#comment-17087781
 ] 

Hudson commented on HADOOP-16953:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18167 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18167/])
HADOOP-16953. tuning s3guard disabled warnings (#1962) (github: rev 
93b662db47aa4e9bd0e2cecabddf949c0fea19f2)
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestS3Guard.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java


> HADOOP-16953. tune s3guard disabled warnings
> 
>
> Key: HADOOP-16953
> URL: https://issues.apache.org/jira/browse/HADOOP-16953
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.1
>
>
> The config option org.apache.hadoop.fs.s3a.s3guard.disabled.warn.level 
> should be fs.s3a.s3guard.disabled.warn.level.
> We need to fix that and keep the existing key as a deprecated alias.
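The pattern the fix needs is: read the correctly named key first, then fall back to the old key. In Hadoop itself this is normally wired up with Configuration.addDeprecation; the Map-based lookup below is just a self-contained illustration of the precedence, not the actual S3Guard code:

```java
import java.util.Map;

public final class DeprecatedKeyExample {
    static final String NEW_KEY = "fs.s3a.s3guard.disabled.warn.level";
    static final String OLD_KEY =
        "org.apache.hadoop.fs.s3a.s3guard.disabled.warn.level";

    // New key wins; deprecated key is consulted only as a fallback.
    static String warnLevel(Map<String, String> conf, String defVal) {
        if (conf.containsKey(NEW_KEY)) {
            return conf.get(NEW_KEY);
        }
        return conf.getOrDefault(OLD_KEY, defVal);
    }

    public static void main(String[] args) {
        // Only the old key is set, so the deprecated fallback is used.
        System.out.println(warnLevel(Map.of(OLD_KEY, "WARN"), "SILENT"));
    }
}
```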






[jira] [Commented] (HADOOP-16986) s3a to not need wildfly on the classpath

2020-04-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087749#comment-17087749
 ] 

Hudson commented on HADOOP-16986:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18166 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18166/])
HADOOP-16986. S3A to not need wildfly on the classpath. (#1948) (github: rev 
42711081e3cba5835493b5cbedc23d16dfea7667)
* (edit) 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/performance.md
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/DelegatingSSLSocketFactory.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestWildflyAndOpenSSLBinding.java
* (edit) 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/NetworkBinding.java


> s3a to not need wildfly on the classpath
> 
>
> Key: HADOOP-16986
> URL: https://issues.apache.org/jira/browse/HADOOP-16986
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> See https://github.com/apache/hadoop/pull/1948 and HADOOP-16855:
> * remove the hard dependency on wildfly.jar being on the classpath for S3A; 
> it is used if present, but its absence is handled gracefully
> * even if openssl is requested
> * NPEs are caught and swallowed in case wildfly 1.0.4.Final ever gets on 
> the classpath again
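The "optional dependency" pattern described above amounts to probing for a class by name and degrading gracefully when it is absent, rather than hard-failing at load time. This standalone sketch is illustrative, not Hadoop's actual NetworkBinding code:

```java
public final class OptionalClasspathExample {
    // Probe without initializing the class, so a present-but-broken jar
    // does not run static initializers during the check.
    static boolean isOnClasspath(String className) {
        try {
            Class.forName(className, false,
                OptionalClasspathExample.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException | LinkageError e) {
            return false;  // absent or unlinkable: fall back to default JSSE
        }
    }

    public static void main(String[] args) {
        System.out.println(isOnClasspath("org.wildfly.openssl.OpenSSLProvider"));
    }
}
```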






[jira] [Commented] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087588#comment-17087588
 ] 

Hudson commented on HADOOP-16959:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18165 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18165/])
HADOOP-16959. Resolve hadoop-cos dependency conflict. Contributed by 
(sammichen: rev 82ff7bc9abc8f3ad549db898953d98ef142ab02d)
* (edit) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/CosNFileReadTask.java
* (delete) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/auth/COSCredentialProviderList.java
* (delete) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/auth/SimpleCredentialProvider.java
* (add) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/auth/EnvironmentVariableCredentialsProvider.java
* (edit) 
hadoop-cloud-storage-project/hadoop-cos/dev-support/findbugs-exclude.xml
* (add) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/auth/AbstractCOSCredentialsProvider.java
* (edit) hadoop-project/pom.xml
* (edit) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/CosNUtils.java
* (edit) 
hadoop-cloud-storage-project/hadoop-cos/src/site/markdown/cloud-storage/index.md
* (delete) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/auth/EnvironmentVariableCredentialProvider.java
* (edit) hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
* (add) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/auth/SimpleCredentialsProvider.java
* (edit) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/CosNativeFileSystemStore.java
* (edit) hadoop-cloud-storage-project/hadoop-cos/pom.xml
* (add) 
hadoop-cloud-storage-project/hadoop-cos/src/test/java/org/apache/hadoop/fs/cosn/TestCosCredentials.java
* (edit) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/BufferPool.java
* (add) 
hadoop-cloud-storage-project/hadoop-cos/src/main/java/org/apache/hadoop/fs/cosn/auth/COSCredentialsProviderList.java


> Resolve hadoop-cos dependency conflict
> --
>
> Key: HADOOP-16959
> URL: https://issues.apache.org/jira/browse/HADOOP-16959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Attachments: HADOOP-16959-branch-3.3.001.patch, 
> HADOOP-16959-branch-3.3.002.patch, HADOOP-16959-branch-3.3.003.patch, 
> HADOOP-16959-branch-3.3.004.patch, HADOOP-16959-branch-3.3.005.patch
>
>
> There are some dependency conflicts between hadoop-common and hadoop-cos, 
> for example the Joda-Time and HTTP client libraries.






[jira] [Commented] (HADOOP-16971) TestFileContextResolveAfs#testFileContextResolveAfs creates dangling link and fails for subsequent runs

2020-04-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087181#comment-17087181
 ] 

Hudson commented on HADOOP-16971:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18164 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18164/])
HADOOP-16971. TestFileContextResolveAfs#testFileContextResolveAfs (ayushsaxena: 
rev 79e03fb622f824053df6cc4c973d6723659adc46)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileContextResolveAfs.java


> TestFileContextResolveAfs#testFileContextResolveAfs creates dangling link and 
> fails for subsequent runs
> ---
>
> Key: HADOOP-16971
> URL: https://issues.apache.org/jira/browse/HADOOP-16971
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, fs, test
>Affects Versions: 3.2.1, 3.4.0
>Reporter: Ctest
>Assignee: Ctest
>Priority: Minor
>  Labels: easyfix, fs, symlink, test
> Fix For: 3.4.0
>
> Attachments: HADOOP-16971.000.patch
>
>
> In the test testFileContextResolveAfs, the symlink TestFileContextResolveAfs2 
> (linked to TestFileContextResolveAfs1) cannot be deleted when the test 
> finishes.
> This is because TestFileContextResolveAfs1 was always deleted before 
> TestFileContextResolveAfs2 when they were both passed into 
> FileSystem#deleteOnExit. This caused TestFileContextResolveAfs2 to become a 
> dangling link, which FileSystem in Hadoop currently cannot delete. (This is 
> because Files#exists will return false for dangling links.)
> As a result, the test testFileContextResolveAfs only passes on the first 
> run; subsequent runs fail with the following exception: 
> {code:java}
> fs.FileUtil (FileUtil.java:symLink(821)) - Command 'ln -s 
> mypath/TestFileContextResolveAfs1 mypath/TestFileContextResolveAfs2' failed 1 
> with: ln: mypath/TestFileContextResolveAfs2: File exists
> java.io.IOException: Error 1 creating symlink 
> file:mypath/TestFileContextResolveAfs2 to mypath/TestFileContextResolveAfs1
> {code}
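The failure mode can be reproduced with plain java.nio: Files.exists follows symlinks by default, so a dangling link appears "absent" and a naive exists-then-delete skips it, while checking with NOFOLLOW_LINKS sees the link itself. This is a standalone illustration, not Hadoop's FileSystem code, and it assumes a platform that permits creating symlinks:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;

public final class DanglingLinkExample {
    // Returns {exists-following-links, exists-without-following-links}
    // for a deliberately dangling symlink.
    static boolean[] probeDanglingLink() throws IOException {
        Path dir = Files.createTempDirectory("afs-test");
        Path target = dir.resolve("TestFileContextResolveAfs1");
        Path link = dir.resolve("TestFileContextResolveAfs2");
        Files.createFile(target);
        Files.createSymbolicLink(link, target);
        Files.delete(target);  // the link now dangles

        boolean follows = Files.exists(link);  // false: resolves the target
        boolean noFollow = Files.exists(link, LinkOption.NOFOLLOW_LINKS);  // true
        Files.deleteIfExists(link);  // delete operates on the link itself
        Files.deleteIfExists(dir);
        return new boolean[] {follows, noFollow};
    }

    public static void main(String[] args) throws IOException {
        boolean[] r = probeDanglingLink();
        System.out.println(r[0] + " " + r[1]);
    }
}
```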






[jira] [Commented] (HADOOP-16944) Use Yetus 0.12.0-SNAPSHOT for precommit jobs

2020-04-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086566#comment-17086566
 ] 

Hudson commented on HADOOP-16944:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18162 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18162/])
HADOOP-16944. Use Yetus 0.12.0 in GitHub PR (#1917) (github: rev 
5576915236aba172cb5ab49b43111661590058af)
* (edit) Jenkinsfile


> Use Yetus 0.12.0-SNAPSHOT for precommit jobs
> 
>
> Key: HADOOP-16944
> URL: https://issues.apache.org/jira/browse/HADOOP-16944
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.4.0
>
>
> HADOOP-16054 wants to upgrade the ubuntu version of the docker image from 
> 16.04 to 18.04. However, Ubuntu 18.04 ships Maven 3.6.0 by default, and 
> with it the pre-commit jobs fail to add comments to GitHub and JIRA. The 
> issue was fixed by YETUS-957, so upgrading to Yetus 0.12.0-SNAPSHOT (or 
> 0.12.0, once released) will resolve the problem.
> How to upgrade Yetus version in the pre-commit jobs:
> * GitHub PR (hadoop-multibranch): Upgrade Jenkinsfile
> * JIRA (PreCommit--Build): Manually update the config in builds.apache.org






[jira] [Commented] (HADOOP-16972) Ignore AuthenticationFilterInitializer for KMSWebServer

2020-04-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086097#comment-17086097
 ] 

Hudson commented on HADOOP-16972:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18157 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18157/])
HADOOP-16972. Ignore AuthenticationFilterInitializer for KMSWebServer. (github: 
rev ac40daece17e9a6339927dbcadab76034bd7882c)
* (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebServer.java
* (edit) 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java


> Ignore AuthenticationFilterInitializer for KMSWebServer
> ---
>
> Key: HADOOP-16972
> URL: https://issues.apache.org/jira/browse/HADOOP-16972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Blocker
> Fix For: 3.3.0
>
>
> KMS does not work if hadoop.http.filter.initializers is set to 
> AuthenticationFilterInitializer since KMS uses its own authentication filter. 
> This is problematic when KMS is on the same node as other Hadoop services 
> and shares core-site.xml with them. The filter initializers configuration 
> should be tweaked as done for httpfs in HDFS-14845.
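One way to picture the tweak described: drop AuthenticationFilterInitializer from the hadoop.http.filter.initializers list before starting the embedded web server, since KMS installs its own authentication filter. The constant mirrors the real initializer class name, but the filtering helper itself is an illustrative sketch, not the actual KMSWebServer code:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public final class FilterInitializerExample {
    static final String AUTH_INITIALIZER =
        "org.apache.hadoop.security.AuthenticationFilterInitializer";

    // Remove the incompatible initializer from a comma-separated list,
    // leaving all other configured initializers in place.
    static String stripAuthInitializer(String initializers) {
        return Arrays.stream(initializers.split(","))
            .map(String::trim)
            .filter(c -> !c.isEmpty() && !c.equals(AUTH_INITIALIZER))
            .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        System.out.println(stripAuthInitializer(
            AUTH_INITIALIZER + ",org.example.OtherInitializer"));
    }
}
```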






[jira] [Commented] (HADOOP-16951) Tidy Up Text and ByteWritables Classes

2020-04-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085871#comment-17085871
 ] 

Hudson commented on HADOOP-16951:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18155 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18155/])
HADOOP-16951: Tidy Up Text and ByteWritables Classes. (github: rev 
eca05917d60f8a06f2a04815db818a7d3afbd2ce)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/BytesWritable.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestText.java


> Tidy Up Text and ByteWritables Classes
> --
>
> Key: HADOOP-16951
> URL: https://issues.apache.org/jira/browse/HADOOP-16951
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.4.0
>
>
> # Remove superfluous code
>  # Remove superfluous comments
>  # Checkstyle fixes
>  # Remove methods that simply call {{super}}.method()
>  # Use Java 8 facilities to streamline code where applicable
>  # Simplify and unify some of the constructs between the two classes
>  
> The one meaningful change is that I am suggesting the arrays grow by 1.5x 
> instead of 2x per expansion. I pulled this idea from OpenJDK.
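The growth policy change above can be sketched in a few lines: growing a backing array by 1.5x rather than doubling trades a few more reallocations for less over-allocation (OpenJDK's ArrayList uses the same 1.5x factor). Method names here are illustrative, not the actual Text/BytesWritable code:

```java
public final class GrowthExample {
    // Next capacity: oldCapacity + oldCapacity/2, i.e. 1.5x (floored).
    static int grow15(int oldCapacity) {
        return oldCapacity + (oldCapacity >> 1);
    }

    public static void main(String[] args) {
        int cap = 16;
        for (int i = 0; i < 4; i++) {
            cap = grow15(cap);
        }
        // 16 -> 24 -> 36 -> 54 -> 81
        System.out.println(cap);
    }
}
```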





