[jira] [Commented] (HADOOP-16867) [thirdparty] Add shaded JaegerTracer

2020-02-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17041600#comment-17041600
 ] 

Hudson commented on HADOOP-16867:
-

SUCCESS: Integrated in Jenkins build Hadoop-thirdparty-trunk-commit #5 (See 
[https://builds.apache.org/job/Hadoop-thirdparty-trunk-commit/5/])
HADOOP-16867. [thirdparty] Add shaded JaegerTracer (#5) (github: rev 
ccb7ecae5f05765d410645fbdea9ff31698d647d)
* (add) hadoop-shaded-jaeger/pom.xml
* (edit) pom.xml


> [thirdparty] Add shaded JaegerTracer
> 
>
> Key: HADOOP-16867
> URL: https://issues.apache.org/jira/browse/HADOOP-16867
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Add artifact {{hadoop-shaded-jaeger}} to {{hadoop-thirdparty}} for 
> OpenTracing work in HADOOP-15566.
> CC [~weichiu]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16869) mvn findbugs:findbugs fails

2020-02-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17041508#comment-17041508
 ] 

Hudson commented on HADOOP-16869:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17972 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17972/])
HADOOP-16869. Upgrade findbugs-maven-plugin to 3.0.5 to fix mvn (github: rev 
7f35676f90f0730b8c9844cf00ee5a943f80d48d)
* (edit) hadoop-project/pom.xml


> mvn findbugs:findbugs fails
> ---
>
> Key: HADOOP-16869
> URL: https://issues.apache.org/jira/browse/HADOOP-16869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
>
> mvn findbugs:findbugs is failing:
> {noformat}
> [ERROR] Failed to execute goal 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs (default-cli) on 
> project hadoop-project: Unable to parse configuration of mojo 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs for parameter 
> pluginArtifacts: Cannot assign configuration entry 'pluginArtifacts' with 
> value '${plugin.artifacts}' of type 
> java.util.Collections.UnmodifiableRandomAccessList to property of type 
> java.util.ArrayList -> [Help 1]
>  {noformat}
> We have to update the version of findbugs-maven-plugin.
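
For readers puzzled by the error text, here is a minimal Java illustration of the
type mismatch it describes (the class and values are made up for the sketch; this
is not Maven's or the plugin's actual code):

```
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class PluginArtifactsMismatchSketch {
  // findbugs-maven-plugin 3.0.0 declares its pluginArtifacts parameter with the
  // concrete type ArrayList (per the error message above).
  static ArrayList<String> pluginArtifacts;

  public static void main(String[] args) {
    // ${plugin.artifacts} is injected as an unmodifiable List whose runtime class
    // is java.util.Collections$UnmodifiableRandomAccessList, not an ArrayList.
    List<String> injected =
        Collections.unmodifiableList(Arrays.asList("artifact-a", "artifact-b"));

    // pluginArtifacts = injected;  // incompatible types: a List is not an ArrayList
    pluginArtifacts = new ArrayList<>(injected); // copying (or declaring List) avoids the mismatch
  }
}
```

Upgrading findbugs-maven-plugin to 3.0.5, as the commit above does, is the actual fix.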



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16869) mvn findbugs:findbugs fails

2020-02-20 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16869:
---
Fix Version/s: 3.2.2
   3.1.4
   3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged the PR into trunk, branch-3.2, and branch-3.1.

> mvn findbugs:findbugs fails
> ---
>
> Key: HADOOP-16869
> URL: https://issues.apache.org/jira/browse/HADOOP-16869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
>
> mvn findbugs:findbugs is failing:
> {noformat}
> [ERROR] Failed to execute goal 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs (default-cli) on 
> project hadoop-project: Unable to parse configuration of mojo 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs for parameter 
> pluginArtifacts: Cannot assign configuration entry 'pluginArtifacts' with 
> value '${plugin.artifacts}' of type 
> java.util.Collections.UnmodifiableRandomAccessList to property of type 
> java.util.ArrayList -> [Help 1]
>  {noformat}
> We have to update the version of findbugs-maven-plugin.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka merged pull request #1855: HADOOP-16869. Upgrade findbugs-maven-plugin to 3.0.5 to fix mvn findbugs:findbugs failure

2020-02-20 Thread GitBox
aajisaka merged pull request #1855: HADOOP-16869. Upgrade findbugs-maven-plugin 
to 3.0.5 to fix mvn findbugs:findbugs failure
URL: https://github.com/apache/hadoop/pull/1855
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on issue #1855: HADOOP-16869. Upgrade findbugs-maven-plugin to 3.0.5 to fix mvn findbugs:findbugs failure

2020-02-20 Thread GitBox
aajisaka commented on issue #1855: HADOOP-16869. Upgrade findbugs-maven-plugin 
to 3.0.5 to fix mvn findbugs:findbugs failure
URL: https://github.com/apache/hadoop/pull/1855#issuecomment-589479655
 
 
   The shaded client build error is not related to the patch.
   
https://builds.apache.org/job/hadoop-multibranch/job/PR-1855/1/artifact/out/patch-shadedclient.txt
   ```
   [ERROR] Failed to execute goal 
org.xolstice.maven.plugins:protobuf-maven-plugin:0.5.1:compile 
(src-compile-protoc) on project hadoop-mapreduce-client-common: An error 
occurred while invoking protoc. Error while executing process. Cannot run 
program 
"/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1855/src/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/target/protoc-plugins/protoc-3.7.1-linux-x86_64.exe":
 error=11, Resource temporarily unavailable -> [Help 1]
   ```
   Merging this. Thanks @iwasakims and @ayushtkn 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims commented on issue #1758: HDFS-15052. WebHDFS getTrashRoot leads to OOM due to FileSystem objec…

2020-02-20 Thread GitBox
iwasakims commented on issue #1758: HDFS-15052. WebHDFS getTrashRoot leads to 
OOM due to FileSystem objec…
URL: https://github.com/apache/hadoop/pull/1758#issuecomment-589476931
 
 
   Thanks. I merged this. Trying to backport to relevant branches.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims merged pull request #1758: HDFS-15052. WebHDFS getTrashRoot leads to OOM due to FileSystem objec…

2020-02-20 Thread GitBox
iwasakims merged pull request #1758: HDFS-15052. WebHDFS getTrashRoot leads to 
OOM due to FileSystem objec…
URL: https://github.com/apache/hadoop/pull/1758
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16876) KMS delegation tokens are memory expensive

2020-02-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16876:
-
Description: 
We recently saw a number of users reporting high memory consumption in KMS.

Part of the reason being HADOOP-14445. Without that, the number of kms 
delegation tokens that zookeeper stores is proportional to the number of KMS 
servers.

There are two problems:
(1) it exceeds zookeeper jute buffer length and operations fail.
(2) KMS uses more heap memory to store KMS DTs.

But even with HADOOP-14445, KMS DTs are still expensive. Looking at a heap dump 
from KMS, the majority of the heap is occupied by znode and KMS DT objects. 
With the growing number of encrypted clusters and use cases, this is 
increasingly a problem our users encounter.

  was:
We recently saw a number of users reporting high memory consumption in KMS.

Part of the reason being HADOOP-14445. Without that, the number of kms 
delegation tokens that zookeeper stores is proportional to the number of KMS 
servers.

There are two problems:
(1) it exceeds zookeeper jute buffer length and operations fail.
(2) KMS use more heap memory to store KMS DTs.

But even with HADOOP-14445, KMS DTs are still expensive. Looking at a heap dump 
from KMS, the majority of the heap is occupied by znode and KMS DT objects. 
With the growing number of encrypted clusters and use cases, this is 
increasingly a problem our users encounter.


> KMS delegation tokens are memory expensive
> --
>
> Key: HADOOP-16876
> URL: https://issues.apache.org/jira/browse/HADOOP-16876
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Wei-Chiu Chuang
>Priority: Major
> Attachments: Screen Shot 2020-02-20 at 5.04.12 PM.png
>
>
> We recently saw a number of users reporting high memory consumption in KMS.
> Part of the reason being HADOOP-14445. Without that, the number of kms 
> delegation tokens that zookeeper stores is proportional to the number of KMS 
> servers.
> There are two problems:
> (1) it exceeds zookeeper jute buffer length and operations fail.
> (2) KMS uses more heap memory to store KMS DTs.
> But even with HADOOP-14445, KMS DTs are still expensive. Looking at a heap 
> dump from KMS, the majority of the heap is occupied by znode and KMS DT 
> objects. With the growing number of encrypted clusters and use cases, this is 
> increasingly a problem our users encounter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16876) KMS delegation tokens are memory expensive

2020-02-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16876:
-
Description: 
We recently saw a number of users reporting high memory consumption in KMS.

Part of the reason being HADOOP-14445. Without that, the number of kms 
delegation tokens that zookeeper stores is proportional to the number of KMS 
servers.

There are two problems:
(1) it exceeds zookeeper jute buffer length and operations fail.
(2) KMS use more heap memory to store KMS DTs.

But even with HADOOP-14445, KMS DTs are still expensive. Looking at a heap dump 
from KMS, the majority of the heap is occupied by znode and KMS DT objects. 
With the growing number of encrypted clusters and use cases, this is 
increasingly a problem our users encounter.

  was:
We recently saw a number of users reporting memory consumption in KMS.

Part of the reason being HADOOP-14445. Without that, the number of kms 
delegation tokens that zookeeper stores is proportional to the number of KMS 
servers.

There are two problems:
(1) it exceeds zookeeper jute buffer length and operations fail.
(2) KMS use more heap memory to store KMS DTs.

But even with HADOOP-14445, KMS DTs are still expensive. Looking at a heap dump 
from KMS, the majority of the heap is occupied by znode and KMS DT objects. 
With the growing number of encrypted clusters and use cases, this is 
increasingly a problem our users encounter.


> KMS delegation tokens are memory expensive
> --
>
> Key: HADOOP-16876
> URL: https://issues.apache.org/jira/browse/HADOOP-16876
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Wei-Chiu Chuang
>Priority: Major
> Attachments: Screen Shot 2020-02-20 at 5.04.12 PM.png
>
>
> We recently saw a number of users reporting high memory consumption in KMS.
> Part of the reason being HADOOP-14445. Without that, the number of kms 
> delegation tokens that zookeeper stores is proportional to the number of KMS 
> servers.
> There are two problems:
> (1) it exceeds zookeeper jute buffer length and operations fail.
> (2) KMS use more heap memory to store KMS DTs.
> But even with HADOOP-14445, KMS DTs are still expensive. Looking at a heap 
> dump from KMS, the majority of the heap is occupied by znode and KMS DT 
> objects. With the growing number of encrypted clusters and use cases, this is 
> increasingly a problem our users encounter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16876) KMS delegation tokens are memory expensive

2020-02-20 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HADOOP-16876:


 Summary: KMS delegation tokens are memory expensive
 Key: HADOOP-16876
 URL: https://issues.apache.org/jira/browse/HADOOP-16876
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Reporter: Wei-Chiu Chuang
 Attachments: Screen Shot 2020-02-20 at 5.04.12 PM.png

We recently saw a number of users reporting memory consumption in KMS.

Part of the reason being HADOOP-14445. Without that, the number of kms 
delegation tokens that zookeeper stores is proportional to the number of KMS 
servers.

There are two problems:
(1) it exceeds zookeeper jute buffer length and operations fail.
(2) KMS use more heap memory to store KMS DTs.

But even with HADOOP-14445, KMS DTs are still expensive. Looking at a heap dump 
from KMS, the majority of the heap is occupied by znode and KMS DT objects. 
With the growing number of encrypted clusters and use cases, this is 
increasingly a problem our users encounter.
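
A side note on problem (1): the limit being hit is ZooKeeper's jute buffer, which is
controlled by the jute.maxbuffer system property on both the server and the client
JVMs. Below is a minimal, purely illustrative Java sketch of raising it on the client
side; the 4 MB value is an arbitrary example, and this is only a mitigation, not a fix
for the memory cost itself.

```
public class JuteBufferSketch {
  public static void main(String[] args) {
    // ZooKeeper reads jute.maxbuffer from system properties (default is roughly 1 MB).
    // It must be set before the first ZooKeeper/Curator client is created, and the
    // ZooKeeper servers need the same property in their own JVM options.
    System.setProperty("jute.maxbuffer", String.valueOf(4 * 1024 * 1024));
    // ... start the ZK-backed delegation token store / client code after this point.
  }
}
```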



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16876) KMS delegation tokens are memory expensive

2020-02-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16876:
-
Attachment: Screen Shot 2020-02-20 at 5.04.12 PM.png

> KMS delegation tokens are memory expensive
> --
>
> Key: HADOOP-16876
> URL: https://issues.apache.org/jira/browse/HADOOP-16876
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Wei-Chiu Chuang
>Priority: Major
> Attachments: Screen Shot 2020-02-20 at 5.04.12 PM.png
>
>
> We recently saw a number of users reporting memory consumption in KMS.
> Part of the reason being HADOOP-14445. Without that, the number of kms 
> delegation tokens that zookeeper stores is proportional to the number of KMS 
> servers.
> There are two problems:
> (1) it exceeds zookeeper jute buffer length and operations fail.
> (2) KMS use more heap memory to store KMS DTs.
> But even with HADOOP-14445, KMS DTs are still expensive. Looking at a heap 
> dump from KMS, the majority of the heap is occupied by znode and KMS DT 
> objects. With the growing number of encrypted clusters and use cases, this is 
> increasingly a problem our users encounter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2020-02-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17041350#comment-17041350
 ] 

Wei-Chiu Chuang commented on HADOOP-16206:
--

3.3.0 code freeze is scheduled in March.

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1820: HADOOP-16830. Add public IOStatistics API + S3A implementation

2020-02-20 Thread GitBox
hadoop-yetus commented on issue #1820: HADOOP-16830. Add public IOStatistics 
API + S3A implementation
URL: https://github.com/apache/hadoop/pull/1820#issuecomment-589293736
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 58s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
13 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 30s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 36s |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 16s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   3m 43s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 29s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m  0s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  8s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 13s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 30s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m  9s |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m  9s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 59s |  root: The patch generated 47 new 
+ 309 unchanged - 19 fixed = 356 total (was 328)  |
   | +1 :green_heart: |  mvnsite  |   2m 11s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 30s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  the patch passed  |
   | -1 :x: |  findbugs  |   1m 17s |  hadoop-tools/hadoop-aws generated 13 new 
+ 0 unchanged - 0 fixed = 13 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 18s |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 33s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 144m  1s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Increment of volatile field 
org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.policySetCount
 in 
org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.inputPolicySet(int)
  At S3AInstrumentation.java:in 
org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.inputPolicySet(int)
  At S3AInstrumentation.java:[line 804] |
   |  |  Increment of volatile field 
org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readExceptions
 in 
org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readException()
  At S3AInstrumentation.java:in 
org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readException()
  At S3AInstrumentation.java:[line 741] |
   |  |  Increment of volatile field 
org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readFullyOperations
 in 
org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readFullyOperationStarted(long,
 long)  At S3AInstrumentation.java:in 
org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readFullyOperationStarted(long,
 long)  At S3AInstrumentation.java:[line 774] |
   |  |  Increment of volatile field 
org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readsIncomplete
 in 
org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperationCompleted(int,
 int)  At S3AInstrumentation.java:in 
org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperationCompleted(int,
 int)  At S3AInstrumentation.java:[line 785] |
   |  |  Increment of volatile field 
org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperations
 in 
org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperationStarted(long,
 long)  At S3AInstrumentation.java:in 
org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperationStarted(long,
 long)  At S3AInstrumentation.java:[line 763] |
   |  |  Increment of 
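
For context on the findbugs entries above: they are the VO_VOLATILE_INCREMENT pattern,
where incrementing a volatile field is a non-atomic read-modify-write, so concurrent
threads can lose updates. A minimal sketch of the flagged pattern and the usual remedy
follows (field names are illustrative; this is not the S3AInstrumentation code):

```
import java.util.concurrent.atomic.AtomicLong;

class VolatileIncrementSketch {
  private volatile long readExceptions;          // the pattern findbugs flags
  private final AtomicLong readExceptionCount = new AtomicLong();

  void onReadExceptionVolatile() {
    readExceptions++;                            // read + add + write: not atomic
  }

  void onReadExceptionAtomic() {
    readExceptionCount.incrementAndGet();        // atomic alternative
  }
}
```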

[GitHub] [hadoop] kihwal commented on issue #1758: HDFS-15052. WebHDFS getTrashRoot leads to OOM due to FileSystem objec…

2020-02-20 Thread GitBox
kihwal commented on issue #1758: HDFS-15052. WebHDFS getTrashRoot leads to OOM 
due to FileSystem objec…
URL: https://github.com/apache/hadoop/pull/1758#issuecomment-589269569
 
 
   The unit test failure is unrelated and caused by "BindException: Address 
already in use"


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1758: HDFS-15052. WebHDFS getTrashRoot leads to OOM due to FileSystem objec…

2020-02-20 Thread GitBox
hadoop-yetus commented on issue #1758: HDFS-15052. WebHDFS getTrashRoot leads 
to OOM due to FileSystem objec…
URL: https://github.com/apache/hadoop/pull/1758#issuecomment-589220825
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m  8s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 16s |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 49s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  6s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m  1s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m  2s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 20s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 56s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 35s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 35s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 56s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 41s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   5m 31s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  1s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 109m 19s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 198m 40s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.6 Server=19.03.6 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1758/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1758 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 141c525de5f0 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4af2556 |
   | Default Java | 1.8.0_242 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1758/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1758/5/testReport/ |
   | Max. process+thread count | 2827 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1758/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on issue #1853: HADOOP-16873 - Upgrade to Apache ZooKeeper 3.5.7

2020-02-20 Thread GitBox
ayushtkn commented on issue #1853: HADOOP-16873 - Upgrade to Apache ZooKeeper 
3.5.7
URL: https://github.com/apache/hadoop/pull/1853#issuecomment-589176640
 
 
   Jenkins hasn't run all the tests.
   We should verify all the tests before concluding.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16866) Upgrade spotbugs' version

2020-02-20 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17041128#comment-17041128
 ] 

Ayush Saxena commented on HADOOP-16866:
---

With the 3.3.0 release near,
IMO we should choose the safer option for now (3.1.12).
But I don't have any objection to the other one either.

> Upgrade spotbugs' version
> -
>
> Key: HADOOP-16866
> URL: https://issues.apache.org/jira/browse/HADOOP-16866
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Tsuyoshi Ozawa
>Priority: Minor
>
> [https://github.com/spotbugs/spotbugs/releases]
> spotbugs 4.0.0 is now released. 
>  
> We can upgrade spotbugs' version to:
> 1. 3.1.12  (conservative option)
> 2. 4.0.0 (which might include incompatible changes, according to the 
> migration guide: [https://spotbugs.readthedocs.io/en/stable/migration.html])
>  
> A step-by-step approach is also acceptable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16869) mvn findbugs:findbugs fails

2020-02-20 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17041120#comment-17041120
 ] 

Ayush Saxena commented on HADOOP-16869:
---

Thanx [~aajisaka] for the report.
I am able to reproduce this with mvn 3.6.1.
With your fix applied, it works.


> mvn findbugs:findbugs fails
> ---
>
> Key: HADOOP-16869
> URL: https://issues.apache.org/jira/browse/HADOOP-16869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> mvn findbugs:findbugs is failing:
> {noformat}
> [ERROR] Failed to execute goal 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs (default-cli) on 
> project hadoop-project: Unable to parse configuration of mojo 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs for parameter 
> pluginArtifacts: Cannot assign configuration entry 'pluginArtifacts' with 
> value '${plugin.artifacts}' of type 
> java.util.Collections.UnmodifiableRandomAccessList to property of type 
> java.util.ArrayList -> [Help 1]
>  {noformat}
> We have to update the version of findbugs-maven-plugin.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-20 Thread GitBox
hadoop-yetus commented on issue #1838: HADOOP-16711 Add way to skip 
verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#issuecomment-589159533
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 25s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
7 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 42s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 19s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  0s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed  |
   | -1 :x: |  javac  |   0m 26s |  hadoop-tools_hadoop-aws generated 3 new + 
15 unchanged - 0 fixed = 18 total (was 15)  |
   | -0 :warning: |  checkstyle  |   0m 18s |  hadoop-tools/hadoop-aws: The 
patch generated 2 new + 30 unchanged - 0 fixed = 32 total (was 30)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 35s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  1s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 12s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  65m 10s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.6 Server=19.03.6 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1838 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint xml |
   | uname | Linux 3f134164c323 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 181e6d0 |
   | Default Java | 1.8.0_242 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/9/artifact/out/diff-compile-javac-hadoop-tools_hadoop-aws.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/9/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/9/testReport/ |
   | Max. process+thread count | 425 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on issue #1739: HDFS-14668 Support Fuse with Users from multiple Security Realms

2020-02-20 Thread GitBox
jojochuang commented on issue #1739: HDFS-14668 Support Fuse with Users from 
multiple Security Realms
URL: https://github.com/apache/hadoop/pull/1739#issuecomment-589148560
 
 
   +1 the fix is deployed and verified to work.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims commented on a change in pull request #1758: HDFS-15052. WebHDFS getTrashRoot leads to OOM due to FileSystem objec…

2020-02-20 Thread GitBox
iwasakims commented on a change in pull request #1758: HDFS-15052. WebHDFS 
getTrashRoot leads to OOM due to FileSystem objec…
URL: https://github.com/apache/hadoop/pull/1758#discussion_r382037082
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
 ##
 @@ -1345,11 +1348,21 @@ protected Response get(
 }
   }
 
-  private static String getTrashRoot(String fullPath,
-      Configuration conf) throws IOException {
-    FileSystem fs = FileSystem.get(conf != null ? conf : new Configuration());
-    return fs.getTrashRoot(
-        new org.apache.hadoop.fs.Path(fullPath)).toUri().getPath();
+  private String getTrashRoot(String fullPath) throws IOException {
+    String user = UserGroupInformation.getCurrentUser().getShortUserName();
+    org.apache.hadoop.fs.Path path = new org.apache.hadoop.fs.Path(fullPath);
+    String parentSrc = path.isRoot() ?
+        path.toUri().getPath() : path.getParent().toUri().getPath();
+    EncryptionZone ez = getRpcClientProtocol().getEZForPath(parentSrc);
+    org.apache.hadoop.fs.Path trashRoot;
+    if (ez != null) {
+      trashRoot = new org.apache.hadoop.fs.Path(
+          new org.apache.hadoop.fs.Path(ez.getPath(), TRASH_PREFIX), user);
+    } else {
+      trashRoot = new org.apache.hadoop.fs.Path(
+          new org.apache.hadoop.fs.Path(USER_HOME_PREFIX, user), TRASH_PREFIX);
+    }
+    return trashRoot.toUri().getPath();
 
 Review comment:
   @kihwal Thanks for the comment. I updated the code based on your suggestion.
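
For readers following the diff above, a small standalone sketch of the two trash-root
shapes the new getTrashRoot produces (the user name, the example paths, and the value
of USER_HOME_PREFIX are assumptions for illustration only):

```
import org.apache.hadoop.fs.Path;

class TrashRootSketch {
  static final String TRASH_PREFIX = ".Trash";     // matches FileSystem.TRASH_PREFIX
  static final String USER_HOME_PREFIX = "/user";  // assumed value of the patch's constant

  // Mirrors the branching in the diff: inside an encryption zone the trash root is
  // <ez>/.Trash/<user>; otherwise it is the per-user home trash /user/<user>/.Trash.
  static Path trashRoot(String user, Path encryptionZoneRoot) {
    return encryptionZoneRoot != null
        ? new Path(new Path(encryptionZoneRoot, TRASH_PREFIX), user)  // e.g. /ez/.Trash/alice
        : new Path(new Path(USER_HOME_PREFIX, user), TRASH_PREFIX);   // e.g. /user/alice/.Trash
  }
}
```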


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16875) S3Guard: add support for other MetadataStores

2020-02-20 Thread Rafael Acevedo (Jira)
Rafael Acevedo created HADOOP-16875:
---

 Summary: S3Guard: add support for other MetadataStores
 Key: HADOOP-16875
 URL: https://issues.apache.org/jira/browse/HADOOP-16875
 Project: Hadoop Common
  Issue Type: Wish
Affects Versions: 3.2.1
Reporter: Rafael Acevedo


Hi all,

 

Are there any plans to add other MetadataStore implementations for S3Guard? 
DynamoDB costs are too high when the read capacity/write capacity are high.

 

Maybe a Postgres/MySQL implementation would be simple enough to implement and 
would offer strong consistency.

Another idea is to implement a Cassandra/Scylla MetadataStore (for better write 
scalability), but we should pay attention to consistency.

 

Any thoughts?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1851: HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries

2020-02-20 Thread GitBox
steveloughran commented on issue #1851: HADOOP-16858. S3Guard fsck: Add option 
to remove orphaned entries
URL: https://github.com/apache/hadoop/pull/1851#issuecomment-589059339
 
 
   looking forward to this


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1844: HADOOP-16706. ITestClientUrlScheme fails for accounts which don't support HTTP

2020-02-20 Thread GitBox
steveloughran commented on issue #1844: HADOOP-16706. ITestClientUrlScheme 
fails for accounts which don't support HTTP
URL: https://github.com/apache/hadoop/pull/1844#issuecomment-589056183
 
 
   thx, will rebase and merge ASAP


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16647) Support OpenSSL 1.1.1 LTS

2020-02-20 Thread Luca Toscano (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17040978#comment-17040978
 ] 

Luca Toscano commented on HADOOP-16647:
---

Hi [~ste...@apache.org], thanks a lot for the info. Do you have a pointer to 
the azuredatalake/abfs changes by any chance? I am working with BigTop to see 
if the above change can be added to Hadoop 2.8.5/2.10, so any more info would 
be really appreciated. They already backported the changes made for 
HADOOP-14597, but as far as I can see on Debian 9 that is not enough even for 
openssl 1.1.0 (runtime issues when hadoop tries to use the crypto libs 
provided by openssl).

> Support OpenSSL 1.1.1 LTS
> -
>
> Key: HADOOP-16647
> URL: https://issues.apache.org/jira/browse/HADOOP-16647
> Project: Hadoop Common
>  Issue Type: Task
>  Components: security
>Reporter: Wei-Chiu Chuang
>Priority: Critical
>
> See Hadoop user mailing list 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201910.mbox/%3CCADiq6%3DweDFxHTL_7eGwDNnxVCza39y2QYQTSggfLn7mXhMLOdg%40mail.gmail.com%3E
> Hadoop 2 supports OpenSSL 1.0.2.
> Hadoop 3 supports OpenSSL 1.1.0 (HADOOP-14597) and I believe 1.0.2 too.
> Per OpenSSL blog https://www.openssl.org/policies/releasestrat.html
> * 1.1.0 is EOL 2019/09/11
> * 1.0.2 EOL 2019/12/31
> * 1.1.1 is EOL 2023/09/11 (LTS)
> Many Hadoop installations rely on the OpenSSL package provided by Linux 
> distros, but it's not clear to me if Linux distros are going to support 
> 1.1.0/1.0.2 beyond those dates.
> We should make sure Hadoop works with OpenSSL 1.1.1, as well as document the 
> openssl version supported. Filing this jira to test/document/fix bugs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface

2020-02-20 Thread GitBox
steveloughran commented on a change in pull request #1842: HADOOP-16730 : ABFS: 
Add Authorizer Interface
URL: https://github.com/apache/hadoop/pull/1842#discussion_r381986574
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
 ##
 @@ -39,6 +38,7 @@
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.SASTokenProviderException;
 
 Review comment:
   next block please; imports are merge hell and we need to stay in control.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface

2020-02-20 Thread GitBox
steveloughran commented on a change in pull request #1842: HADOOP-16730 : ABFS: 
Add Authorizer Interface
URL: https://github.com/apache/hadoop/pull/1842#discussion_r381992480
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 ##
 @@ -578,35 +585,37 @@ public AccessTokenProvider getTokenProvider() throws 
TokenAccessProviderExceptio
 }
   }
 
-  public String getAbfsExternalAuthorizationClass() {
-return this.abfsExternalAuthorizationClass;
-  }
-
-  public AbfsAuthorizer getAbfsAuthorizer() throws IOException {
-String authClassName = getAbfsExternalAuthorizationClass();
-AbfsAuthorizer authorizer = null;
+  public SASTokenProvider getSASTokenProvider() throws 
AzureBlobFileSystemException {
+AuthType authType = getEnum(FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME, 
AuthType.SharedKey);
+if (authType != AuthType.SAS) {
+  throw new SASTokenProviderException(String.format(
+"Invalid auth type: %s is being used, expecting SAS", authType));
+}
 
 try {
-  if (authClassName != null && !authClassName.isEmpty()) {
-@SuppressWarnings("unchecked")
-Class authClass = (Class) 
rawConfig.getClassByName(authClassName);
-authorizer = authClass.getConstructor(new Class[] 
{Configuration.class}).newInstance(rawConfig);
-LOG.trace("Initializing {}", authClassName);
-authorizer.init();
-LOG.trace("{} init complete", authClassName);
+  String configKey = FS_AZURE_SAS_TOKEN_PROVIDER_TYPE;
+  Class<? extends SASTokenProvider> sasTokenProviderClass =
+  getClass(configKey, null, SASTokenProvider.class);
+  if (sasTokenProviderClass == null) {
 
 Review comment:
   Preconditions.checkArgument does this
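
For reference, a minimal sketch of that suggestion using Guava's Preconditions (the
helper and the message text are made up; only the checkArgument call reflects the
review comment):

```
import com.google.common.base.Preconditions;

class SasTokenProviderCheckSketch {
  static <T> Class<? extends T> requireConfigured(Class<? extends T> clazz, String configKey) {
    // Replaces an explicit "if (clazz == null) throw ..." block: checkArgument throws
    // IllegalArgumentException with the %s-formatted message when the condition is false.
    Preconditions.checkArgument(clazz != null,
        "No SASTokenProvider class configured in %s", configKey);
    return clazz;
  }
}
```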


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface

2020-02-20 Thread GitBox
steveloughran commented on a change in pull request #1842: HADOOP-16730 : ABFS: 
Add Authorizer Interface
URL: https://github.com/apache/hadoop/pull/1842#discussion_r381987083
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/SASGenerator.java
 ##
 @@ -0,0 +1,126 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.utils;
+
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+import java.io.UnsupportedEncodingException;
+import java.time.*;
 
 Review comment:
   these should all be expanded


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface

2020-02-20 Thread GitBox
steveloughran commented on a change in pull request #1842: HADOOP-16730 : ABFS: 
Add Authorizer Interface
URL: https://github.com/apache/hadoop/pull/1842#discussion_r381985040
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
 ##
 @@ -29,6 +30,8 @@
 import java.util.Locale;
 
 import com.google.common.annotations.VisibleForTesting;
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.SASTokenProviderException;
+import org.apache.hadoop.fs.azurebfs.extensions.SASTokenProvider;
 
 Review comment:
   place in next block


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface

2020-02-20 Thread GitBox
steveloughran commented on a change in pull request #1842: HADOOP-16730 : ABFS: 
Add Authorizer Interface
URL: https://github.com/apache/hadoop/pull/1842#discussion_r381987407
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java
 ##
 @@ -38,8 +38,8 @@
 import org.apache.hadoop.fs.azure.AzureNativeFileSystemStore;
 import org.apache.hadoop.fs.azure.NativeAzureFileSystem;
 import org.apache.hadoop.fs.azure.metrics.AzureFileSystemInstrumentation;
-import org.apache.hadoop.fs.azurebfs.constants.FileSystemUriSchemes;
-import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException;
+import org.apache.hadoop.fs.azurebfs.constants.*;
 
 Review comment:
   nit: revert to previous


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface

2020-02-20 Thread GitBox
steveloughran commented on a change in pull request #1842: HADOOP-16730 : ABFS: 
Add Authorizer Interface
URL: https://github.com/apache/hadoop/pull/1842#discussion_r381986150
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsUriQueryBuilder.java
 ##
 @@ -59,6 +64,20 @@ public String toString() {
 throw new IllegalArgumentException("Query string param is not 
encode-able: " + entry.getKey() + "=" + entry.getValue());
   }
 }
+// append SAS Token
+if (sasToken != null) {
+  sasToken =
+  sasToken.startsWith(AbfsHttpConstants.QUESTION_MARK) ?
 
 Review comment:
   style nit, preferred layout is
   ```
predicate
   ? res1
   : res2;
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16864) ABFS: Test code with Delegation SAS generation logic

2020-02-20 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17040967#comment-17040967
 ] 

Steve Loughran commented on HADOOP-16864:
-

Container SAS, DSAS, or directory SAS will be handled alike in the ABFS driver. 
The test for HADOOP-16730 includes a sample reference for a SASTokenProvider for 
container SAS. Resolving this as a duplicate of HADOOP-16730.

> ABFS: Test code with Delegation SAS generation logic
> 
>
> Key: HADOOP-16864
> URL: https://issues.apache.org/jira/browse/HADOOP-16864
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.3.1
>
>
> Add sample delegation SAS token generation code in test framework for 
> reference for any authorizer adopters of SAS authentication.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16864) ABFS: Test code with Delegation SAS generation logic

2020-02-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16864:

Release Note:   (was: Container SAS or DSAS or directory SAS will be 
handled alike in ABFS driver. Test for HADOOP-16730 includes sample reference 
for a SASTokenProvider for container SAS. Resolving this as duplicate of 
HADOOP-16730.)

> ABFS: Test code with Delegation SAS generation logic
> 
>
> Key: HADOOP-16864
> URL: https://issues.apache.org/jira/browse/HADOOP-16864
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.3.1
>
>
> Add sample delegation SAS token generation code in test framework for 
> reference for any authorizer adopters of SAS authentication.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16647) Support OpenSSL 1.1.1 LTS

2020-02-20 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17040966#comment-17040966
 ] 

Steve Loughran commented on HADOOP-16647:
-

Given it's a breaker for 1.1.1, and compatible with the old releases, a backport 
is fine.

If you are doing it, though, there are some changes for azuredatalake and abfs 
which need to go in too, related to moving off wildfly 1.0.4 and on to 1.0.7. 
Without those, you get to see NPEs.

> Support OpenSSL 1.1.1 LTS
> -
>
> Key: HADOOP-16647
> URL: https://issues.apache.org/jira/browse/HADOOP-16647
> Project: Hadoop Common
>  Issue Type: Task
>  Components: security
>Reporter: Wei-Chiu Chuang
>Priority: Critical
>
> See Hadoop user mailing list 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201910.mbox/%3CCADiq6%3DweDFxHTL_7eGwDNnxVCza39y2QYQTSggfLn7mXhMLOdg%40mail.gmail.com%3E
> Hadoop 2 supports OpenSSL 1.0.2.
> Hadoop 3 supports OpenSSL 1.1.0 (HADOOP-14597) and I believe 1.0.2 too.
> Per OpenSSL blog https://www.openssl.org/policies/releasestrat.html
> * 1.1.0 is EOL 2019/09/11
> * 1.0.2 EOL 2019/12/31
> * 1.1.1 is EOL 2023/09/11 (LTS)
> Many Hadoop installations rely on the OpenSSL package provided by Linux 
> distros, but it's not clear to me if Linux distros are going to support 
> 1.1.0/1.0.2 beyond those dates.
> We should make sure Hadoop works with OpenSSL 1.1.1, as well as document the 
> openssl version supported. Filing this jira to test/document/fix bugs.



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1842: HADOOP-16730 : ABFS: Add Authorizer Interface

2020-02-20 Thread GitBox
hadoop-yetus removed a comment on issue #1842: HADOOP-16730 : ABFS: Add 
Authorizer Interface
URL: https://github.com/apache/hadoop/pull/1842#issuecomment-588572835
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 19s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
6 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m  0s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 28s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 54s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 52s |  trunk passed  |
   | -0 :warning: |  patch  |   1m  9s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 16s |  hadoop-tools/hadoop-azure: The 
patch generated 36 new + 8 unchanged - 1 fixed = 44 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 22s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 53s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 18s |  hadoop-azure in the patch passed.  
|
   | -1 :x: |  asflicense  |   0m 32s |  The patch generated 1 ASF License 
warnings.  |
   |  |   |  63m 57s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1842 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 47f03bdcfcbe 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec75071 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/7/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/7/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/7/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 321 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2020-02-20 Thread Sourabh Sarvotham Parkala (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040939#comment-17040939
 ] 

Sourabh Sarvotham Parkala commented on HADOOP-16206:


[~weichiu], thank you for the reply. Could you please let me know whether the 
.patch attached to the BLI is the full fix for the Log4j migration?

Also, is there any release date planned for 3.3.0? 

Thanks

Sourabh

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.
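
Not the attached patch -- just a hypothetical way to audit where Log4j 1.x 
still enters the build before swapping the dependencies; log4j:log4j and 
org.slf4j:slf4j-log4j12 are the usual 1.x entry points:

```bash
# Modules that still pull in Log4j 1.x, directly or transitively
mvn dependency:tree -Dincludes=log4j:log4j

# The SLF4J-to-Log4j1 binding that a Log4j 2 migration would have to replace
mvn dependency:tree -Dincludes=org.slf4j:slf4j-log4j12
```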



[GitHub] [hadoop] hadoop-yetus commented on issue #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation

2020-02-20 Thread GitBox
hadoop-yetus commented on issue #1823: HADOOP-16794 S3 Encryption keys not 
propagating correctly during copy operation
URL: https://github.com/apache/hadoop/pull/1823#issuecomment-588996092
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 46s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   7m  4s |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   0m 44s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 54s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 57s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 55s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 29s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 19s |  hadoop-tools/hadoop-aws: The 
patch generated 2 new + 17 unchanged - 3 fixed = 19 total (was 20)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  14m 42s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  2s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 25s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  49m 44s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.6 Server=19.03.6 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1823 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 773c98ea0241 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec75071 |
   | Default Java | 1.8.0_242 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/7/artifact/out/branch-mvninstall-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/7/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/7/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/7/testReport/ |
   | Max. process+thread count | 455 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


[jira] [Assigned] (HADOOP-15273) distcp can't handle remote stores with different checksum algorithms

2020-02-20 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell reassigned HADOOP-15273:
--

Assignee: Stephen O'Donnell  (was: Steve Loughran)

> distcp can't handle remote stores with different checksum algorithms
> 
>
> Key: HADOOP-15273
> URL: https://issues.apache.org/jira/browse/HADOOP-15273
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Stephen O'Donnell
>Priority: Critical
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HADOOP-15273-001.patch, HADOOP-15273-002.patch, 
> HADOOP-15273-003.patch
>
>
> When using distcp without {{-skipcrccheck}}, if there's a checksum mismatch 
> between the src and dest store types (e.g. hdfs to s3), the error message 
> talks about block size, even when it's the underlying checksum algorithm 
> itself which is the cause of the failure:
> bq. Source and target differ in block-size. Use -pb to preserve block-sizes 
> during copy. Alternatively, skip checksum-checks altogether, using -skipCrc. 
> (NOTE: By skipping checksums, one runs the risk of masking data-corruption 
> during file-transfer.)
> update: the CRC check always takes place on a distcp upload before the file 
> is renamed into place, *and you can't disable it then*
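
For reference, a rough sketch of the two workarounds the error message above 
points at; paths and endpoints are placeholders, and -skipcrccheck is paired 
here with -update, which DistCp generally requires alongside skipping CRC:

```bash
# Preserve block sizes so a like-for-like checksum comparison can succeed
hadoop distcp -pb hdfs://nn1/src hdfs://nn2/dst

# Or skip the CRC comparison when the destination uses a different checksum
# algorithm (this masks corruption, as the warning above notes)
hadoop distcp -update -skipcrccheck hdfs://nn1/src s3a://bucket/dst
```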



[GitHub] [hadoop] steveloughran commented on issue #1820: HADOOP-16830. Add public IOStatistics API + S3A implementation

2020-02-20 Thread GitBox
steveloughran commented on issue #1820: HADOOP-16830. Add public IOStatistics 
API + S3A implementation
URL: https://github.com/apache/hadoop/pull/1820#issuecomment-588845853
 
 
   ```
   [WARNING] Tests run: 14, Failures: 0, Errors: 0, Skipped: 14, Time elapsed: 
17.413 s - in org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortOnS3A
   [INFO] Running 
org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextStatistics
   [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
5.958 s <<< FAILURE! - in 
org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextStatistics
   [ERROR] 
testStatistics(org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextStatistics)
  Time elapsed: 2.603 s  <<< FAILURE!
   java.lang.AssertionError: expected:<512> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextStatistics.verifyWrittenBytes(ITestS3AFileContextStatistics.java:68)
at 
org.apache.hadoop.fs.FCStatisticsBaseTest.testStatistics(FCStatisticsBaseTest.java:103)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
   
   ```


[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1820: HADOOP-16830. Add public IOStatistics API + S3A implementation

2020-02-20 Thread GitBox
hadoop-yetus removed a comment on issue #1820: HADOOP-16830. Add public 
IOStatistics API + S3A implementation
URL: https://github.com/apache/hadoop/pull/1820#issuecomment-588473900
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
12 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  18m 53s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m  3s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 41s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 35s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 12s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 13s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 36s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 19s |  hadoop-aws in the patch failed.  |
   | -1 :x: |  compile  |  15m 37s |  root in the patch failed.  |
   | -1 :x: |  javac  |  15m 37s |  root in the patch failed.  |
   | -0 :warning: |  checkstyle  |   2m 42s |  root: The patch generated 96 new 
+ 95 unchanged - 19 fixed = 191 total (was 114)  |
   | -1 :x: |  mvnsite  |   0m 40s |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 10s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   1m  4s |  hadoop-common-project_hadoop-common 
generated 87 new + 101 unchanged - 0 fixed = 188 total (was 101)  |
   | -1 :x: |  findbugs  |   2m 16s |  hadoop-common-project/hadoop-common 
generated 29 new + 0 unchanged - 0 fixed = 29 total (was 0)  |
   | -1 :x: |  findbugs  |   0m 38s |  hadoop-aws in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 27s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |   0m 39s |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 119m 33s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
42] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
45] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
48] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
51] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
54] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
57] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
60] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
63] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
66] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
69] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
72] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
75] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
81] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
78] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
84] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
87] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
90] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
93] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
96] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
99] |
   |  |  Unread public/protected field:At FilesystemStatisticNames.java:[line 
102] |
   |  |  Unread public/protected field:At