[jira] [Commented] (HADOOP-13119) Add ability to secure log servlet using proxy users

2018-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394076#comment-16394076
 ] 

Hudson commented on HADOOP-13119:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13810 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13810/])
Revert "HADOOP-13119. Add ability to secure log servlet using proxy (wangda: 
rev fa6a8b78d481d3b4d355e1bf078f30dd5e09850d)
* (delete) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationWithProxyUserFilter.java
* (delete) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServerWithSpengo.java
* (delete) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestAuthenticationWithProxyUserFilter.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationFilterInitializer.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestAuthenticationFilter.java


> Add ability to secure log servlet using proxy users
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>Priority: Major
>  Labels: security
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, 
> HADOOP-13119.005.patch, screenshot-1.png
>
>
> Using Hadoop in secure mode:
> log in as a KDC user and kinit,
> start Firefox and enable Kerberos,
> access http://localhost:50070/logs/,
> get 403 authorization errors.
> Only the hdfs user could access the logs.
> Would expect, as a user, to be able to reach the logs link of the web interface.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. either don't show the links if only the hdfs user is able to access them,
> 2. or provide a mechanism to add users to the web application realm.
> 3. Note that we pass authentication, so the issue is authorization to 
> /logs/.
> Suspect that the /logs/ path is secured in the web descriptor, so users by 
> default don't have access to secured paths.
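
For context, the access-control space here builds on Hadoop's standard proxy-user (impersonation) settings. The keys below are the real {{hadoop.proxyuser.*}} keys, but the values are examples only; this is background, not the committed fix:

{code}
import org.apache.hadoop.conf.Configuration;

// Background example: Hadoop's proxy-user (impersonation) settings, shown
// programmatically. The equivalent entries normally live in core-site.xml.
public class ProxyUserConf {
  public static Configuration forKnox() {
    Configuration conf = new Configuration();
    // Hosts from which "knox" may impersonate other users.
    conf.set("hadoop.proxyuser.knox.hosts", "gateway.example.com");
    // Groups whose members "knox" may impersonate.
    conf.set("hadoop.proxyuser.knox.groups", "users");
    return conf;
  }
}
{code}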



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14077) Improve the patch of HADOOP-13119

2018-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394075#comment-16394075
 ] 

Hudson commented on HADOOP-14077:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13810 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13810/])
Revert "HADOOP-14077. Add ability to access jmx via proxy.  Contributed 
(wangda: rev 3a8dade9b1bf01cf75fc68cecb351c23302cdee5)
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AppController.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServerWithSpengo.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationWithProxyUserFilter.java


> Improve the patch of HADOOP-13119
> -
>
> Key: HADOOP-14077
> URL: https://issues.apache.org/jira/browse/HADOOP-14077
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>Priority: Major
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14077.001.patch, HADOOP-14077.002.patch, 
> HADOOP-14077.003.patch
>
>
> For some links (such as "/jmx" and "/stack"), blocking the request in the 
> filter chain due to an impersonation issue is not friendly for users. For 
> example, user "sam" is not allowed to be impersonated by user "knox", and the 
> link "/jmx" doesn't require any user authorization by default. It only needs 
> user "knox" to authenticate; in this case, it's not right to block the access 
> in the SPNEGO filter. We intend to check the impersonation permission only when 
> the request's "getRemoteUser" method is used, so that such links ("/jmx", 
> "/stack") are not blocked by mistake.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15234) NPE when initializing KMSWebApp

2018-03-09 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394027#comment-16394027
 ] 

Xiao Chen commented on HADOOP-15234:


Thanks for revving [~zhenyi].
Patch 4 LGTM, pending one nit: {{initialized,please}} is missing a space after 
the comma.

Will wait for a couple days to see if Rushabh and Xiaoyu have any comments.

> NPE when initializing KMSWebApp
> ---
>
> Key: HADOOP-15234
> URL: https://issues.apache.org/jira/browse/HADOOP-15234
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: fang zhenyi
>Priority: Major
> Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch, 
> HADOOP-15234.003.patch
>
>
> During KMS startup, if the {{keyProvider}} is null, it will NPE inside 
> KeyProviderExtension.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.KeyProviderExtension.<init>(KeyProviderExtension.java:43)
>   at 
> org.apache.hadoop.crypto.key.CachingKeyProvider.<init>(CachingKeyProvider.java:93)
>   at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170)
> {noformat}
> We're investigating the exact scenario that could lead to this, but the NPE 
> and the logging around it can be improved.
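
A minimal sketch of the kind of guard this asks for, assuming the provider lookup can legitimately return null; names are illustrative, not the committed patch:

{code}
import org.apache.hadoop.crypto.key.KeyProvider;

// Sketch: fail fast with a descriptive message instead of letting
// CachingKeyProvider's constructor NPE later in contextInitialized().
public class KmsStartupCheck {
  static KeyProvider requireProvider(KeyProvider keyProvider) {
    if (keyProvider == null) {
      throw new IllegalStateException(
          "No KeyProvider has been initialized, please check the KMS key "
          + "provider configuration");
    }
    return keyProvider;
  }
}
{code}

(The message above also shows the space after the comma that the nit asks for.)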



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15305) Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings

2018-03-09 Thread fang zhenyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393981#comment-16393981
 ] 

fang zhenyi commented on HADOOP-15305:
--

Thanks [~ajisakaa] for reporting this. I will attach a patch later.

> Replace FileUtils.writeStringToFile(File, String) with (File, String, 
> Charset) to fix deprecation warnings
> --
>
> Key: HADOOP-15305
> URL: https://issues.apache.org/jira/browse/HADOOP-15305
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: fang zhenyi
>Priority: Minor
>  Labels: newbie
>
> FileUtils.writeStringToFile(File, String) relies on default charset and 
> should be replaced with FileUtils.writeStringToFile(File, String, Charset).
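
For illustration, the mechanical change is (commons-io {{FileUtils}}; a generic sketch, not a specific patch hunk):

{code}
import java.io.File;
import java.nio.charset.StandardCharsets;
import org.apache.commons.io.FileUtils;

public class WriteStringExample {
  public static void main(String[] args) throws Exception {
    File f = new File("example.txt");
    // Deprecated two-argument form: relies on the JVM default charset.
    // FileUtils.writeStringToFile(f, "hello");
    // Replacement: make the charset explicit.
    FileUtils.writeStringToFile(f, "hello", StandardCharsets.UTF_8);
  }
}
{code}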



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15305) Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings

2018-03-09 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi reassigned HADOOP-15305:


Assignee: fang zhenyi

> Replace FileUtils.writeStringToFile(File, String) with (File, String, 
> Charset) to fix deprecation warnings
> --
>
> Key: HADOOP-15305
> URL: https://issues.apache.org/jira/browse/HADOOP-15305
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: fang zhenyi
>Priority: Minor
>  Labels: newbie
>
> FileUtils.writeStringToFile(File, String) relies on default charset and 
> should be replaced with FileUtils.writeStringToFile(File, String, Charset).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15305) Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings

2018-03-09 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15305:
---
Target Version/s: 3.2.0

> Replace FileUtils.writeStringToFile(File, String) with (File, String, 
> Charset) to fix deprecation warnings
> --
>
> Key: HADOOP-15305
> URL: https://issues.apache.org/jira/browse/HADOOP-15305
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> FileUtils.writeStringToFile(File, String) relies on default charset and 
> should be replaced with FileUtils.writeStringToFile(File, String, Charset).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15305) Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings

2018-03-09 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393920#comment-16393920
 ] 

Akira Ajisaka edited comment on HADOOP-15305 at 3/10/18 1:13 AM:
-

FYI: For Charsets, {{StandardCharsets.UTF_8}} is better than 
{{Charset.forName("UTF-8")}} because we don't need to write code to catch and 
ignore UnsupportedCharsetException.


was (Author: ajisakaa):
FYI: For Charsets, {{StandardCharsets.UTF_8}} is better than 
{{Charset.forName("UTF-8")}} because we don't need to ignore 
UnsupportedCharsetException.

> Replace FileUtils.writeStringToFile(File, String) with (File, String, 
> Charset) to fix deprecation warnings
> --
>
> Key: HADOOP-15305
> URL: https://issues.apache.org/jira/browse/HADOOP-15305
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> FileUtils.writeStringToFile(File, String) relies on default charset and 
> should be replaced with FileUtils.writeStringToFile(File, String, Charset).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15305) Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings

2018-03-09 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393920#comment-16393920
 ] 

Akira Ajisaka commented on HADOOP-15305:


FYI: For Charsets, {{StandardCharsets.UTF_8}} is better than 
{{Charset.forName("UTF-8")}} because we don't need to ignore 
UnsupportedCharsetException.
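
A small illustration of the point (a sketch; {{UnsupportedCharsetException}} is unchecked, but defensive code often catches it anyway):

{code}
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.charset.UnsupportedCharsetException;

public class CharsetLookup {
  public static void main(String[] args) {
    // Lookup by name: the name is only checked at runtime.
    try {
      System.out.println(Charset.forName("UTF-8"));
    } catch (UnsupportedCharsetException e) {
      // boilerplate that StandardCharsets makes unnecessary
    }
    // Compile-time constant: present on every JVM, nothing to catch.
    System.out.println(StandardCharsets.UTF_8);
  }
}
{code}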

> Replace FileUtils.writeStringToFile(File, String) with (File, String, 
> Charset) to fix deprecation warnings
> --
>
> Key: HADOOP-15305
> URL: https://issues.apache.org/jira/browse/HADOOP-15305
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> FileUtils.writeStringToFile(File, String) relies on default charset and 
> should be replaced with FileUtils.writeStringToFile(File, String, Charset).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15305) Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings

2018-03-09 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15305:
---
Priority: Minor  (was: Major)

> Replace FileUtils.writeStringToFile(File, String) with (File, String, 
> Charset) to fix deprecation warnings
> --
>
> Key: HADOOP-15305
> URL: https://issues.apache.org/jira/browse/HADOOP-15305
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> FileUtils.writeStringToFile(File, String) relies on default charset and 
> should be replaced with FileUtils.writeStringToFile(File, String, Charset).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15305) Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings

2018-03-09 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15305:
---
Labels: newbie  (was: )

> Replace FileUtils.writeStringToFile(File, String) with (File, String, 
> Charset) to fix deprecation warnings
> --
>
> Key: HADOOP-15305
> URL: https://issues.apache.org/jira/browse/HADOOP-15305
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> FileUtils.writeStringToFile(File, String) relies on default charset and 
> should be replaced with FileUtils.writeStringToFile(File, String, Charset).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15305) Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings

2018-03-09 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15305:
---
Description: FileUtils.writeStringToFile(File, String) relies on default 
charset and should be replaced with FileUtils.writeStringToFile(File, String, 
Charset).

> Replace FileUtils.writeStringToFile(File, String) with (File, String, 
> Charset) to fix deprecation warnings
> --
>
> Key: HADOOP-15305
> URL: https://issues.apache.org/jira/browse/HADOOP-15305
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Priority: Major
>
> FileUtils.writeStringToFile(File, String) relies on default charset and 
> should be replaced with FileUtils.writeStringToFile(File, String, Charset).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15305) Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings

2018-03-09 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-15305:
--

 Summary: Replace FileUtils.writeStringToFile(File, String) with 
(File, String, Charset) to fix deprecation warnings
 Key: HADOOP-15305
 URL: https://issues.apache.org/jira/browse/HADOOP-15305
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira Ajisaka






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-11423) [Umbrella] Support Java 10 in Hadoop

2018-03-09 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-11423:


Reopening this since JDK 10 will be available soon.

> [Umbrella] Support Java 10 in Hadoop
> 
>
> Key: HADOOP-11423
> URL: https://issues.apache.org/jira/browse/HADOOP-11423
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: sneaky
>Priority: Minor
>
> Java 10 is coming quickly to various clusters. Making sure Hadoop seamlessly 
> works with Java 10 is important for the Apache community.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15304) [JDK10] Migrate from com.sun.tools.doclets to the replacement

2018-03-09 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15304:
---
Summary: [JDK10] Migrate from com.sun.tools.doclets to the replacement  
(was: Migrate from com.sun.tools.doclets to the replacement)

> [JDK10] Migrate from com.sun.tools.doclets to the replacement
> -
>
> Key: HADOOP-15304
> URL: https://issues.apache.org/jira/browse/HADOOP-15304
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> com.sun.tools.doclets.* packages were removed in Java 10. 
> [https://bugs.openjdk.java.net/browse/JDK-8177511]
> This causes the hadoop-annotations module to fail to compile.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Compilation failure: Compilation failure:
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/IncludePublicAnnotationsStandardDoclet.java:[61,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsStandardDoclet.java:[56,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15304) Migrate from com.sun.tools.doclets to the replacement

2018-03-09 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-15304:
--

 Summary: Migrate from com.sun.tools.doclets to the replacement
 Key: HADOOP-15304
 URL: https://issues.apache.org/jira/browse/HADOOP-15304
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Akira Ajisaka


com.sun.tools.doclets.* packages were removed in Java 10. 
[https://bugs.openjdk.java.net/browse/JDK-8177511]

This causes the hadoop-annotations module to fail to compile.
{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-annotations: Compilation failure: Compilation failure:
[ERROR] 
/Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/IncludePublicAnnotationsStandardDoclet.java:[61,20]
 cannot find symbol
[ERROR] symbol:   method 
validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
[ERROR] location: class com.sun.tools.doclets.standard.Standard
[ERROR] 
/Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsStandardDoclet.java:[56,20]
 cannot find symbol
[ERROR] symbol:   method 
validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
[ERROR] location: class com.sun.tools.doclets.standard.Standard
{noformat}
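
The replacement API lives in {{jdk.javadoc.doclet}} (JDK 9+). A minimal doclet against the new interface, for orientation only; this is not Hadoop's actual migration:

{code}
import java.util.Locale;
import java.util.Set;
import javax.lang.model.SourceVersion;
import jdk.javadoc.doclet.Doclet;
import jdk.javadoc.doclet.DocletEnvironment;
import jdk.javadoc.doclet.Reporter;

public class MinimalDoclet implements Doclet {
  @Override public void init(Locale locale, Reporter reporter) { }
  @Override public String getName() { return "MinimalDoclet"; }
  @Override public Set<? extends Option> getSupportedOptions() {
    // Option validation moves from the old static
    // validOptions(String[][], DocErrorReporter) into these Option objects.
    return Set.of();
  }
  @Override public SourceVersion getSupportedSourceVersion() {
    return SourceVersion.latest();
  }
  @Override public boolean run(DocletEnvironment env) {
    return true;
  }
}
{code}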



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15293) TestLogLevel fails on Java 9

2018-03-09 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393876#comment-16393876
 ] 

Takanobu Asanuma commented on HADOOP-15293:
---

Thanks for the review and the commit, [~ajisakaa] and [~ste...@apache.org]!

> TestLogLevel fails on Java 9
> 
>
> Key: HADOOP-15293
> URL: https://issues.apache.org/jira/browse/HADOOP-15293
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
> Environment: Applied HADOOP-12760 and HDFS-11610
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15293.1.patch, HADOOP-15293.2.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.log.TestLogLevel
> [ERROR] Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 9.805 
> s <<< FAILURE! - in org.apache.hadoop.log.TestLogLevel
> [ERROR] testLogLevelByHttpWithSpnego(org.apache.hadoop.log.TestLogLevel)  
> Time elapsed: 1.179 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'Unrecognized SSL message' but got unexpected exception: 
> javax.net.ssl.SSLException: Unsupported or unrecognized SSL message
>   at 
> java.base/sun.security.ssl.SSLSocketInputRecord.handleUnknownRecord(SSLSocketInputRecord.java:416)
> {noformat}
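
The failure is only a changed exception message in Java 9's TLS stack ("Unsupported or unrecognized SSL message" vs the old "Unrecognized SSL message"). A hedged sketch of a version-tolerant assertion; {{sendPlainHttpToHttpsPort()}} is a hypothetical stand-in and this is not necessarily the committed fix:

{code}
import javax.net.ssl.SSLException;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.Assert;
import org.junit.Test;

public class TestSslMessageSketch {
  @Test
  public void testHttpAgainstHttpsPort() throws Exception {
    try {
      sendPlainHttpToHttpsPort();  // hypothetical helper triggering the error
      Assert.fail("expected an SSLException");
    } catch (SSLException e) {
      // Match only the fragment common to the Java 8 and Java 9+ messages.
      GenericTestUtils.assertExceptionContains("SSL message", e);
    }
  }

  private void sendPlainHttpToHttpsPort() throws SSLException {
    throw new SSLException("Unsupported or unrecognized SSL message");
  }
}
{code}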



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393591#comment-16393591
 ] 

genericqa commented on HADOOP-15209:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 36s{color} | {color:orange} root: The patch generated 3 new + 286 unchanged 
- 37 fixed = 289 total (was 323) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
52s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
51s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
30s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15209 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913805/HADOOP-15209-007.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |

[jira] [Commented] (HADOOP-15297) Make S3A etag => checksum feature optional

2018-03-09 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393534#comment-16393534
 ] 

Devaraj Das commented on HADOOP-15297:
--

+1 (but pls fix the genericqa reported warnings)

> Make S3A etag => checksum feature optional
> --
>
> Key: HADOOP-15297
> URL: https://issues.apache.org/jira/browse/HADOOP-15297
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15297-001.patchh, HADOOP-15297-002.patch, 
> HADOOP-15297-002.patch
>
>
> HADOOP-15273 shows how distcp doesn't handle non-HDFS filesystems with 
> checksums.
> Exposing Etags as checksums, HADOOP-13282, breaks workflows which back up to 
> s3a.
> Rather than revert, I want to make it an option, off by default. Once we are 
> happy with distcp in future, we can turn it on.
> Why an option? Because it lines up with a successor to distcp which saves src 
> and dest checksums to a file and can then verify whether or not files have 
> really changed. Currently distcp relies on the dest checksum algorithm being 
> the same as the src's for incremental updates, but if either of the stores 
> doesn't serve checksums, it silently downgrades to not checking. 
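
A hedged sketch of what "an option, off by default" looks like from client code; the property name here is an assumption for illustration, not confirmed in this thread:

{code}
import org.apache.hadoop.conf.Configuration;

public class EtagChecksumOption {
  // Assumed key name, for illustration only.
  static final String ETAG_CHECKSUM_KEY = "fs.s3a.etag.checksum.enabled";

  public static Configuration withEtagChecksums() {
    Configuration conf = new Configuration();
    // Off by default; opt in so distcp -update can compare checksums.
    conf.setBoolean(ETAG_CHECKSUM_KEY, true);
    return conf;
  }
}
{code}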



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15302) Enable DataNode/NameNode service plugins with Service Provider interface

2018-03-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393507#comment-16393507
 ] 

Steve Loughran commented on HADOOP-15302:
-

Based on similar self-loading stuff, what happens when there's a failure to 
load? What diagnostics get printed, and can they be tied back to a specific 
service config? 

> Enable DataNode/NameNode service plugins with Service Provider interface
> 
>
> Key: HADOOP-15302
> URL: https://issues.apache.org/jira/browse/HADOOP-15302
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HADOOP-15302.001.patch
>
>
> HADOOP-5257 introduced ServicePlugin capabilities for the NameNode/DataNode. 
> As of now they can only be activated through configuration values. 
> I propose to activate plugins with the Service Provider Interface: if a 
> service provider configuration file is included in a jar, adding that jar to 
> the classpath would be enough to activate the plugin. This would make it easy 
> to add optional components to the NameNode/DataNode just by setting the 
> classpath.
> This is the same API that can be used in Java 9 to consume defined 
> services.
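
For reference, this is the standard {{java.util.ServiceLoader}} pattern the proposal refers to, sketched against Hadoop's existing {{ServicePlugin}} interface (illustrative, not the patch):

{code}
import java.util.ServiceLoader;
import org.apache.hadoop.util.ServicePlugin;

public class PluginLoader {
  public static void loadAndStart(Object service) {
    // Discovers every implementation listed in a jar's
    // META-INF/services/org.apache.hadoop.util.ServicePlugin file on the
    // classpath -- no configuration value needed.
    for (ServicePlugin plugin : ServiceLoader.load(ServicePlugin.class)) {
      plugin.start(service);
    }
  }
}
{code}

Note that {{ServiceLoader}} reports a broken provider entry by throwing {{ServiceConfigurationError}}, which bears on the diagnostics question raised in the comment above.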



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14707) AbstractContractDistCpTest to test attr preservation with -p, verify blobstores downgrade

2018-03-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393504#comment-16393504
 ] 

Steve Loughran commented on HADOOP-14707:
-

No, it's for flexibility of capabilities across versions, just as we do for 
StreamCapabilities. 

> AbstractContractDistCpTest to test attr preservation with -p, verify 
> blobstores downgrade
> -
>
> Key: HADOOP-14707
> URL: https://issues.apache.org/jira/browse/HADOOP-14707
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, fs/s3, test, tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14707-001.patch, HADOOP-14707-002.patch, 
> HADOOP-14707-003.patch
>
>
> It *may* be that trying to use {{distcp -p}} with S3A triggers a stack trace:
> {code}
> java.lang.UnsupportedOperationException: S3AFileSystem doesn't support 
> getXAttrs 
> at org.apache.hadoop.fs.FileSystem.getXAttrs(FileSystem.java:2559) 
> at 
> org.apache.hadoop.tools.util.DistCpUtils.toCopyListingFileStatus(DistCpUtils.java:322)
>  
> {code}
> Add a test to {{AbstractContractDistCpTest}} to verify that this is handled 
> better. What does "handled better" mean here? Either ignore the option or 
> fail with "don't do that" text



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15282) HADOOP-15235 broke TestHttpFSServerWebServer

2018-03-09 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393470#comment-16393470
 ] 

Xiao Chen commented on HADOOP-15282:


Thanks Robert for fixing it and Akira for beating me on review! :)

> HADOOP-15235 broke TestHttpFSServerWebServer
> 
>
> Key: HADOOP-15282
> URL: https://issues.apache.org/jira/browse/HADOOP-15282
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.2.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15282.001.patch
>
>
> As [~xiaochen] pointed out in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-15235?focusedCommentId=16375379&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16375379]
>  on HADOOP-15235, it broke {{TestHttpFSServerWebServer}}:
> {noformat}
> 2018-02-23 23:13:29,791 WARN  ServletHandler - /webhdfs/v1/
> java.lang.IllegalArgumentException: Empty key
>   at javax.crypto.spec.SecretKeySpec.<init>(SecretKeySpec.java:96)
>   at 
> org.apache.hadoop.security.authentication.util.Signer.computeSignature(Signer.java:93)
>   at 
> org.apache.hadoop.security.authentication.util.Signer.sign(Signer.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:587)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1617)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:745)
> java.lang.AssertionError: 
> Expected :500
> Actual   :200
>  
> {noformat}
> This only affects trunk because {{TestHttpFSServerWebServer}} doesn't exist 
> in branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15302) Enable DataNode/NameNode service plugins with Service Provider interface

2018-03-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393386#comment-16393386
 ] 

genericqa commented on HADOOP-15302:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 244 unchanged - 0 fixed = 247 total (was 244) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}125m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  2m  
8s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}196m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15302 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913791/HADOOP-15302.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a3993a870689 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3f7bd46 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14291/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Comment Edited] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393356#comment-16393356
 ] 

Steve Loughran edited comment on HADOOP-15209 at 3/9/18 6:43 PM:
-

Testing: local, s3a, adl, wasb IT tests, command line updates including to an 
s3a endpoint running inconsistently. 

oh, and I tried FTP. Doesn't work as a dest; never has.


was (Author: ste...@apache.org):
Testing: local, s3a, adl, wasb IT tests, command line updates including to an 
s3a endpoint running inconsistently. 

> DistCp to eliminate needless deletion of files under already-deleted 
> directories
> 
>
> Key: HADOOP-15209
> URL: https://issues.apache.org/jira/browse/HADOOP-15209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, 
> HADOOP-15209-003.patch, HADOOP-15209-004.patch, HADOOP-15209-005.patch, 
> HADOOP-15209-006.patch, HADOOP-15209-007.patch
>
>
> DistCp issues a delete(file) request even if it is underneath an already deleted 
> directory. This generates needless load on filesystems/object stores, and, if 
> the store throttles delete, can dramatically slow down the delete operation.
> If the distcp delete operation can build a history of deleted directories, 
> then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not 
> overload the heap of the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393356#comment-16393356
 ] 

Steve Loughran commented on HADOOP-15209:
-

Testing: local, s3a, adl, wasb IT tests, command line updates including to an 
s3a endpoint running inconsistently. 

> DistCp to eliminate needless deletion of files under already-deleted 
> directories
> 
>
> Key: HADOOP-15209
> URL: https://issues.apache.org/jira/browse/HADOOP-15209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, 
> HADOOP-15209-003.patch, HADOOP-15209-004.patch, HADOOP-15209-005.patch, 
> HADOOP-15209-006.patch, HADOOP-15209-007.patch
>
>
> DistCp issues a delete(file) request even if it is underneath an already deleted 
> directory. This generates needless load on filesystems/object stores, and, if 
> the store throttles delete, can dramatically slow down the delete operation.
> If the distcp delete operation can build a history of deleted directories, 
> then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not 
> overload the heap of the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15293) TestLogLevel fails on Java 9

2018-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393354#comment-16393354
 ] 

Hudson commented on HADOOP-15293:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13803 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13803/])
HADOOP-15293. TestLogLevel fails on Java 9 (aajisaka: rev 
99ab511cbac570bea9d31a55898b95590a8e3159)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogLevel.java


> TestLogLevel fails on Java 9
> 
>
> Key: HADOOP-15293
> URL: https://issues.apache.org/jira/browse/HADOOP-15293
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
> Environment: Applied HADOOP-12760 and HDFS-11610
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15293.1.patch, HADOOP-15293.2.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.log.TestLogLevel
> [ERROR] Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 9.805 
> s <<< FAILURE! - in org.apache.hadoop.log.TestLogLevel
> [ERROR] testLogLevelByHttpWithSpnego(org.apache.hadoop.log.TestLogLevel)  
> Time elapsed: 1.179 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'Unrecognized SSL message' but got unexpected exception: 
> javax.net.ssl.SSLException: Unsupported or unrecognized SSL message
>   at 
> java.base/sun.security.ssl.SSLSocketInputRecord.handleUnknownRecord(SSLSocketInputRecord.java:416)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15209:

Attachment: HADOOP-15209-007.patch

> DistCp to eliminate needless deletion of files under already-deleted 
> directories
> 
>
> Key: HADOOP-15209
> URL: https://issues.apache.org/jira/browse/HADOOP-15209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, 
> HADOOP-15209-003.patch, HADOOP-15209-004.patch, HADOOP-15209-005.patch, 
> HADOOP-15209-006.patch, HADOOP-15209-007.patch
>
>
> DistCp issues a delete(file) request even if it is underneath an already deleted 
> directory. This generates needless load on filesystems/object stores, and, if 
> the store throttles delete, can dramatically slow down the delete operation.
> If the distcp delete operation can build a history of deleted directories, 
> then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not 
> overload the heap of the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15209:

Status: Patch Available  (was: Open)

HADOOP-15209 patch 007

* don't worry about a false return value from deletes. The only FS which *may* 
return it for bigger problems is ftp, and ftp doesn't work as a dest for distcp.
* just add it to a counter & log. All it means is that a delete of a parent dir 
already covered it, but that dir is no longer in the cache (see the sketch 
below).
* print the cache size at the end of the run.
* tests to verify, using counters, that there are no files copied over to the 
remote store. Tested against: s3a, wasb, adl.
* tried to add counters to the delete operations, but they don't get picked up 
in the job results... obviously the committer is special that way.
* and the -i ignore flag also ignores delete() failure, which makes it 
consistent.
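
A minimal sketch of the directory cache, assuming deletes arrive in sorted order so a parent's delete precedes its children's; names and the eviction policy are illustrative, not the patch:

{code}
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: skip delete(child) when an ancestor directory was already
// deleted. A bounded deque of recent deleted dirs keeps heap use flat.
public class DeletedDirTracker {
  private final Deque<String> deletedDirs = new ArrayDeque<>();
  private final int limit;

  public DeletedDirTracker(int limit) {
    this.limit = limit;
  }

  /** @return true if the path still needs an explicit delete() call. */
  public boolean shouldDelete(String path, boolean isDirectory) {
    for (String dir : deletedDirs) {
      if (path.startsWith(dir)) {
        return false;              // covered by an earlier directory delete
      }
    }
    if (isDirectory) {
      if (deletedDirs.size() >= limit) {
        deletedDirs.removeLast();  // evict; worst case is a redundant delete
      }
      deletedDirs.addFirst(path + "/");
    }
    return true;
  }
}
{code}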


> DistCp to eliminate needless deletion of files under already-deleted 
> directories
> 
>
> Key: HADOOP-15209
> URL: https://issues.apache.org/jira/browse/HADOOP-15209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, 
> HADOOP-15209-003.patch, HADOOP-15209-004.patch, HADOOP-15209-005.patch, 
> HADOOP-15209-006.patch, HADOOP-15209-007.patch
>
>
> DistCp issues a delete(file) request even if it is underneath an already deleted 
> directory. This generates needless load on filesystems/object stores, and, if 
> the store throttles delete, can dramatically slow down the delete operation.
> If the distcp delete operation can build a history of deleted directories, 
> then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not 
> overload the heap of the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15293) TestLogLevel fails on Java 9

2018-03-09 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15293:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks [~tasanuma0829] and [~ste...@apache.org]!

> TestLogLevel fails on Java 9
> 
>
> Key: HADOOP-15293
> URL: https://issues.apache.org/jira/browse/HADOOP-15293
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
> Environment: Applied HADOOP-12760 and HDFS-11610
>Reporter: Akira Ajisaka
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15293.1.patch, HADOOP-15293.2.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.log.TestLogLevel
> [ERROR] Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 9.805 
> s <<< FAILURE! - in org.apache.hadoop.log.TestLogLevel
> [ERROR] testLogLevelByHttpWithSpnego(org.apache.hadoop.log.TestLogLevel)  
> Time elapsed: 1.179 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'Unrecognized SSL message' but got unexpected exception: 
> javax.net.ssl.SSLException: Unsupported or unrecognized SSL message
>   at 
> java.base/sun.security.ssl.SSLSocketInputRecord.handleUnknownRecord(SSLSocketInputRecord.java:416)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10007) distcp / mv is not working on ftp

2018-03-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393303#comment-16393303
 ] 

Steve Loughran commented on HADOOP-10007:
-

Latest stack trace:
{code}2018-03-09 17:59:08,752 [LocalJobRunner Map Task Executor #0] ERROR 
util.RetriableCommand (RetriableCommand.java:execute(89)) - Failure in 
Retriable command: Copying 
file:/Users/stevel/hadoop-trunk/hadoop-common-project/hadoop-auth/target/classes3/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider$1.class
 to 
ftp://ftpserver/home/scratch/auth/target/classes3/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider$1.class
java.io.IOException: Cannot rename source: 
ftp://ftpserver/home/scratch/auth/.distcp.tmp.attempt_local1253814949_0001_m_00_0
 to 
ftp://ftpserver/home/scratch/auth/target/classes3/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider$1.class
 -only same directory renames are supported
at org.apache.hadoop.fs.ftp.FTPFileSystem.rename(FTPFileSystem.java:674)
at org.apache.hadoop.fs.ftp.FTPFileSystem.rename(FTPFileSystem.java:613)
at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.promoteTmpToTarget(RetriableFileCopyCommand.java:249)
at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:140)
at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
at 
org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:256)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:217)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2018-03-09 17:59:12,700 [LocalJobRunner Map Task Exec
{code}

> distcp / mv is not working on ftp
> -
>
> Key: HADOOP-10007
> URL: https://issues.apache.org/jira/browse/HADOOP-10007
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
> Environment: Ubuntu 12.04.2 LTS
> Hadoop 2.0.0-cdh4.2.1
> Subversion 
> file:///var/lib/jenkins/workspace/generic-package-ubuntu64-12-04/CDH4.2.1-Packaging-Hadoop-2013-04-22_09-50-19/hadoop-2.0.0+960-1.cdh4.2.1.p0.9~precise/src/hadoop-common-project/hadoop-common
>  -r 144bd548d481c2774fab2bec2ac2645d190f705b
> Compiled by jenkins on Mon Apr 22 10:26:30 PDT 2013
> From source with checksum aef88defdddfb22327a107fbd7063395
>Reporter: Fabian Zimmermann
>Priority: Major
>
> I'm just trying to back up some files to our FTP server.
> hadoop distcp hdfs:///data/ ftp://user:pass@server/data/
> returns after some minutes with:
> Task TASKID="task_201308231529_97700_m_02" TASK_TYPE="MAP" 
> TASK_STATUS="FAILED" FINISH_TIME="1380217916479" 
> ERROR="java\.io\.IOException: Cannot rename parent(source): 
> ftp://x:x@backup2/data/, parent(destination):  ftp://x:x@backup2/data/
>   at 
> org\.apache\.hadoop\.fs\.ftp\.FTPFileSystem\.rename(FTPFileSystem\.java:557)
>   at 
> org\.apache\.hadoop\.fs\.ftp\.FTPFileSystem\.rename(FTPFileSystem\.java:522)
>   at 
> org\.apache\.hadoop\.mapred\.FileOutputCommitter\.moveTaskOutputs(FileOutputCommitter\.java:154)
>   at 
> org\.apache\.hadoop\.mapred\.FileOutputCommitter\.moveTaskOutputs(FileOutputCommitter\.java:172)
>   at 
> org\.apache\.hadoop\.mapred\.FileOutputCommitter\.commitTask(FileOutputCommitter\.java:132)
>   at 
> org\.apache\.hadoop\.mapred\.OutputCommitter\.commitTask(OutputCommitter\.java:221)
>   at org\.apache\.hadoop\.mapred\.Task\.commit(Task\.java:1000)
>   at org\.apache\.hadoop\.mapred\.Task\.done(Task\.java:870)
>   at org\.apache\.hadoop\.mapred\.MapTask\.run(MapTask\.java:329)
>   at org\.apache\.hadoop\.mapred\.Child$4\.run" TASK_ATTEMPT_ID="" .
> I googled a bit and added
> fs.ftp.host = backup2
> fs.ftp.user.backup2 = user
> fs.ftp.password.backup2 = password
> to core-site.xml, then I was able to execute:
> hadoop fs -ls ftp:///data/
> hadoop fs -rm ftp:///data/test.file
> but as soon as I try
> hadoop 

[jira] [Commented] (HADOOP-10007) distcp / mv is not working on ftp

2018-03-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393299#comment-16393299
 ] 

Steve Loughran commented on HADOOP-10007:
-

Reopening as I can reproduce this locally. The problem is that you can't rename 
from a temp dir to the final destination; distcp, of course, copies to a temp 
dir and then renames into place. 

We'll need HADOOP-15281 to fix this, which is also needed for performance on S3 
and other expensive-to-rename stores.
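
For reference, a minimal sketch of the restriction distcp trips over - an 
illustration based on the "only same directory renames are supported" error 
quoted earlier, not the actual FTPFileSystem source:

{code}
import java.io.IOException;
import org.apache.hadoop.fs.Path;

public class SameDirRenameCheck {
  // Sketch of the check that makes promoteTmpToTarget() fail: the FTP
  // connector only supports renames within a single directory, while distcp
  // writes to .distcp.tmp.* in one directory and renames across directories.
  static void checkRename(Path src, Path dst) throws IOException {
    Path srcParent = src.getParent();
    if (srcParent == null || !srcParent.equals(dst.getParent())) {
      throw new IOException("Cannot rename source: " + src + " to " + dst
          + " -only same directory renames are supported");
    }
  }
}
{code}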

> distcp / mv is not working on ftp
> -
>
> Key: HADOOP-10007
> URL: https://issues.apache.org/jira/browse/HADOOP-10007
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
> Environment: Ubuntu 12.04.2 LTS
> Hadoop 2.0.0-cdh4.2.1
> Subversion 
> file:///var/lib/jenkins/workspace/generic-package-ubuntu64-12-04/CDH4.2.1-Packaging-Hadoop-2013-04-22_09-50-19/hadoop-2.0.0+960-1.cdh4.2.1.p0.9~precise/src/hadoop-common-project/hadoop-common
>  -r 144bd548d481c2774fab2bec2ac2645d190f705b
> Compiled by jenkins on Mon Apr 22 10:26:30 PDT 2013
> From source with checksum aef88defdddfb22327a107fbd7063395
>Reporter: Fabian Zimmermann
>Priority: Major
>
> I'm just trying to back up some files to our FTP server.
> hadoop distcp hdfs:///data/ ftp://user:pass@server/data/
> returns after some minutes with:
> Task TASKID="task_201308231529_97700_m_02" TASK_TYPE="MAP" 
> TASK_STATUS="FAILED" FINISH_TIME="1380217916479" 
> ERROR="java\.io\.IOException: Cannot rename parent(source): 
> ftp://x:x@backup2/data/, parent(destination):  ftp://x:x@backup2/data/
>   at 
> org\.apache\.hadoop\.fs\.ftp\.FTPFileSystem\.rename(FTPFileSystem\.java:557)
>   at 
> org\.apache\.hadoop\.fs\.ftp\.FTPFileSystem\.rename(FTPFileSystem\.java:522)
>   at 
> org\.apache\.hadoop\.mapred\.FileOutputCommitter\.moveTaskOutputs(FileOutputCommitter\.java:154)
>   at 
> org\.apache\.hadoop\.mapred\.FileOutputCommitter\.moveTaskOutputs(FileOutputCommitter\.java:172)
>   at 
> org\.apache\.hadoop\.mapred\.FileOutputCommitter\.commitTask(FileOutputCommitter\.java:132)
>   at 
> org\.apache\.hadoop\.mapred\.OutputCommitter\.commitTask(OutputCommitter\.java:221)
>   at org\.apache\.hadoop\.mapred\.Task\.commit(Task\.java:1000)
>   at org\.apache\.hadoop\.mapred\.Task\.done(Task\.java:870)
>   at org\.apache\.hadoop\.mapred\.MapTask\.run(MapTask\.java:329)
>   at org\.apache\.hadoop\.mapred\.Child$4\.run" TASK_ATTEMPT_ID="" .
> I googled a bit and added
> fs.ftp.host = backup2
> fs.ftp.user.backup2 = user
> fs.ftp.password.backup2 = password
> to core-site.xml, then I was able to execute:
> hadoop fs -ls ftp:///data/
> hadoop fs -rm ftp:///data/test.file
> but as soon as I try
> hadoop fs -mv file:///data/test.file ftp:///data/test2.file
> mv: `ftp:///data/test.file': Input/output error
> I enabled debug-logging in our ftp-server and got:
> Sep 27 15:24:33 backup2 ftpd[38241]: command: LIST /data
> Sep 27 15:24:33 backup2 ftpd[38241]: <--- 150
> Sep 27 15:24:33 backup2 ftpd[38241]: Opening BINARY mode data connection for 
> '/bin/ls'.
> Sep 27 15:24:33 backup2 ftpd[38241]: <--- 226
> Sep 27 15:24:33 backup2 ftpd[38241]: Transfer complete.
> Sep 27 15:24:33 backup2 ftpd[38241]: command: CWD ftp:/data
> Sep 27 15:24:33 backup2 ftpd[38241]: <--- 550
> Sep 27 15:24:33 backup2 ftpd[38241]: ftp:/data: No such file or directory.
> Sep 27 15:24:33 backup2 ftpd[38241]: command: RNFR test.file
> Sep 27 15:24:33 backup2 ftpd[38241]: <--- 550
> Looks like the generation of "CWD" is buggy: Hadoop tries to cd into 
> "ftp:/data" but should use "/data".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-10007) distcp / mv is not working on ftp

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-10007:
-

> distcp / mv is not working on ftp
> -
>
> Key: HADOOP-10007
> URL: https://issues.apache.org/jira/browse/HADOOP-10007
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
> Environment: Ubuntu 12.04.2 LTS
> Hadoop 2.0.0-cdh4.2.1
> Subversion 
> file:///var/lib/jenkins/workspace/generic-package-ubuntu64-12-04/CDH4.2.1-Packaging-Hadoop-2013-04-22_09-50-19/hadoop-2.0.0+960-1.cdh4.2.1.p0.9~precise/src/hadoop-common-project/hadoop-common
>  -r 144bd548d481c2774fab2bec2ac2645d190f705b
> Compiled by jenkins on Mon Apr 22 10:26:30 PDT 2013
> From source with checksum aef88defdddfb22327a107fbd7063395
>Reporter: Fabian Zimmermann
>Priority: Major
>
> I'm just trying to back up some files to our FTP server.
> hadoop distcp hdfs:///data/ ftp://user:pass@server/data/
> returns after some minutes with:
> Task TASKID="task_201308231529_97700_m_02" TASK_TYPE="MAP" 
> TASK_STATUS="FAILED" FINISH_TIME="1380217916479" 
> ERROR="java\.io\.IOException: Cannot rename parent(source): 
> ftp://x:x@backup2/data/, parent(destination):  ftp://x:x@backup2/data/
>   at 
> org\.apache\.hadoop\.fs\.ftp\.FTPFileSystem\.rename(FTPFileSystem\.java:557)
>   at 
> org\.apache\.hadoop\.fs\.ftp\.FTPFileSystem\.rename(FTPFileSystem\.java:522)
>   at 
> org\.apache\.hadoop\.mapred\.FileOutputCommitter\.moveTaskOutputs(FileOutputCommitter\.java:154)
>   at 
> org\.apache\.hadoop\.mapred\.FileOutputCommitter\.moveTaskOutputs(FileOutputCommitter\.java:172)
>   at 
> org\.apache\.hadoop\.mapred\.FileOutputCommitter\.commitTask(FileOutputCommitter\.java:132)
>   at 
> org\.apache\.hadoop\.mapred\.OutputCommitter\.commitTask(OutputCommitter\.java:221)
>   at org\.apache\.hadoop\.mapred\.Task\.commit(Task\.java:1000)
>   at org\.apache\.hadoop\.mapred\.Task\.done(Task\.java:870)
>   at org\.apache\.hadoop\.mapred\.MapTask\.run(MapTask\.java:329)
>   at org\.apache\.hadoop\.mapred\.Child$4\.run" TASK_ATTEMPT_ID="" .
> I googled a bit and added
> fs.ftp.host = backup2
> fs.ftp.user.backup2 = user
> fs.ftp.password.backup2 = password
> to core-site.xml, then I was able to execute:
> hadoop fs -ls ftp:///data/
> hadoop fs -rm ftp:///data/test.file
> but as soon as I try
> hadoop fs -mv file:///data/test.file ftp:///data/test2.file
> mv: `ftp:///data/test.file': Input/output error
> I enabled debug-logging in our ftp-server and got:
> Sep 27 15:24:33 backup2 ftpd[38241]: command: LIST /data
> Sep 27 15:24:33 backup2 ftpd[38241]: <--- 150
> Sep 27 15:24:33 backup2 ftpd[38241]: Opening BINARY mode data connection for 
> '/bin/ls'.
> Sep 27 15:24:33 backup2 ftpd[38241]: <--- 226
> Sep 27 15:24:33 backup2 ftpd[38241]: Transfer complete.
> Sep 27 15:24:33 backup2 ftpd[38241]: command: CWD ftp:/data
> Sep 27 15:24:33 backup2 ftpd[38241]: <--- 550
> Sep 27 15:24:33 backup2 ftpd[38241]: ftp:/data: No such file or directory.
> Sep 27 15:24:33 backup2 ftpd[38241]: command: RNFR test.file
> Sep 27 15:24:33 backup2 ftpd[38241]: <--- 550
> Looks like the generation of "CWD" is buggy: Hadoop tries to cd into 
> "ftp:/data" but should use "/data".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15297) Make S3A etag => checksum feature optional

2018-03-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393284#comment-16393284
 ] 

genericqa commented on HADOOP-15297:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
39s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15297 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913793/HADOOP-15297-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux b3d712a486e2 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3f7bd46 |
| maven | version: Apache Maven 

[jira] [Created] (HADOOP-15303) make s3a read fault injection configurable including "off"

2018-03-09 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15303:
---

 Summary: make s3a read fault injection configurable including "off"
 Key: HADOOP-15303
 URL: https://issues.apache.org/jira/browse/HADOOP-15303
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.1.0
Reporter: Steve Loughran


When trying to test distcp with large files and an inconsistent destination 
(P(fail) = 0.4), read() failures on the download can overload the retry logic 
in S3AInput, even though all I want to see is how listings cope.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15303) make s3a read fault injection configurable including "off"

2018-03-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393175#comment-16393175
 ] 

Steve Loughran commented on HADOOP-15303:
-

This is happening under {{ITestS3AContractDistCp.largeFilesToRemote}}; I managed 
to get the test failure on read because, with enough reads in a test and a high 
probability of read failure, eventually the failure gets escalated.

I'd like to be able to turn the read fault injection off, or set the limit to 
something low like "1".

> make s3a read fault injection configurable including "off"
> --
>
> Key: HADOOP-15303
> URL: https://issues.apache.org/jira/browse/HADOOP-15303
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> When trying to test distcp with large files and an inconsistent destination 
> (P(fail) = 0.4), read() failures on the download can overload the retry logic 
> in S3AInput, even though all I want to see is how listings cope.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15303) make s3a read fault injection configurable including "off"

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15303:

Priority: Minor  (was: Major)

> make s3a read fault injection configurable including "off"
> --
>
> Key: HADOOP-15303
> URL: https://issues.apache.org/jira/browse/HADOOP-15303
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Minor
>
> When trying to test distcp with large files and an inconsistent destination 
> (P(fail) = 0.4), read() failures on the download can overload the retry logic 
> in S3AInput, even though all I want to see is how listings cope.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15297) Make S3A etag => checksum feature optional

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15297:

Status: Patch Available  (was: Open)

> Make S3A etag => checksum feature optional
> --
>
> Key: HADOOP-15297
> URL: https://issues.apache.org/jira/browse/HADOOP-15297
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15297-001.patchh, HADOOP-15297-002.patch, 
> HADOOP-15297-002.patch
>
>
> HADOOP-15273 shows how distcp doesn't handle non-HDFS filesystems with 
> checksums.
> Exposing Etags as checksums, HADOOP-13282, breaks workflows which back up to 
> s3a.
> Rather than revert, I want to make it an option, off by default. Once we are 
> happy with distcp in future, we can turn it on.
> Why an option? Because it lines up with a successor to distcp which saves src 
> and dest checksums to a file and can then verify whether or not files have 
> really changed. Currently distcp relies on the dest checksum algorithm being 
> the same as the src's for incremental updates, but if either store doesn't 
> serve checksums, it silently downgrades to not checking. 
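
A sketch of the proposed switch, in the plain config style above; the property 
name below is an assumption based on the issue title - check the patch for the 
final name:

{code}
# off by default; when enabled, S3A serves the object etag as the checksum
fs.s3a.etag.checksum.enabled = false
{code}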



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15297) Make S3A etag => checksum feature optional

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15297:

Status: Open  (was: Patch Available)

docker/yetus unhappy; reattaching and resubmitting

> Make S3A etag => checksum feature optional
> --
>
> Key: HADOOP-15297
> URL: https://issues.apache.org/jira/browse/HADOOP-15297
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15297-001.patchh, HADOOP-15297-002.patch, 
> HADOOP-15297-002.patch
>
>
> HADOOP-15273 shows how distcp doesn't handle non-HDFS filesystems with 
> checksums.
> Exposing Etags as checksums, HADOOP-13282, breaks workflows which back up to 
> s3a.
> Rather than revert, I want to make it an option, off by default. Once we are 
> happy with distcp in future, we can turn it on.
> Why an option? Because it lines up with a successor to distcp which saves src 
> and dest checksums to a file and can then verify whether or not files have 
> really changed. Currently distcp relies on the dest checksum algorithm being 
> the same as the src's for incremental updates, but if either store doesn't 
> serve checksums, it silently downgrades to not checking. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15297) Make S3A etag => checksum feature optional

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15297:

Attachment: HADOOP-15297-002.patch

> Make S3A etag => checksum feature optional
> --
>
> Key: HADOOP-15297
> URL: https://issues.apache.org/jira/browse/HADOOP-15297
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15297-001.patchh, HADOOP-15297-002.patch, 
> HADOOP-15297-002.patch
>
>
> HADOOP-15273 shows how distcp doesn't handle non-HDFS filesystems with 
> checksums.
> Exposing Etags as checksums, HADOOP-13282, breaks workflows which back up to 
> s3a.
> Rather than revert, I want to make it an option, off by default. Once we are 
> happy with distcp in future, we can turn it on.
> Why an option? Because it lines up with a successor to distcp which saves src 
> and dest checksums to a file and can then verify whether or not files have 
> really changed. Currently distcp relies on the dest checksum algorithm being 
> the same as the src's for incremental updates, but if either store doesn't 
> serve checksums, it silently downgrades to not checking. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14898) Create official Docker images for development and testing features

2018-03-09 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393042#comment-16393042
 ] 

Elek, Marton commented on HADOOP-14898:
---

I opened the INFRA ticket: INFRA-16163

> Create official Docker images for development and testing features 
> ---
>
> Key: HADOOP-14898
> URL: https://issues.apache.org/jira/browse/HADOOP-14898
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HADOOP-14898.001.tar.gz, HADOOP-14898.002.tar.gz, 
> HADOOP-14898.003.tgz, docker_design.pdf
>
>
> This is the original mail from the mailing list:
> {code}
> TL;DR: I propose to create official hadoop images and upload them to the 
> dockerhub.
> GOAL/SCOPE: I would like to improve the existing documentation with 
> easy-to-use Docker-based recipes to start hadoop clusters with various 
> configurations.
> The images also could be used to test experimental features. For example 
> ozone could be tested easily with these compose file and configuration:
> https://gist.github.com/elek/1676a97b98f4ba561c9f51fce2ab2ea6
> Or even the configuration could be included in the compose file:
> https://github.com/elek/hadoop/blob/docker-2.8.0/example/docker-compose.yaml
> I would like to create separate example compose files for federation, HA, 
> metrics usage, etc. to make it easier to try out and understand the features.
> CONTEXT: There is an existing Jira 
> https://issues.apache.org/jira/browse/HADOOP-13397
> But it’s about a tool to generate production quality docker images (multiple 
> types, in a flexible way). If there are no objections, I will create a 
> separate issue to create simplified docker images for rapid prototyping and 
> investigating new features, and register the branch with the dockerhub to 
> create the images automatically.
> MY BACKGROUND: I have been working with Docker-based hadoop/spark clusters 
> for quite a while and have run them successfully in different environments 
> (kubernetes, docker-swarm, nomad-based scheduling, etc.). My work is 
> available from https://github.com/flokkr; it also handles more complex use 
> cases (eg. instrumenting java processes with btrace, or reading/reloading 
> configuration from consul).
> And IMHO it’s better for the official hadoop documentation to suggest using 
> official apache docker images rather than external ones (which could change).
> {code}
> The next list enumerates the key decision points regarding docker image 
> creation:
> A. automated dockerhub build  / jenkins build
> Docker images could be built on the dockerhub (a branch pattern should be 
> defined for a github repository and the location of the Docker files) or 
> could be built on a CI server and pushed.
> The second one is more flexible (it's easier to create a matrix build, for 
> example).
> The first one has the advantage that we get an additional flag on the 
> dockerhub indicating that the build is automated (and built from source by 
> the dockerhub).
> The decision is easy as ASF supports the first approach: (see 
> https://issues.apache.org/jira/browse/INFRA-12781?focusedCommentId=15824096=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15824096)
> B. source: binary distribution or source build
> The second question is about creating the docker image. One option is to 
> build the software on the fly during the creation of the docker image; the 
> other is to use the binary releases.
> I suggest using the second approach, as:
> 1. In that case hadoop:2.7.3 would contain exactly the same hadoop 
> distribution as the downloadable one.
> 2. We don't need to add development tools to the image, so the image can be 
> smaller (which matters, as the goal for this image is getting started as 
> fast as possible).
> 3. The docker definition will be simpler (and easier to maintain).
> Usually this approach is used in other projects (I checked Apache Zeppelin 
> and Apache Nutch).
> C. branch usage
> Another question is the location of the Dockerfile. It could live on the 
> official source-code branches (branch-2, trunk, etc.) or we can create 
> separate branches for the dockerhub (eg. docker/2.7, docker/2.8, docker/3.0).
> With the first approach it's easier to find the docker images, but it's less 
> flexible. For example, if we had a Dockerfile on the source code branch, it 
> would have to be used for every release (the Dockerfile from the tag 
> release-3.0.0 would be used for the 3.0 hadoop docker image). In that case 
> the release process is much harder: in case of a Dockerfile error (which 
> can be tested on the dockerhub only after the tagging), a new release would 
> be needed after fixing the Dockerfile.
> Another problem is that 
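
As a concrete illustration of option B above, a minimal Dockerfile sketch that 
installs a binary release rather than building from source; base image, 
version, and layout are illustrative only, not the proposed official image:

{code}
FROM openjdk:8-jre
ENV HADOOP_VERSION 2.7.3
# Install the released binary tarball so the image matches the downloadable
# distribution exactly; no build toolchain ends up in the image.
ADD https://archive.apache.org/dist/hadoop/common/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz /tmp/
RUN tar -xzf /tmp/hadoop-$HADOOP_VERSION.tar.gz -C /opt \
 && ln -s /opt/hadoop-$HADOOP_VERSION /opt/hadoop \
 && rm /tmp/hadoop-$HADOOP_VERSION.tar.gz
ENV HADOOP_HOME /opt/hadoop
ENV PATH $PATH:$HADOOP_HOME/bin
{code}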

[jira] [Updated] (HADOOP-15302) Enable DataNode/NameNode service plugins with Service Provider interface

2018-03-09 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-15302:
--
Status: Patch Available  (was: Open)

> Enable DataNode/NameNode service plugins with Service Provider interface
> 
>
> Key: HADOOP-15302
> URL: https://issues.apache.org/jira/browse/HADOOP-15302
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HADOOP-15302.001.patch
>
>
> HADOOP-5257 introduced ServicePlugin capabilities for NameNode/DataNode. As 
> of now they can only be activated via configuration values.
> I propose to activate plugins with the Service Provider Interface. If a 
> special service file is added to a jar, it would be enough to put that jar on 
> the classpath to activate the plugin. This would make it possible to add 
> optional components to NameNode/DataNode just by setting the classpath.
> This is the same API which Java 9 uses to consume defined 
> services.
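
A minimal sketch of the mechanism, assuming the existing 
org.apache.hadoop.util.ServicePlugin interface; the loading code is 
illustrative, not the patch itself:

{code}
import java.util.ServiceLoader;
import org.apache.hadoop.util.ServicePlugin;

class PluginDiscovery {
  // Finds every implementation listed in a jar's
  // META-INF/services/org.apache.hadoop.util.ServicePlugin file, so dropping
  // such a jar on the classpath is enough to activate the plugin.
  static Iterable<ServicePlugin> discoverPlugins() {
    return ServiceLoader.load(ServicePlugin.class);
  }
}
{code}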



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15302) Enable DataNode/NameNode service plugins with Service Provider interface

2018-03-09 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-15302:
--
Attachment: HADOOP-15302.001.patch

> Enable DataNode/NameNode service plugins with Service Provider interface
> 
>
> Key: HADOOP-15302
> URL: https://issues.apache.org/jira/browse/HADOOP-15302
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HADOOP-15302.001.patch
>
>
> HADOOP-5257 introduced ServicePlugin capabilities for NameNode/DataNode. As 
> of now they can only be activated via configuration values.
> I propose to activate plugins with the Service Provider Interface. If a 
> special service file is added to a jar, it would be enough to put that jar on 
> the classpath to activate the plugin. This would make it possible to add 
> optional components to NameNode/DataNode just by setting the classpath.
> This is the same API which Java 9 uses to consume defined 
> services.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15302) Enable DataNode/NameNode service plugins with Service Provider interface

2018-03-09 Thread Elek, Marton (JIRA)
Elek, Marton created HADOOP-15302:
-

 Summary: Enable DataNode/NameNode service plugins with Service 
Provider interface
 Key: HADOOP-15302
 URL: https://issues.apache.org/jira/browse/HADOOP-15302
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Elek, Marton
Assignee: Elek, Marton


HADOOP-5257 introduced ServicePlugin capabilities for NameNode/DataNode. As of 
now they can only be activated via configuration values. 

I propose to activate plugins with the Service Provider Interface. If a special 
service file is added to a jar, it would be enough to put that jar on the 
classpath to activate the plugin. This would make it possible to add optional 
components to NameNode/DataNode just by setting the classpath.

This is the same API which Java 9 uses to consume defined services.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14707) AbstractContractDistCpTest to test attr preservation with -p, verify blobstores downgrade

2018-03-09 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392948#comment-16392948
 ] 

Ewan Higgs commented on HADOOP-14707:
-

I take it we use String instead of an enum of capabilities because we don't 
want the capabilities to be part of the protobuf layer - which would mean a 
costly rollout process?
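
For illustration, a string probe needs no shared enum and degrades gracefully 
on old servers - a sketch assuming the StreamCapabilities-style 
hasCapability(String) pattern:

{code}
import org.apache.hadoop.fs.StreamCapabilities;

class CapabilityProbe {
  // An enum would have to be versioned in lockstep across client and server;
  // an unknown String capability simply probes as "false".
  static boolean supportsHsync(Object stream) {
    return stream instanceof StreamCapabilities
        && ((StreamCapabilities) stream).hasCapability("hsync");
  }
}
{code}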

> AbstractContractDistCpTest to test attr preservation with -p, verify 
> blobstores downgrade
> -
>
> Key: HADOOP-14707
> URL: https://issues.apache.org/jira/browse/HADOOP-14707
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, fs/s3, test, tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14707-001.patch, HADOOP-14707-002.patch, 
> HADOOP-14707-003.patch
>
>
> It *may* be that trying to use {{distcp -p}} with S3a triggers a stack trace 
> {code}
> java.lang.UnsupportedOperationException: S3AFileSystem doesn't support 
> getXAttrs 
> at org.apache.hadoop.fs.FileSystem.getXAttrs(FileSystem.java:2559) 
> at 
> org.apache.hadoop.tools.util.DistCpUtils.toCopyListingFileStatus(DistCpUtils.java:322)
>  
> {code}
> Add a test to {{AbstractContractDistCpTest}} to verify that this is handled 
> better. What does "handled better" mean here? Either ignore the option or 
> fail with "don't do that" text.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15234) NPE when initializing KMSWebApp

2018-03-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392899#comment-16392899
 ] 

genericqa commented on HADOOP-15234:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
5s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15234 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913776/HADOOP-15234.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c075e4adbb0b 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3f7bd46 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14289/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14289/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> NPE when initializing KMSWebApp
> 

[jira] [Updated] (HADOOP-15301) TestDynamoDBMetadataStore test case fails on PPC

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15301:

Issue Type: Sub-task  (was: Test)
Parent: HADOOP-15220

> TestDynamoDBMetadataStore test case fails on PPC
> 
>
> Key: HADOOP-15301
> URL: https://issues.apache.org/jira/browse/HADOOP-15301
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
> Environment: $ uname -a
> Linux 4bc09ac224a8 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:30:22 UTC 
> 2016 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Parita Johari
>Priority: Minor
>
> org.apache.hadoop.fs.s3a.AWSServiceIOException: initTable on 
> TestDynamoDBMetadataStore: 
> com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: The request 
> processing has failed because of an unknown error, exception or failure. 
> (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: InternalFailure; 
> Request ID: 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56): The request processing has 
> failed because of an unknown error, exception or failure. (Service: 
> AmazonDynamoDBv2; Status Code: 500; Error Code: InternalFailure; Request ID: 
> 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56) at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:389)
>  at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:181) 
> at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:928)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:291)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore$DynamoDBMSContract.<init>(TestDynamoDBMetadataStore.java:146)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore$DynamoDBMSContract.<init>(TestDynamoDBMetadataStore.java:129)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore.setUpBeforeClass(TestDynamoDBMetadataStore.java:102)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407) 
> Caused by: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: 
> The request processing has failed because of an unknown error, exception or 
> failure. (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: 
> InternalFailure; Request ID: 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56) at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
>  at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) at 
> 

[jira] [Updated] (HADOOP-15301) TestDynamoDBMetadataStore test case fails on PPC

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15301:

Component/s: fs/s3

> TestDynamoDBMetadataStore test case fails on PPC
> 
>
> Key: HADOOP-15301
> URL: https://issues.apache.org/jira/browse/HADOOP-15301
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/s3
>Affects Versions: 3.2.0
> Environment: $ uname -a
> Linux 4bc09ac224a8 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:30:22 UTC 
> 2016 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Parita Johari
>Priority: Minor
>
> org.apache.hadoop.fs.s3a.AWSServiceIOException: initTable on 
> TestDynamoDBMetadataStore: 
> com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: The request 
> processing has failed because of an unknown error, exception or failure. 
> (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: InternalFailure; 
> Request ID: 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56): The request processing has 
> failed because of an unknown error, exception or failure. (Service: 
> AmazonDynamoDBv2; Status Code: 500; Error Code: InternalFailure; Request ID: 
> 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56) at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:389)
>  at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:181) 
> at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:928)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:291)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore$DynamoDBMSContract.(TestDynamoDBMetadataStore.java:146)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore$DynamoDBMSContract.<init>(TestDynamoDBMetadataStore.java:146)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore$DynamoDBMSContract.<init>(TestDynamoDBMetadataStore.java:129)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407) 
> Caused by: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: 
> The request processing has failed because of an unknown error, exception or 
> failure. (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: 
> InternalFailure; Request ID: 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56) at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
>  at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) at 
> com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:2925)
>  at 
> 

[jira] [Updated] (HADOOP-15301) TestDynamoDBMetadataStore test case fails on PPC

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15301:

Affects Version/s: 3.2.0

> TestDynamoDBMetadataStore test case fails on PPC
> 
>
> Key: HADOOP-15301
> URL: https://issues.apache.org/jira/browse/HADOOP-15301
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/s3
>Affects Versions: 3.2.0
> Environment: $ uname -a
> Linux 4bc09ac224a8 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:30:22 UTC 
> 2016 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Parita Johari
>Priority: Minor
>
> org.apache.hadoop.fs.s3a.AWSServiceIOException: initTable on 
> TestDynamoDBMetadataStore: 
> com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: The request 
> processing has failed because of an unknown error, exception or failure. 
> (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: InternalFailure; 
> Request ID: 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56): The request processing has 
> failed because of an unknown error, exception or failure. (Service: 
> AmazonDynamoDBv2; Status Code: 500; Error Code: InternalFailure; Request ID: 
> 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56) at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:389)
>  at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:181) 
> at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:928)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:291)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore$DynamoDBMSContract.<init>(TestDynamoDBMetadataStore.java:146)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore$DynamoDBMSContract.<init>(TestDynamoDBMetadataStore.java:129)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore.setUpBeforeClass(TestDynamoDBMetadataStore.java:102)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407) 
> Caused by: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: 
> The request processing has failed because of an unknown error, exception or 
> failure. (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: 
> InternalFailure; Request ID: 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56) at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
>  at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) at 
> com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:2925)
>  

[jira] [Updated] (HADOOP-15301) TestDynamoDBMetadataStore test case fails on PPC

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15301:

Summary: TestDynamoDBMetadataStore test case fails on PPC  (was: 
TestDynamoDBMetadataStore test case fails in Hadoop.)

> TestDynamoDBMetadataStore test case fails on PPC
> 
>
> Key: HADOOP-15301
> URL: https://issues.apache.org/jira/browse/HADOOP-15301
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/s3
>Affects Versions: 3.2.0
> Environment: $ uname -a
> Linux 4bc09ac224a8 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:30:22 UTC 
> 2016 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Parita Johari
>Priority: Minor
>
> org.apache.hadoop.fs.s3a.AWSServiceIOException: initTable on 
> TestDynamoDBMetadataStore: 
> com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: The request 
> processing has failed because of an unknown error, exception or failure. 
> (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: InternalFailure; 
> Request ID: 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56): The request processing has 
> failed because of an unknown error, exception or failure. (Service: 
> AmazonDynamoDBv2; Status Code: 500; Error Code: InternalFailure; Request ID: 
> 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56) at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:389)
>  at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:181) 
> at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:928)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:291)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore$DynamoDBMSContract.<init>(TestDynamoDBMetadataStore.java:146)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore$DynamoDBMSContract.<init>(TestDynamoDBMetadataStore.java:129)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore.setUpBeforeClass(TestDynamoDBMetadataStore.java:102)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407) 
> Caused by: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: 
> The request processing has failed because of an unknown error, exception or 
> failure. (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: 
> InternalFailure; Request ID: 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56) at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
>  at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) at 
> 

[jira] [Updated] (HADOOP-14918) remove the Local Dynamo DB test option

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14918:

Attachment: HADOOP-14918-003.patch

> remove the Local Dynamo DB test option
> --
>
> Key: HADOOP-14918
> URL: https://issues.apache.org/jira/browse/HADOOP-14918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch, 
> HADOOP-14918-003.patch
>
>
> I'm going to propose cutting out the localdynamo test option for s3guard:
> * the local DDB JAR is unmaintained and lags the SDK we work with; eventually 
> there'll be differences in API.
> * as the local dynamo DB is unshaded, it complicates classpath setup for the 
> build. Remove it and there's no need to worry about versions of anything 
> other than the shaded AWS SDK.
> * it complicates test runs: now we need to test against both localdynamo *and* 
> real dynamo.
> * but we can't ignore real dynamo, because that's the one which matters.
> While the local option promises to reduce test costs, really, it's just 
> adding complexity. If you are testing with s3guard, you need a real 
> table to test against. And with the exception of those people testing s3a 
> against non-AWS, consistent endpoints, everyone should be testing with 
> S3Guard.
> -Straightforward to remove.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15301) TestDynamoDBMetadataStore test case fails in Hadoop.

2018-03-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392870#comment-16392870
 ] 

Steve Loughran commented on HADOOP-15301:
-

This is the local one, isn't it? I'd actually like to cut that one out 
completely (HADOOP-14918)...we just need to move directly onto DDB 
(HADOOP-14068); do that and you wouldn't have to worry about it. That should 
be easier than the sqlite fix.

Fancy taking this problem on?

> TestDynamoDBMetadataStore test case fails in Hadoop.
> 
>
> Key: HADOOP-15301
> URL: https://issues.apache.org/jira/browse/HADOOP-15301
> Project: Hadoop Common
>  Issue Type: Test
> Environment: $ uname -a
> Linux 4bc09ac224a8 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:30:22 UTC 
> 2016 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Parita Johari
>Priority: Minor
>
> org.apache.hadoop.fs.s3a.AWSServiceIOException: initTable on 
> TestDynamoDBMetadataStore: 
> com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: The request 
> processing has failed because of an unknown error, exception or failure. 
> (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: InternalFailure; 
> Request ID: 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56): The request processing has 
> failed because of an unknown error, exception or failure. (Service: 
> AmazonDynamoDBv2; Status Code: 500; Error Code: InternalFailure; Request ID: 
> 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56) at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:389)
>  at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:181) 
> at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:928)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:291)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore$DynamoDBMSContract.<init>(TestDynamoDBMetadataStore.java:146)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore$DynamoDBMSContract.<init>(TestDynamoDBMetadataStore.java:129)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore.setUpBeforeClass(TestDynamoDBMetadataStore.java:102)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407) 
> Caused by: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: 
> The request processing has failed because of an unknown error, exception or 
> failure. (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: 
> InternalFailure; Request ID: 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56) at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
>  at 
> 

[jira] [Commented] (HADOOP-15297) Make S3A etag => checksum feature optional

2018-03-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392866#comment-16392866
 ] 

genericqa commented on HADOOP-15297:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m  
7s{color} | {color:red} Docker failed to build yetus/hadoop:tp-28671. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15297 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913782/HADOOP-15297-002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14290/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make S3A etag => checksum feature optional
> --
>
> Key: HADOOP-15297
> URL: https://issues.apache.org/jira/browse/HADOOP-15297
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15297-001.patchh, HADOOP-15297-002.patch
>
>
> HADOOP-15273 shows how distcp doesn't handle non-HDFS filesystems with 
> checksums.
> Exposing Etags as checksums, HADOOP-13282, breaks workflows which back up to 
> s3a.
> Rather than revert, I want to make it an option, off by default. Once we are 
> happy with distcp in future, we can turn it on.
> Why an option? Because it lines up for a successor to distcp which saves src 
> and dest checksums to a file and can then verify whether or not files have 
> really changed. Currently distcp relies on the dest checksum algorithm being 
> the same as the src for incremental updates, but if either of the stores 
> doesn't serve checksums, it silently downgrades to not checking. 
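
A minimal sketch of how such an off-by-default option could behave; the key 
name and default below are assumptions, not the committed values:

{code}
// Hedged sketch only: the configuration key and default are assumptions.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;

class EtagChecksumOptionSketch {
  // Assumed key; off by default so existing distcp workflows keep working.
  static final String ETAG_CHECKSUM_ENABLED = "fs.s3a.etag.checksum.enabled";

  static FileChecksum maybeChecksum(Configuration conf, FileChecksum etagChecksum) {
    // When disabled, getFileChecksum() would return null, which distcp
    // treats as "no checksum to compare" rather than a mismatch.
    return conf.getBoolean(ETAG_CHECKSUM_ENABLED, false) ? etagChecksum : null;
  }
}
{code}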



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15297) Make S3A etag => checksum feature optional

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15297:

Status: Patch Available  (was: Open)

patch 002; rm spurious stub test case

test: s3 ireland w/ s3guard auth mode

> Make S3A etag => checksum feature optional
> --
>
> Key: HADOOP-15297
> URL: https://issues.apache.org/jira/browse/HADOOP-15297
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15297-001.patchh, HADOOP-15297-002.patch
>
>
> HADOOP-15273 shows how distcp doesn't handle non-HDFS filesystems with 
> checksums.
> Exposing Etags as checksums, HADOOP-13282, breaks workflows which back up to 
> s3a.
> Rather than revert, I want to make it an option, off by default. Once we are 
> happy with distcp in future, we can turn it on.
> Why an option? Because it lines up for a successor to distcp which saves src 
> and dest checksums to a file and can then verify whether or not files have 
> really changed. Currently distcp relies on the dest checksum algorithm being 
> the same as the src for incremental updates, but if either of the stores 
> doesn't serve checksums, it silently downgrades to not checking. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15297) Make S3A etag => checksum feature optional

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15297:

Attachment: HADOOP-15297-002.patch

> Make S3A etag => checksum feature optional
> --
>
> Key: HADOOP-15297
> URL: https://issues.apache.org/jira/browse/HADOOP-15297
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15297-001.patchh, HADOOP-15297-002.patch
>
>
> HADOOP-15273 shows how distcp doesn't handle non-HDFS filesystems with 
> checksums.
> Exposing Etags as checksums, HADOOP-13282, breaks workflows which back up to 
> s3a.
> Rather than revert, I want to make it an option, off by default. Once we are 
> happy with distcp in future, we can turn it on.
> Why an option? Because it lines up for a successor to distcp which saves src 
> and dest checksums to a file and can then verify whether or not files have 
> really changed. Currently distcp relies on the dest checksum algorithm being 
> the same as the src for incremental updates, but if either of the stores 
> doesn't serve checksums, it silently downgrades to not checking. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15297) Make S3A etag => checksum feature optional

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15297:

Status: Open  (was: Patch Available)

> Make S3A etag => checksum feature optional
> --
>
> Key: HADOOP-15297
> URL: https://issues.apache.org/jira/browse/HADOOP-15297
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15297-001.patchh
>
>
> HADOOP-15273 shows how distcp doesn't handle non-HDFS filesystems with 
> checksums.
> Exposing Etags as checksums, HADOOP-13282, breaks workflows which back up to 
> s3a.
> Rather than revert, I want to make it an option, off by default. Once we are 
> happy with distcp in future, we can turn it on.
> Why an option? Because it lines up for a successor to distcp which saves src 
> and dest checksums to a file and can then verify whether or not files have 
> really changed. Currently distcp relies on the dest checksum algorithm being 
> the same as the src for incremental updates, but if either of the stores 
> doesn't serve checksums, it silently downgrades to not checking. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15301) TestDynamoDBMetadataStore test case fails in Hadoop.

2018-03-09 Thread Parita Johari (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392821#comment-16392821
 ] 

Parita Johari commented on HADOOP-15301:


* Requires the sqlite4java lib for the ppc architecture.
 * Ported the sqlite4java library to ppc and raised an issue for it on 
Bitbucket (see the sketch below).
 * Bitbucket link: 
[https://bitbucket.org/almworks/sqlite4java/issues/85/require-so-file-for-ppc-architecture]
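
A minimal sketch of how a test JVM can be pointed at the ported native 
library, assuming sqlite4java's standard mechanism of resolving its JNI 
binding through the {{sqlite4java.library.path}} system property; the 
directory name is illustrative only:

{code}
// Hedged sketch: point sqlite4java (used by DynamoDBLocal) at a directory
// containing the ppc64le build of the native library, before any local
// DynamoDB server is created.
public final class SqliteNativeLibSetup {
  private SqliteNativeLibSetup() {
  }

  public static void configure(String nativeLibDir) {
    System.setProperty("sqlite4java.library.path", nativeLibDir);
  }

  public static void main(String[] args) {
    configure("target/native-libs");   // path is an assumption for illustration
  }
}
{code}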

> TestDynamoDBMetadataStore test case fails in Hadoop.
> 
>
> Key: HADOOP-15301
> URL: https://issues.apache.org/jira/browse/HADOOP-15301
> Project: Hadoop Common
>  Issue Type: Test
> Environment: $ uname -a
> Linux 4bc09ac224a8 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:30:22 UTC 
> 2016 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Parita Johari
>Priority: Minor
>
> org.apache.hadoop.fs.s3a.AWSServiceIOException: initTable on 
> TestDynamoDBMetadataStore: 
> com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: The request 
> processing has failed because of an unknown error, exception or failure. 
> (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: InternalFailure; 
> Request ID: 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56): The request processing has 
> failed because of an unknown error, exception or failure. (Service: 
> AmazonDynamoDBv2; Status Code: 500; Error Code: InternalFailure; Request ID: 
> 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56) at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:389)
>  at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:181) 
> at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:928)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:291)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore$DynamoDBMSContract.<init>(TestDynamoDBMetadataStore.java:146)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore$DynamoDBMSContract.<init>(TestDynamoDBMetadataStore.java:129)
>  at 
> org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore.setUpBeforeClass(TestDynamoDBMetadataStore.java:102)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407) 
> Caused by: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: 
> The request processing has failed because of an unknown error, exception or 
> failure. (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: 
> InternalFailure; Request ID: 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56) at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
>  at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
>  at 
> 

[jira] [Created] (HADOOP-15301) TestDynamoDBMetadataStore test case fails in Hadoop.

2018-03-09 Thread Parita Johari (JIRA)
Parita Johari created HADOOP-15301:
--

 Summary: TestDynamoDBMetadataStore test case fails in Hadoop.
 Key: HADOOP-15301
 URL: https://issues.apache.org/jira/browse/HADOOP-15301
 Project: Hadoop Common
  Issue Type: Test
 Environment: $ uname -a
Linux 4bc09ac224a8 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:30:22 UTC 2016 
ppc64le ppc64le ppc64le GNU/Linux
Reporter: Parita Johari


org.apache.hadoop.fs.s3a.AWSServiceIOException: initTable on 
TestDynamoDBMetadataStore: 
com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: The request 
processing has failed because of an unknown error, exception or failure. 
(Service: AmazonDynamoDBv2; Status Code: 500; Error Code: InternalFailure; 
Request ID: 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56): The request processing has 
failed because of an unknown error, exception or failure. (Service: 
AmazonDynamoDBv2; Status Code: 500; Error Code: InternalFailure; Request ID: 
239adb3e-5ae6-4d6c-a0a7-2a256f17ae56) at 
org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:389) 
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:181) at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:928)
 at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:291)
 at 
org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore$DynamoDBMSContract.<init>(TestDynamoDBMetadataStore.java:146)
 at 
org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore$DynamoDBMSContract.<init>(TestDynamoDBMetadataStore.java:129)
 at 
org.apache.hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore.setUpBeforeClass(TestDynamoDBMetadataStore.java:102)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
 at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160) 
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
 at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407) 
Caused by: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: The 
request processing has failed because of an unknown error, exception or 
failure. (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: 
InternalFailure; Request ID: 239adb3e-5ae6-4d6c-a0a7-2a256f17ae56) at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
 at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
 at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
 at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
 at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
 at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
 at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
 at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
 at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:2925)
 at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:2901)
 at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeDescribeTable(AmazonDynamoDBClient.java:1515)
 at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.describeTable(AmazonDynamoDBClient.java:1491)
 at com.amazonaws.services.dynamodbv2.document.Table.describe(Table.java:137) 
at 

[jira] [Commented] (HADOOP-15234) NPE when initializing KMSWebApp

2018-03-09 Thread fang zhenyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392811#comment-16392811
 ] 

fang zhenyi commented on HADOOP-15234:
--

Thanks [~xiaochen], [~shahrs87] and [~xyao] for reviewing the patch. I have 
added {{providerString}} to the exception message in HADOOP-15234.003.patch.
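
A minimal sketch of the kind of guard involved; the exact wording and 
placement follow the attached patch, not this sketch:

{code}
// Hedged sketch: fail fast with the configured provider URI instead of
// letting a null provider surface later as an NPE in KeyProviderExtension.
import org.apache.hadoop.crypto.key.KeyProvider;

final class KmsProviderGuardSketch {
  private KmsProviderGuardSketch() {
  }

  static KeyProvider checkNotNull(KeyProvider keyProvider, String providerString) {
    if (keyProvider == null) {
      throw new IllegalStateException(
          "No KeyProvider was initialized; check the KMS key provider "
              + "configuration: " + providerString);
    }
    return keyProvider;
  }
}
{code}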

> NPE when initializing KMSWebApp
> ---
>
> Key: HADOOP-15234
> URL: https://issues.apache.org/jira/browse/HADOOP-15234
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: fang zhenyi
>Priority: Major
> Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch, 
> HADOOP-15234.003.patch
>
>
> During KMS startup, if the {{keyProvider}} is null, it will NPE inside 
> KeyProviderExtension.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.KeyProviderExtension.<init>(KeyProviderExtension.java:43)
>   at 
> org.apache.hadoop.crypto.key.CachingKeyProvider.<init>(CachingKeyProvider.java:93)
>   at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170)
> {noformat}
> We're investigating the exact scenario that could lead to this, but the NPE 
> and log around it can be improved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15234) NPE when initializing KMSWebApp

2018-03-09 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HADOOP-15234:
-
Status: Patch Available  (was: In Progress)

> NPE when initializing KMSWebApp
> ---
>
> Key: HADOOP-15234
> URL: https://issues.apache.org/jira/browse/HADOOP-15234
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: fang zhenyi
>Priority: Major
> Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch, 
> HADOOP-15234.003.patch
>
>
> During KMS startup, if the {{keyProvider}} is null, it will NPE inside 
> KeyProviderExtension.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.KeyProviderExtension.<init>(KeyProviderExtension.java:43)
>   at 
> org.apache.hadoop.crypto.key.CachingKeyProvider.<init>(CachingKeyProvider.java:93)
>   at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170)
> {noformat}
> We're investigating the exact scenario that could lead to this, but the NPE 
> and log around it can be improved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15234) NPE when initializing KMSWebApp

2018-03-09 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HADOOP-15234:
-
Status: In Progress  (was: Patch Available)

> NPE when initializing KMSWebApp
> ---
>
> Key: HADOOP-15234
> URL: https://issues.apache.org/jira/browse/HADOOP-15234
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: fang zhenyi
>Priority: Major
> Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch, 
> HADOOP-15234.003.patch
>
>
> During KMS startup, if the {{keyProvider}} is null, it will NPE inside 
> KeyProviderExtension.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.KeyProviderExtension.<init>(KeyProviderExtension.java:43)
>   at 
> org.apache.hadoop.crypto.key.CachingKeyProvider.<init>(CachingKeyProvider.java:93)
>   at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170)
> {noformat}
> We're investigating the exact scenario that could lead to this, but the NPE 
> and log around it can be improved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15234) NPE when initializing KMSWebApp

2018-03-09 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HADOOP-15234:
-
Attachment: HADOOP-15234.003.patch

> NPE when initializing KMSWebApp
> ---
>
> Key: HADOOP-15234
> URL: https://issues.apache.org/jira/browse/HADOOP-15234
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: fang zhenyi
>Priority: Major
> Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch, 
> HADOOP-15234.003.patch
>
>
> During KMS startup, if the {{keyProvider}} is null, it will NPE inside 
> KeyProviderExtension.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.KeyProviderExtension.<init>(KeyProviderExtension.java:43)
>   at 
> org.apache.hadoop.crypto.key.CachingKeyProvider.<init>(CachingKeyProvider.java:93)
>   at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170)
> {noformat}
> We're investigating the exact scenario that could lead to this, but the NPE 
> and log around it can be improved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15281) Distcp to add no-rename copy option

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-15281:
---

Assignee: (was: Steve Loughran)
Target Version/s: 3.2.0

> Distcp to add no-rename copy option
> ---
>
> Key: HADOOP-15281
> URL: https://issues.apache.org/jira/browse/HADOOP-15281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Major
>
> Currently Distcp uploads a file by one of two strategies:
> # append parts
> # copy to temp then rename
> Option 2 executes the following sequence in {{promoteTmpToTarget}}:
> {code}
> if ((fs.exists(target) && !fs.delete(target, false))
> || (!fs.exists(target.getParent()) && !fs.mkdirs(target.getParent()))
> || !fs.rename(tmpTarget, target)) {
>   throw new IOException("Failed to promote tmp-file:" + tmpTarget
>   + " to: " + target);
> }
> {code}
> For any object store, that's a lot of HTTP requests; for S3A you are looking 
> at 12+ requests and an O(data) copy call. 
> This is not a good upload strategy for any store which manifests its output 
> atomically at the end of the write().
> Proposed: add a switch to write directly to the dest path, either as a conf 
> option or a CLI option.
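
A minimal sketch of what such a switch could look like, assuming a boolean 
conf option; the key name below is hypothetical, not taken from any patch:

{code}
// Hedged sketch: with direct write enabled there is no tmp file to promote,
// so the delete/mkdirs/rename sequence above disappears entirely.
// "distcp.direct.write" is a hypothetical key, used here for illustration.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

final class DirectWriteSketch {
  static Path chooseWriteTarget(Configuration conf, Path tmpTarget, Path target) {
    boolean direct = conf.getBoolean("distcp.direct.write", false);
    // Object stores that manifest output atomically on close() can safely
    // take the direct path; rename-based filesystems keep the tmp file.
    return direct ? target : tmpTarget;
  }
}
{code}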



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15264) AWS "shaded" SDK 1.271 is pulling in netty 4.1.17

2018-03-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377371#comment-16377371
 ] 

Steve Loughran edited comment on HADOOP-15264 at 3/9/18 11:34 AM:
--

I've got a patch which excludes the JARs; they're only needed for a new 
kinesis-video module

I'd rather do that than upgrade the AWS SDK to a new one, in case there are 
other surprises. We've been using the 1.11.271 for a few weeks, no NPE stack 
traces, no complaints that we are closing streams in an abort() call, etc. 
Happy.

Patch attached, tested against AWS S3 London; one failure in 
ITestS3GuardToolLocal; trying S3 Ireland and everything is rejected at 400. I 
think something is up with my S3 binding today, as I've been seeing failures 
elsewhere. Assume unrelated.


was (Author: ste...@apache.org):
I've got a patch which excludes the JARs; they're only needed 

I'd rather do that than upgrade the AWS SDK to a new one, in case there are 
other surprises. We've been using the 1.11.271 for a few weeks, no NPE stack 
traces, no complaints that we are closing streams in an abort() call, etc. 
Happy.

Patch attached, tested against AWS S3 London; one failure in 
ITestS3GuardToolLocal; trying S3 Ireland and everything is rejected at 400. I 
think something is up with my S3 binding today, as I've been seeing failures 
elsewhere. Assume unrelated.

> AWS "shaded" SDK 1.271 is pulling in netty 4.1.17
> -
>
> Key: HADOOP-15264
> URL: https://issues.apache.org/jira/browse/HADOOP-15264
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: HADOOP-15264-001.patch
>
>
> The latest versions of the AWS Shaded SDK are declaring a dependency on netty 
> 4.1.17
> {code}
> [INFO] +- org.apache.hadoop:hadoop-aws:jar:3.2.0-SNAPSHOT:compile
> [INFO] |  \- com.amazonaws:aws-java-sdk-bundle:jar:1.11.271:compile
> [INFO] | +- io.netty:netty-codec-http:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-codec:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-handler:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-buffer:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-common:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-transport:jar:4.1.17.Final:compile
> [INFO] | \- io.netty:netty-resolver:jar:4.1.17.Final:compile
> {code}
> We either exclude these or roll back HADOOP-15040.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13592) Outputs errors and warnings by checkstyle at compile time

2018-03-09 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-13592.
---
Resolution: Won't Fix

> Outputs errors and warnings by checkstyle at compile time
> -
>
> Key: HADOOP-13592
> URL: https://issues.apache.org/jira/browse/HADOOP-13592
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Priority: Major
> Attachments: HADOOP-13592.001.patch
>
>
> Currently, Apache Hadoop has lots of checkstyle errors and warnings, but they 
> are not output at compile time. This prevents us from fixing the errors and 
> warnings.
> We should output errors and warnings at compile time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories

2018-03-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392729#comment-16392729
 ] 

Steve Loughran commented on HADOOP-15209:
-

I cancelled because I want to simplify that retry logic. By removing it :). 
It's too complex, and I don't see what it delivers.

I've looked through all the delete() calls and, apart from FTP oddness, all 
filesystems only return false on delete() if the dir wasn't there.

So the algorithm I want is:
! delete() => log.info() & continue

But also: have the -i (ignoreErrors) option work for this phase too, so any 
exception from delete is downgraded as well.
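
A minimal sketch of that algorithm (illustrative names, not the committed 
code), under the assumption above that a false return from delete() only 
means the path was already gone:

{code}
// Hedged sketch of the proposed delete phase: log-and-continue on
// delete() == false, and let the -i option downgrade exceptions too.
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class DeletePhaseSketch {
  private static final Logger LOG = LoggerFactory.getLogger(DeletePhaseSketch.class);

  static void deleteMissing(FileSystem fs, List<Path> targets, boolean ignoreFailures)
      throws IOException {
    for (Path target : targets) {
      try {
        if (!fs.delete(target, true)) {
          // All mainstream filesystems only return false when the path
          // was not there, so log and continue rather than fail.
          LOG.info("{} was already deleted; skipping", target);
        }
      } catch (IOException e) {
        if (ignoreFailures) {          // behaviour of the -i option
          LOG.warn("Failed to delete {}", target, e);
        } else {
          throw e;
        }
      }
    }
  }
}
{code}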

I'm also not seeing that stack trace above when I turn on inconsistent s3a 
listings with the old code. It's complaining about duplicate entries, which 
could be something up with our simulated listings, or something else.

> DistCp to eliminate needless deletion of files under already-deleted 
> directories
> 
>
> Key: HADOOP-15209
> URL: https://issues.apache.org/jira/browse/HADOOP-15209
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, 
> HADOOP-15209-003.patch, HADOOP-15209-004.patch, HADOOP-15209-005.patch, 
> HADOOP-15209-006.patch
>
>
> DistCP issues a delete(file) request even if it is underneath an already deleted 
> directory. This generates needless load on filesystems/object stores, and, if 
> the store throttles delete, can dramatically slow down the delete operation.
> If the distcp delete operation can build a history of deleted directories, 
> then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not 
> overload the heap of the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15277) remove .FluentPropertyBeanIntrospector from CLI operation log output

2018-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392715#comment-16392715
 ] 

Hudson commented on HADOOP-15277:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13800 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13800/])
HADOOP-15277. Remove .FluentPropertyBeanIntrospector from CLI operation 
(stevel: rev 3f7bd467979042161897a7c91c5b094b83164f75)
* (edit) hadoop-common-project/hadoop-common/src/main/conf/log4j.properties


> remove .FluentPropertyBeanIntrospector from CLI operation log output
> 
>
> Key: HADOOP-15277
> URL: https://issues.apache.org/jira/browse/HADOOP-15277
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.1.0, 3.0.2
>
> Attachments: HADOOP-15277-001.patch
>
>
> When hadoop metrics is started, a message about bean introspection appears.
> {code}
> 18/03/01 18:43:54 INFO beanutils.FluentPropertyBeanIntrospector: Error when 
> creating PropertyDescriptor for public final void 
> org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)!
>  Ignoring this property.
> {code}
> When using wasb or s3a, this message appears in the client logs, because 
> they both start metrics
> I propose to raise the log level to ERROR for that class in log4j.properties
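
The proposed change is a one-line addition to {{log4j.properties}}, 
presumably along these lines (the exact committed line may differ):

{noformat}
log4j.logger.org.apache.commons.beanutils.FluentPropertyBeanIntrospector=ERROR
{noformat}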



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15273) distcp can't handle remote stores with different checksum algorithms

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15273:

Fix Version/s: 3.0.2

> distcp can't handle remote stores with different checksum algorithms
> 
>
> Key: HADOOP-15273
> URL: https://issues.apache.org/jira/browse/HADOOP-15273
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Fix For: 3.1.0, 3.0.2
>
> Attachments: HADOOP-15273-001.patch, HADOOP-15273-002.patch, 
> HADOOP-15273-003.patch
>
>
> When using distcp without {{-skipcrcchecks}}, if there's a checksum mismatch 
> between src and dest store types (e.g. hdfs to s3), the error message 
> will talk about blocksize, even when it's the underlying checksum protocol 
> itself which is the cause of the failure:
> bq. Source and target differ in block-size. Use -pb to preserve block-sizes 
> during copy. Alternatively, skip checksum-checks altogether, using -skipCrc. 
> (NOTE: By skipping checksums, one runs the risk of masking data-corruption 
> during file-transfer.)
> update: the CRC check always takes place on a distcp upload before the file 
> is renamed into place, *and you can't disable it then*
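
A minimal sketch of why mixed stores trip this check, assuming the usual 
{{FileChecksum}} semantics where equality requires both the algorithm name 
and the bytes to match; the null handling mirrors the silent downgrade 
described above:

{code}
// Hedged sketch, not the distcp source: checksums from different algorithms
// (e.g. HDFS MD5-of-MD5-of-CRC32C vs an S3A etag) never compare equal, while
// a store serving no checksum at all silently "passes" the check.
import org.apache.hadoop.fs.FileChecksum;

final class ChecksumCompareSketch {
  static boolean checksumsMatch(FileChecksum source, FileChecksum target) {
    if (source == null || target == null) {
      return true; // no checksum available: the check is silently skipped
    }
    return source.equals(target); // algorithm name and bytes must both match
  }
}
{code}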



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15277) remove .FluentPropertyBeanIntrospector from CLI operation log output

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15277:

Priority: Minor  (was: Major)

> remove .FluentPropertyBeanIntrospector from CLI operation log output
> 
>
> Key: HADOOP-15277
> URL: https://issues.apache.org/jira/browse/HADOOP-15277
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.1.0, 3.0.2
>
> Attachments: HADOOP-15277-001.patch
>
>
> When hadoop metrics is started, a message about bean introspection appears.
> {code}
> 18/03/01 18:43:54 INFO beanutils.FluentPropertyBeanIntrospector: Error when 
> creating PropertyDescriptor for public final void 
> org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)!
>  Ignoring this property.
> {code}
> When using wasb or s3a, this message appears in the client logs, because 
> they both start metrics
> I propose to raise the log level to ERROR for that class in log4j.properties



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15277) remove .FluentPropertyBeanIntrospector from CLI operation log output

2018-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15277:

   Resolution: Fixed
Fix Version/s: 3.0.2
   3.1.0
   Status: Resolved  (was: Patch Available)

> remove .FluentPropertyBeanIntrospector from CLI operation log output
> 
>
> Key: HADOOP-15277
> URL: https://issues.apache.org/jira/browse/HADOOP-15277
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.1.0, 3.0.2
>
> Attachments: HADOOP-15277-001.patch
>
>
> When hadoop metrics is started, a message about bean introspection appears.
> {code}
> 18/03/01 18:43:54 INFO beanutils.FluentPropertyBeanIntrospector: Error when 
> creating PropertyDescriptor for public final void 
> org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)!
>  Ignoring this property.
> {code}
> When using wasb or s3a, this message appears in the client logs, because 
> they both start metrics
> I propose to raise the log level to ERROR for that class in log4j.properties



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15299) Bump Hadoop's Jackson 2 dependency 2.9.x

2018-03-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392669#comment-16392669
 ] 

Steve Loughran commented on HADOOP-15299:
-

I agree on the need for this, but also fear it. 

I think we need to make sure that all client dependencies can be picked up 
via shading, with the cloud storage JARs coming next. There's also 1+ JAR used 
by Spark, plus what Hive wants.

If we can do this, we can then do protobuf.

> Bump Hadoop's Jackson 2 dependency 2.9.x
> 
>
> Key: HADOOP-15299
> URL: https://issues.apache.org/jira/browse/HADOOP-15299
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
>
> There are a few new CVEs open against Jackson 2.7.x. It doesn't (necessarily) 
> mean Hadoop is vulnerable to the attack - I don't know that it is, but fixes 
> were released for Jackson 2.8.x and 2.9.x but not 2.7.x (which we're on). We 
> shouldn't be on an unmaintained line, regardless. HBase is already on 2.9.x, 
> we have a shaded client now, the API changes are relatively minor and so far 
> in my testing I haven't seen any problems. I think many of our usual reasons 
> to hesitate upgrading this dependency don't apply.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-03-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392600#comment-16392600
 ] 

genericqa commented on HADOOP-14445:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  3s{color} | {color:orange} hadoop-common-project: The patch generated 1 new 
+ 285 unchanged - 7 fixed = 286 total (was 292) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 55s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
51s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
16s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
35s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-14445 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913730/HADOOP-14445.06.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 794dac7f974b 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 113f401 |
| maven | version: Apache Maven 3.3.9 |
|