[jira] [Commented] (HADOOP-15033) Use java.util.zip.CRC32C for Java 9 and above

2018-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321830#comment-16321830
 ] 

ASF GitHub Bot commented on HADOOP-15033:
-

Github user dchuyko commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/291#discussion_r160883520
  
--- Diff: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
 ---
@@ -43,6 +49,9 @@
   public static final int CHECKSUM_CRC32C  = 2;
   public static final int CHECKSUM_DEFAULT = 3; 
   public static final int CHECKSUM_MIXED   = 4;
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(DataChecksum.class);
+  private static volatile boolean useJava9Crc32C = 
Shell.isJavaVersionAtLeast(9);
--- End diff --

@aajisaka The initial design was to hold the constructor method handle as a 
static final field in the main class and not to use the flag at all.
There was a concern that Java might not behave as it must: the class or its 
constructor, although guaranteed by the spec, might not be found by the 
standard lookup, or there might be an error during invocation. I'd assume it 
would be fine to fail globally in that case. Some of the surrounding code may 
also be considered liable to fail.
These scenarios are covered by try-catch blocks. If something bad happens on 
9+, we flip the flag so the old internal plain-Java implementation is used, 
and we log the error.
The API should be thread safe/correct, so we can imagine a race where 
multiple threads attempt the initialization, hit errors, and each put down the 
flag.
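
For illustration, a minimal sketch of that pattern, assuming it sits inside 
DataChecksum next to the LOG and useJava9Crc32C fields from the diff; the 
helper name newCrc32C() and the exact log message are made up, not the actual 
patch:

{code}
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.util.zip.Checksum;

  // Starts true on Java 9+; any failure below lowers it permanently.
  private static volatile boolean useJava9Crc32C =
      Shell.isJavaVersionAtLeast(9);

  static Checksum newCrc32C() {
    if (useJava9Crc32C) {
      try {
        // Resolve java.util.zip.CRC32C reflectively so this still
        // compiles against Java 8, where the class does not exist.
        Class<?> clazz = Class.forName("java.util.zip.CRC32C");
        MethodHandle ctor = MethodHandles.publicLookup()
            .findConstructor(clazz, MethodType.methodType(void.class));
        return (Checksum) ctor.invoke();
      } catch (Throwable t) {
        // Something bad happened on 9+: lower the flag and fall back to
        // the plain Java implementation. Racing threads may each lower
        // the flag, which is harmless.
        useJava9Crc32C = false;
        LOG.error("CRC32C is unavailable, using PureJavaCrc32C", t);
      }
    }
    return new PureJavaCrc32C();
  }
{code}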


> Use java.util.zip.CRC32C for Java 9 and above
> -
>
> Key: HADOOP-15033
> URL: https://issues.apache.org/jira/browse/HADOOP-15033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, util
>Affects Versions: 3.0.0
>Reporter: Dmitry Chuyko
>  Labels: Java9, common, jdk9
> Attachments: HADOOP-15033.001.patch, HADOOP-15033.001.patch, 
> HADOOP-15033.002.patch, HADOOP-15033.003.patch, HADOOP-15033.003.patch, 
> HADOOP-15033.004.patch, HADOOP-15033.005.diff, HADOOP-15033.005.diff, 
> HADOOP-15033.005.diff, HADOOP-15033.005.patch, HADOOP-15033.006.patch, 
> HADOOP-15033.007.patch, HADOOP-15033.007.patch, HADOOP-15033.008.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.010.patch
>
>
> A java.util.zip.CRC32C implementation is available since Java 9.
> https://docs.oracle.com/javase/9/docs/api/java/util/zip/CRC32C.html
> Platform-specific assembler intrinsics make it more efficient than any pure 
> Java implementation.
> Hadoop is compiled against Java 8, but on Java 9 the class constructor can 
> be reached through a method handle to create Checksum instances at runtime.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15165) Fix typo in YARN documentation

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321771#comment-16321771
 ] 

genericqa commented on HADOOP-15165:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
26m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15165 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905613/HADOOP-15165.1.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 66885dc29dee 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 12d0645 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 340 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13951/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix typo in YARN documentation
> --
>
> Key: HADOOP-15165
> URL: https://issues.apache.org/jira/browse/HADOOP-15165
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HADOOP-15165.1.patch
>
>
> The link of "YARN Federation" is wrong.






[jira] [Commented] (HADOOP-15165) Fix typo in YARN documentation

2018-01-10 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321749#comment-16321749
 ] 

Akira Ajisaka commented on HADOOP-15165:


LGTM, +1 pending Jenkins.

> Fix typo in YARN documentation
> --
>
> Key: HADOOP-15165
> URL: https://issues.apache.org/jira/browse/HADOOP-15165
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HADOOP-15165.1.patch
>
>
> The link of "YARN Federation" is wrong.






[jira] [Updated] (HADOOP-15165) Fix typo in YARN documentation

2018-01-10 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15165:
--
Status: Patch Available  (was: Open)

> Fix typo in YARN documentation
> --
>
> Key: HADOOP-15165
> URL: https://issues.apache.org/jira/browse/HADOOP-15165
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HADOOP-15165.1.patch
>
>
> The link of "YARN Federation" is wrong.






[jira] [Updated] (HADOOP-15165) Fix typo in YARN documentation

2018-01-10 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15165:
--
Attachment: HADOOP-15165.1.patch

Uploaded the 1st patch.

> Fix typo in YARN documentation
> --
>
> Key: HADOOP-15165
> URL: https://issues.apache.org/jira/browse/HADOOP-15165
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HADOOP-15165.1.patch
>
>
> The link of "YARN Federation" is wrong.






[jira] [Created] (HADOOP-15165) Fix typo in YARN documentation

2018-01-10 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HADOOP-15165:
-

 Summary: Fix typo in YARN documentation
 Key: HADOOP-15165
 URL: https://issues.apache.org/jira/browse/HADOOP-15165
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma
Priority: Minor


The link of "YARN Federation" is wrong.






[jira] [Commented] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2018-01-10 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321714#comment-16321714
 ] 

Aaron Fabbri commented on HADOOP-15079:
---

Integration tests all passed with S3Guard (local) and without it, in 
us-west-2. No further comments on the patch.

> ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after 
> OutputCommitter patch
> ---
>
> Key: HADOOP-15079
> URL: https://issues.apache.org/jira/browse/HADOOP-15079
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-15079-001.patch
>
>
> I see this test failing with "object_delete_requests expected:<1> but 
> was:<2>". I printed stack traces whenever this metric was incremented, and 
> found the root cause to be that innerMkdirs is now causing two calls to 
> delete fake directories when it previously caused only one. It is called once 
> inside createFakeDirectory, and once directly inside innerMkdirs later:
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1625)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2634)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2599)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1498)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$11(S3AFileSystem.java:2684)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:108)
> at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:259)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:255)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:230)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2682)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2657)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2021)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1956)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2305)
> at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:209)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74
> {code}
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> ...
> {code}

[jira] [Assigned] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2018-01-10 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri reassigned HADOOP-14927:
-

Assignee: Aaron Fabbri

> ITestS3GuardTool failures in testDestroyNoBucket()
> --
>
> Key: HADOOP-14927
> URL: https://issues.apache.org/jira/browse/HADOOP-14927
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
>
> Hit this when testing for the Hadoop 3.0.0-beta1 RC0.
> {noformat}
> hadoop-3.0.0-beta1-src/hadoop-tools/hadoop-aws$ mvn clean verify 
> -Dit.test="ITestS3GuardTool*" -Dtest=none -Ds3guard -Ddynamo
> ...
> Failed tests: 
>   
> ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
>   ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
> {noformat}






[jira] [Commented] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2018-01-10 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321710#comment-16321710
 ] 

Aaron Fabbri commented on HADOOP-14927:
---

Just hit this again, so I will put up a patch soon. The message has changed 
slightly:

{noformat}
Failures: 
[ERROR]   
ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:231 
Expected a java.io.FileNotFoundException to be thrown, but got the result: : 0
[ERROR]   
ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:231 
Expected a java.io.FileNotFoundException to be thrown, but got the result: : 0
[INFO] 
{noformat}

> ITestS3GuardTool failures in testDestroyNoBucket()
> --
>
> Key: HADOOP-14927
> URL: https://issues.apache.org/jira/browse/HADOOP-14927
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha3
>Reporter: Aaron Fabbri
>Priority: Minor
>
> Hit this when testing for the Hadoop 3.0.0-beta1 RC0.
> {noformat}
> hadoop-3.0.0-beta1-src/hadoop-tools/hadoop-aws$ mvn clean verify 
> -Dit.test="ITestS3GuardTool*" -Dtest=none -Ds3guard -Ddynamo
> ...
> Failed tests: 
>   
> ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
>   ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
> {noformat}






[jira] [Updated] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2018-01-10 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-14927:
--
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-14825

> ITestS3GuardTool failures in testDestroyNoBucket()
> --
>
> Key: HADOOP-14927
> URL: https://issues.apache.org/jira/browse/HADOOP-14927
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha3
>Reporter: Aaron Fabbri
>Priority: Minor
>
> Hit this when testing for the Hadoop 3.0.0-beta1 RC0.
> {noformat}
> hadoop-3.0.0-beta1-src/hadoop-tools/hadoop-aws$ mvn clean verify 
> -Dit.test="ITestS3GuardTool*" -Dtest=none -Ds3guard -Ddynamo
> ...
> Failed tests: 
>   
> ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
>   ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
> {noformat}






[jira] [Commented] (HADOOP-14597) Native compilation broken with OpenSSL-1.1.0 because EVP_CIPHER_CTX has been made opaque

2018-01-10 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321696#comment-16321696
 ] 

Miklos Szegedi commented on HADOOP-14597:
-

I verified that the patch still compiles on branch-2 and on an old platform 
like CentOS 6.4.

> Native compilation broken with OpenSSL-1.1.0 because EVP_CIPHER_CTX has been 
> made opaque
> 
>
> Key: HADOOP-14597
> URL: https://issues.apache.org/jira/browse/HADOOP-14597
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
> Environment: openssl-1.1.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14597.00.patch, HADOOP-14597.01.patch, 
> HADOOP-14597.02.patch, HADOOP-14597.03.patch, HADOOP-14597.04.patch
>
>
> Trying to build Hadoop trunk on Fedora 26, which has openssl-devel-1.1.0, 
> fails with this error:
> {code}[WARNING] 
> /home/raviprak/Code/hadoop/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c:
>  In function ‘check_update_max_output_len’:
> [WARNING] 
> /home/raviprak/Code/hadoop/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c:256:14:
>  error: dereferencing pointer to incomplete type ‘EVP_CIPHER_CTX {aka struct 
> evp_cipher_ctx_st}’
> [WARNING]if (context->flags & EVP_CIPH_NO_PADDING) {
> [WARNING]   ^~
> {code}
> https://github.com/openssl/openssl/issues/962 mattcaswell says
> {quote}
> One of the primary differences between master (OpenSSL 1.1.0) and the 1.0.2 
> version is that many types have been made opaque, i.e. applications are no 
> longer allowed to look inside the internals of the structures
> {quote}






[jira] [Commented] (HADOOP-15033) Use java.util.zip.CRC32C for Java 9 and above

2018-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321648#comment-16321648
 ] 

ASF GitHub Bot commented on HADOOP-15033:
-

Github user aajisaka commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/291#discussion_r160858464
  
--- Diff: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
 ---
@@ -43,6 +49,9 @@
   public static final int CHECKSUM_CRC32C  = 2;
   public static final int CHECKSUM_DEFAULT = 3; 
   public static final int CHECKSUM_MIXED   = 4;
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(DataChecksum.class);
+  private static volatile boolean useJava9Crc32C = 
Shell.isJavaVersionAtLeast(9);
--- End diff --

@steveloughran @dchuyko The flag `useJava9Crc32C` can be final, and I think 
making it final would also avoid synchronization. What do you think?

Sorry for the very late reply.
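
For comparison, a rough sketch of that alternative (assumed shape, not actual 
code): resolve the constructor once in a static initializer and keep the 
result final, so nothing changes at runtime and no synchronization is needed:

{code}
  // Resolved once at class load; null means "fall back to PureJavaCrc32C".
  private static final MethodHandle CRC32C_CTOR = resolveCrc32cCtor();

  private static MethodHandle resolveCrc32cCtor() {
    if (!Shell.isJavaVersionAtLeast(9)) {
      return null;
    }
    try {
      return MethodHandles.publicLookup().findConstructor(
          Class.forName("java.util.zip.CRC32C"),
          MethodType.methodType(void.class));
    } catch (ReflectiveOperationException e) {
      return null; // permanent fallback; no flag to flip later
    }
  }
{code}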


> Use java.util.zip.CRC32C for Java 9 and above
> -
>
> Key: HADOOP-15033
> URL: https://issues.apache.org/jira/browse/HADOOP-15033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, util
>Affects Versions: 3.0.0
>Reporter: Dmitry Chuyko
>  Labels: Java9, common, jdk9
> Attachments: HADOOP-15033.001.patch, HADOOP-15033.001.patch, 
> HADOOP-15033.002.patch, HADOOP-15033.003.patch, HADOOP-15033.003.patch, 
> HADOOP-15033.004.patch, HADOOP-15033.005.diff, HADOOP-15033.005.diff, 
> HADOOP-15033.005.diff, HADOOP-15033.005.patch, HADOOP-15033.006.patch, 
> HADOOP-15033.007.patch, HADOOP-15033.007.patch, HADOOP-15033.008.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.010.patch
>
>
> A java.util.zip.CRC32C implementation is available since Java 9.
> https://docs.oracle.com/javase/9/docs/api/java/util/zip/CRC32C.html
> Platform-specific assembler intrinsics make it more efficient than any pure 
> Java implementation.
> Hadoop is compiled against Java 8, but on Java 9 the class constructor can 
> be reached through a method handle to create Checksum instances at runtime.






[jira] [Commented] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-01-10 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321643#comment-16321643
 ] 

Aaron Fabbri commented on HADOOP-13761:
---

Reminder comment here: we should also:

- Update the MetadataStore javadoc to say that implementations *must* 
implement their own retries as needed to recover from transient issues such 
as service unavailability or network connectivity blips. (The sanest design 
decision IMO, especially if more backends are added in the future.) See the 
sketch after this list.

- Update any S3Guard-specific retry annotations in the S3A FS to assume that 
MetadataStores do their own retries, e.g. the 
{{Retries.OnceTranslated("s3guard not retrying")}} being added to 
listLocatedStatus() in HADOOP-15079.
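
As a minimal illustration of that contract (not Hadoop code; the helper and 
its parameters are assumptions), each implementation would wrap its backend 
calls in something like:

{code}
import java.io.IOException;
import java.util.concurrent.Callable;

  static <T> T withRetries(Callable<T> op, int maxAttempts, long baseDelayMs)
      throws Exception {
    for (int attempt = 1; ; attempt++) {
      try {
        return op.call();
      } catch (IOException e) {        // treat as possibly transient
        if (attempt >= maxAttempts) {
          throw e;                     // retries exhausted: surface it
        }
        Thread.sleep(baseDelayMs << (attempt - 1));  // exponential backoff
      }
    }
  }
{code}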

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Steve Loughran
>Priority: Blocker
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify a maximum retry duration, as some applications 
> would prefer to receive an error eventually rather than wait indefinitely.  
> We should also keep statistics when inconsistency is detected and we enter a 
> retry loop.






[jira] [Commented] (HADOOP-15119) AzureNativeFileSystemStore's rename swallow InterruptedExceptions

2018-01-10 Thread Yin Huai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321534#comment-16321534
 ] 

Yin Huai commented on HADOOP-15119:
---

cc [~tmarquardt]

> AzureNativeFileSystemStore's rename swallow InterruptedExceptions
> -
>
> Key: HADOOP-15119
> URL: https://issues.apache.org/jira/browse/HADOOP-15119
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Yin Huai
>
> AzureNativeFileSystemStore's rename calls 
> [waitForCopyToComplete|https://github.com/apache/hadoop/blob/d69b7358b65128197b4c0fe5ef3c02f3d59864b3/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java#L2742],
>  which swallows InterruptedException and prevents the current thread from 
> being interrupted. Once we catch the exception, we should call 
> Thread.currentThread().interrupt(), so that if this thread blocks at a later 
> time, an InterruptedException will be properly thrown.
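
A minimal sketch of the suggested fix (the loop shape and the copyDone() 
check are hypothetical, not the actual AzureNativeFileSystemStore code; the 
point is only the interrupt-restoring idiom):

{code}
  private void waitForCopyToComplete(long pollIntervalMs) {
    while (!copyDone()) {              // copyDone() is a made-up check
      try {
        Thread.sleep(pollIntervalMs);
      } catch (InterruptedException e) {
        // Re-set the interrupt status instead of swallowing it, so any
        // later blocking call on this thread throws InterruptedException.
        Thread.currentThread().interrupt();
        return;
      }
    }
  }
{code}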






[jira] [Commented] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2018-01-10 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321527#comment-16321527
 ] 

Aaron Fabbri commented on HADOOP-15079:
---

Fragile bits of code.. thanks for digging in to this.

Curious.. why is {{copyFile()}} {{Mixed}} instead of {{Translated}}?

Me, looking a gift horse in the mouth: "How did these unrelated annotation 
changes get in here?" ;-)
 
{noformat}
-  @Retries.OnceTranslated
+  @Retries.OnceTranslated("s3guard not retrying")
{noformat}

Not always retrying, yeah, until we finish HADOOP-13761. I'll add a reminder 
comment there to clean this up.

Also, below this, you remove the try / catch / translateException() outer block 
but claim it is still {{Translated}}
{noformat}
   public RemoteIterator listLocatedStatus(final Path f,
   final PathFilter filter)
   throws FileNotFoundException, IOException {

-try {

-listing.createFileStatusListingIterator(path,
-createListObjectsRequest(key, "/"),

-} catch (AmazonClientException e) {
-  throw translateException("listLocatedStatus", path, e);
{noformat}

Looking at createFileStatusListingIterator() above: it creates an 
ObjectListingIterator, which calls the {{RetryRaw}}-annotated listObjects(). 
That does indeed appear to be Raw, so I think you've changed 
listLocatedStatus() to Mixed.
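
(For reference, {{Translated}} boils down to this shape, mirroring the 
translateException() call in the quoted code; getStatusTranslated() and 
rawGetStatus() are made-up stand-ins:)

{code}
  public FileStatus getStatusTranslated(Path path) throws IOException {
    try {
      return rawGetStatus(path);       // hypothetical Raw-annotated call
    } catch (AmazonClientException e) {
      // Raw SDK failures are converted to IOExceptions before escaping.
      throw translateException("getStatus", path, e);
    }
  }
{code}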

{noformat}
  // issue: do we need to do this, given that createFakeDirectory will do 
that?
  S3Guard.makeDirsOrdered(metadataStore, metadataStoreDirs, username, true);
{noformat}

Hmm... now that finishedWrite() does S3Guard.addAncestors() (IIRC it did not 
originally), maybe we don't need this, and the whole makeDirsOrdered() thing 
can go away. A FS with real directories would probably still want that 
function, but it will always be in git history. I'm fine with trying it out 
here or doing a separate JIRA; up to you.

{noformat}
+  boolean outcome = innerDelete(innerGetFileStatus(f, true), recursive);
+  maybeCreateFakeParentDirectory(f);
+  return outcome;
{noformat}
Do you want an {{if (outcome) }} condition around maybeCreateFakeParent()?  I 
guess it does its own check for root so probably not necessary.

Moving this out of innerDelete() looks good.

{quote}
fixed up equality check bug in rename() which was comparing parent paths on 
object identity, not path, so not skipping if src and dest were in same dir 
unless they shared a path instance (which in tests, they did!)
{quote}
Wow, good catch.  Looks like that has been there since S3A was committed.
Your tests sharing a path instance implies that you were renaming from the 
root dir to the root dir (the only case where {{Path#getParent()}} doesn't 
allocate a new {{Path}}). Sound right?

{noformat}
-  // this is complicated because getParent(a/b/c/) returns a/b/c, but
-  // we want a/b. See HADOOP-14428 for more details.
-  deleteUnnecessaryFakeDirectories(new Path(f.toString()).getParent());
   return true;
 }
{noformat}
Ok. I confirmed that {{S3AFileSystem.qualify(Path)}} does strip trailing slash 
so you are not reintroducing HADOOP-14428.

Off to dinner.. Will test and finish review later.









> ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after 
> OutputCommitter patch
> ---
>
> Key: HADOOP-15079
> URL: https://issues.apache.org/jira/browse/HADOOP-15079
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-15079-001.patch
>
>
> I see this test failing with "object_delete_requests expected:<1> but 
> was:<2>". I printed stack traces whenever this metric was incremented, and 
> found the root cause to be that innerMkdirs is now causing two calls to 
> delete fake directories when it previously caused only one. It is called once 
> inside createFakeDirectory, and once directly inside innerMkdirs later:
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1625)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2634)
> ...
> {code}

[jira] [Resolved] (HADOOP-12928) Update netty to 3.10.5.Final to sync with zookeeper

2018-01-10 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu resolved HADOOP-12928.

Resolution: Duplicate

> Update netty to 3.10.5.Final to sync with zookeeper
> ---
>
> Key: HADOOP-12928
> URL: https://issues.apache.org/jira/browse/HADOOP-12928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.2
>Reporter: Hendy Irawan
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12928-branch-2.00.patch, 
> HADOOP-12928-branch-2.01.patch, HADOOP-12928-branch-2.02.patch, 
> HADOOP-12928.01.patch, HADOOP-12928.02.patch, HADOOP-12928.03.patch, 
> HDFS-12928.00.patch
>
>
> Update netty to 3.7.1.Final because hadoop-client 2.7.2 depends on zookeeper 
> 3.4.6 which depends on netty 3.7.x. Related to HADOOP-12927
> Pull request: https://github.com/apache/hadoop/pull/85






[jira] [Commented] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2018-01-10 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321310#comment-16321310
 ] 

Sean Mackrory commented on HADOOP-15079:


Your explanation sounds good to me. Sure enough, tests pass without S3Guard, 
but with S3Guard they're not doing so great:

{code}
ITestS3GuardListConsistency.testInconsistentS3ClientDeletes:527->Assert.assertEquals:555->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88
 InconsistentAmazonS3Client added back objects incorrectly in a non-recursive 
listing expected:<3> but was:<0>
ITestS3AContractMkdir>AbstractContractMkdirTest.testMkdirOverParentFile:103->AbstractFSContractTestBase.assertIsFile:316
 » FileNotFound
ITestS3AContractMkdir>AbstractContractMkdirTest.testMkdirsDoesNotRemoveParentDirectories:164
 » 
ITestS3AContractGetFileStatusV1List>AbstractContractGetFileStatusTest.testComplexDirActions:146->AbstractContractGetFileStatusTest.checkListFilesComplexDirRecursive:242->AbstractContractGetFileStatusTest.verifyFileStats:444
 » FileNotFound
ITestS3AEncryptionSSEKMSDefaultKey>AbstractTestS3AEncryption.testEncryption:57->AbstractTestS3AEncryption.validateEncryptionForFilesize:79->AbstractS3ATestBase.writeThenReadFile:134->AbstractS3ATestBase.writeThenReadFile:147
 » FileNotFound
ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:231->AbstractS3GuardToolTestBase.run:95
 » IllegalArgument
ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:231->AbstractS3GuardToolTestBase.run:95
 » IllegalArgument
{code}

Can do a more in-depth review later.

> ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after 
> OutputCommitter patch
> ---
>
> Key: HADOOP-15079
> URL: https://issues.apache.org/jira/browse/HADOOP-15079
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-15079-001.patch
>
>
> I see this test failing with "object_delete_requests expected:<1> but 
> was:<2>". I printed stack traces whenever this metric was incremented, and 
> found the root cause to be that innerMkdirs is now causing two calls to 
> delete fake directories when it previously caused only one. It is called once 
> inside createFakeDirectory, and once directly inside innerMkdirs later:
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1625)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2634)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2599)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1498)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$11(S3AFileSystem.java:2684)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:108)
> at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:259)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:255)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:230)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2682)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2657)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2021)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1956)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2305)
> at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:209)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> ...
> {code}

[jira] [Commented] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321169#comment-16321169
 ] 

genericqa commented on HADOOP-15079:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 4 
new + 6 unchanged - 1 fixed = 10 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
34s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15079 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905531/HADOOP-15079-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 80ef0fafe33b 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 12d0645 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13950/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13950/testReport/ |
| Max. process+thread count | 303 (vs. ulimit of 5000) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13950/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2018-01-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321080#comment-16321080
 ] 

Steve Loughran commented on HADOOP-15079:
-

HADOOP-15079 Patch 0001: reduce number of superfluous delete and mkdir ops in 
renames

* innerDelete() doesn't do the mkdirs, pulled up to delete()
* mkdirs() skips the superfluous delete of fake dirs
* fixed up equality check bug in rename() which was comparing parent paths on 
object identity, not path, so not skipping if src and dest were in same dir 
unless they shared a path instance (which in tests, they did!)
* test that too
* shiny Java 8 lambda stuff to make the extended test more readable, including 
full metrics on assert failures
* marked up some more S3A and Listing operations with retry semantics, 
including tune copyFile

Issue: I don't see why mkdirs() needs to call 
S3Guard.makeDirsOrdered(metadataStore, metadataStoreDirs, username, true), 
given that createFakeDirectory will do it by way of PUT -> finishedWrite -> 
S3Guard.addAncestors. [~fabbri]: what do you say?

Issue 2: The progress callback in copyFile. We could make that a singleton of 
the class; all it does is update a counter. Worthwhile?

Testing: S3 ireland, without s3guard.

Anyway, yes: too many deletes. Something to track.

FWIW, I'm not sure this is all *just* a regression from the committer. The 
equality check bug has probably been around for a while; we just never 
noticed, as we had never stepped through the code line by line in a debugger 
trying to work out why the delete was happening.
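
To spell the equality bug out (hypothetical variable names, not the actual 
rename() diff):

{code}
  Path srcParent = src.getParent();
  Path dstParent = dst.getParent();

  if (srcParent == dstParent) {        // old check: object identity only
    // cleanup skipped only when both sides shared the same Path instance
  }
  if (srcParent.equals(dstParent)) {   // fix: value equality on the path
    // cleanup correctly skipped whenever src and dest share a parent dir
  }
{code}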

> ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after 
> OutputCommitter patch
> ---
>
> Key: HADOOP-15079
> URL: https://issues.apache.org/jira/browse/HADOOP-15079
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-15079-001.patch
>
>
> I see this test failing with "object_delete_requests expected:<1> but 
> was:<2>". I printed stack traces whenever this metric was incremented, and 
> found the root cause to be that innerMkdirs is now causing two calls to 
> delete fake directories when it previously caused only one. It is called once 
> inside createFakeDirectory, and once directly inside innerMkdirs later:
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1625)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2634)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2599)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1498)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$11(S3AFileSystem.java:2684)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:108)
> at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:259)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:255)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:230)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2682)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2657)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2021)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1956)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2305)
> at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:209)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> ...
> {code}

[jira] [Updated] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2018-01-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15079:

Status: Patch Available  (was: Open)

> ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after 
> OutputCommitter patch
> ---
>
> Key: HADOOP-15079
> URL: https://issues.apache.org/jira/browse/HADOOP-15079
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-15079-001.patch
>
>
> I see this test failing with "object_delete_requests expected:<1> but 
> was:<2>". I printed stack traces whenever this metric was incremented, and 
> found the root cause to be that innerMkdirs is now causing two calls to 
> delete fake directories when it previously caused only one. It is called once 
> inside createFakeDirectory, and once directly inside innerMkdirs later:
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1625)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2634)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2599)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1498)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$11(S3AFileSystem.java:2684)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:108)
> at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:259)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:255)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:230)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2682)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2657)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2021)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1956)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2305)
> at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:209)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74
> {code}
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
> at 
> 

[jira] [Updated] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2018-01-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15079:

Attachment: HADOOP-15079-001.patch

> ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after 
> OutputCommitter patch
> ---
>
> Key: HADOOP-15079
> URL: https://issues.apache.org/jira/browse/HADOOP-15079
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-15079-001.patch
>
>
[jira] [Assigned] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2018-01-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-15079:
---

Assignee: Steve Loughran

> ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after 
> OutputCommitter patch
> ---
>
> Key: HADOOP-15079
> URL: https://issues.apache.org/jira/browse/HADOOP-15079
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-15079-001.patch
>
>
> I see this test failing with "object_delete_requests expected:<1> but 
> was:<2>". I printed stack traces whenever this metric was incremented, and 
> found the root cause to be that innerMkdirs is now causing two calls to 
> delete fake directories when it previously caused only one. It is called once 
> inside createFakeDirectory, and once directly inside innerMkdirs later:
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1625)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2634)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2599)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1498)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$11(S3AFileSystem.java:2684)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:108)
> at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:259)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:255)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:230)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2682)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2657)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2021)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1956)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2305)
> at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:209)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74
> {code}
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
> at 
> 

[jira] [Commented] (HADOOP-15129) Datanode caches namenode DNS lookup failure and cannot startup

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321037#comment-16321037
 ] 

genericqa commented on HADOOP-15129:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 174 unchanged - 0 fixed = 175 total (was 174) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 55s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
43s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15129 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905521/HADOOP-15129.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 47dbfee78bbd 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 12d0645 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13949/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13949/testReport/ |
| Max. process+thread count | 1467 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13949/console |
| Powered by | Apache Yetus 

[jira] [Updated] (HADOOP-15060) TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime flaky

2018-01-10 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated HADOOP-15060:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 3.0.1
  2.10.0
  3.1.0
Target Version/s: 3.1.0, 2.10.0, 3.0.1
  Status: Resolved  (was: Patch Available)

Committed to trunk, branch-3.0 and branch-2. Thank you for the review 
[~yufeigu]!

> TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime flaky
> ---
>
> Key: HADOOP-15060
> URL: https://issues.apache.org/jira/browse/HADOOP-15060
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: YARN-7553.000.patch
>
>
> {code}
> [ERROR] 
> testFiniteGroupResolutionTime(org.apache.hadoop.security.TestShellBasedUnixGroupsMapping)
>   Time elapsed: 61.975 s  <<< FAILURE!
> java.lang.AssertionError: 
> Expected the logs to carry a message about command timeout but was: 
> 2017-11-22 00:10:57,523 WARN  security.ShellBasedUnixGroupsMapping 
> (ShellBasedUnixGroupsMapping.java:getUnixGroups(181)) - unable to return 
> groups for user foobarnonexistinguser
> PartialGroupNameException The user name 'foobarnonexistinguser' is not found. 
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:275)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:178)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
>   at 
> org.apache.hadoop.security.TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime(TestShellBasedUnixGroupsMapping.java:278)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15060) TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime flaky

2018-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320906#comment-16320906
 ] 

Hudson commented on HADOOP-15060:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13479 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13479/])
HADOOP-15060. (szegedim: rev 12d0645990a878f78216235c800ae4e157796160)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedUnixGroupsMapping.java


> TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime flaky
> ---
>
> Key: HADOOP-15060
> URL: https://issues.apache.org/jira/browse/HADOOP-15060
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7553.000.patch
>
>
> {code}
> [ERROR] 
> testFiniteGroupResolutionTime(org.apache.hadoop.security.TestShellBasedUnixGroupsMapping)
>   Time elapsed: 61.975 s  <<< FAILURE!
> java.lang.AssertionError: 
> Expected the logs to carry a message about command timeout but was: 
> 2017-11-22 00:10:57,523 WARN  security.ShellBasedUnixGroupsMapping 
> (ShellBasedUnixGroupsMapping.java:getUnixGroups(181)) - unable to return 
> groups for user foobarnonexistinguser
> PartialGroupNameException The user name 'foobarnonexistinguser' is not found. 
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:275)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:178)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
>   at 
> org.apache.hadoop.security.TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime(TestShellBasedUnixGroupsMapping.java:278)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15129) Datanode caches namenode DNS lookup failure and cannot startup

2018-01-10 Thread Karthik Palaniappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palaniappan updated HADOOP-15129:
-
Attachment: HADOOP-15129.002.patch

Fair enough, patch 002 uses a Java 7-style anonymous class with LambdaTestUtils.
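
For reference, a minimal sketch of that test shape, assuming the expected 
failure is an {{UnknownHostException}}; the {{connectToUnresolvedNamenode()}} 
helper is hypothetical, standing in for whatever the real test exercises:

{code:java}
import java.net.UnknownHostException;
import java.util.concurrent.Callable;
import org.apache.hadoop.test.LambdaTestUtils;
import org.junit.Test;

public class TestDnsFailureShape {
  @Test
  public void testUnresolvedAddressFails() throws Exception {
    // Java 7-style anonymous class instead of a lambda, so branch-2 can
    // compile it too:
    LambdaTestUtils.intercept(UnknownHostException.class,
        new Callable<String>() {
          @Override
          public String call() throws Exception {
            return connectToUnresolvedNamenode();
          }
        });
  }

  // Hypothetical stand-in: always fails the way an unresolved address would.
  private String connectToUnresolvedNamenode() throws UnknownHostException {
    throw new UnknownHostException("cluster-32f5-m:8020");
  }
}
{code}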

> Datanode caches namenode DNS lookup failure and cannot startup
> --
>
> Key: HADOOP-15129
> URL: https://issues.apache.org/jira/browse/HADOOP-15129
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.2
> Environment: Google Compute Engine.
> I'm using Java 8, Debian 8, Hadoop 2.8.2.
>Reporter: Karthik Palaniappan
>Assignee: Karthik Palaniappan
>Priority: Minor
> Attachments: HADOOP-15129.001.patch, HADOOP-15129.002.patch
>
>
> On startup, the Datanode creates an InetSocketAddress to register with each 
> namenode. Though there are retries on connection failure throughout the 
> stack, the same InetSocketAddress is reused.
> InetSocketAddress is an interesting class, because it resolves DNS names to 
> IP addresses on construction, and it is never refreshed. Hadoop re-creates an 
> InetSocketAddress in some cases just in case the remote IP has changed for a 
> particular DNS name: https://issues.apache.org/jira/browse/HADOOP-7472.
> Anyway, on startup, you can see the Datanode log: "Namenode...remains 
> unresolved" -- referring to the fact that the DNS lookup failed.
> {code:java}
> 2017-11-02 16:01:55,115 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Refresh request received for nameservices: null
> 2017-11-02 16:01:55,153 WARN org.apache.hadoop.hdfs.DFSUtilClient: Namenode 
> for null remains unresolved for ID null. Check your hdfs-site.xml file to 
> ensure namenodes are configured properly.
> 2017-11-02 16:01:55,156 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Starting BPOfferServices for nameservices: 
> 2017-11-02 16:01:55,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool  (Datanode Uuid unassigned) service to 
> cluster-32f5-m:8020 starting to offer service
> {code}
> The Datanode then proceeds to use this unresolved address, as it may work if 
> the DN is configured to use a proxy. Since I'm not using a proxy, it forever 
> prints out this message:
> {code:java}
> 2017-12-15 00:13:40,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:45,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:50,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:55,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:14:00,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> {code}
> Unfortunately, the log doesn't contain the exception that triggered it, but 
> the culprit is actually in IPC Client: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L444.
> This line was introduced in https://issues.apache.org/jira/browse/HADOOP-487 
> to give a clear error message when somebody misspells an address.
> However, the fix in HADOOP-7472 doesn't apply here, because that code happens 
> in Client#getConnection after the Connection is constructed.
> My proposed fix (will attach a patch) is to move this exception out of the 
> constructor and into a place that will trigger HADOOP-7472's logic to 
> re-resolve addresses. If the DNS failure was temporary, this will allow the 
> connection to succeed. If not, the connection will fail after ipc client 
> retries (default 10 seconds worth of retries).
> I want to fix this in ipc client rather than just in Datanode startup, as 
> this fixes temporary DNS issues for all of Hadoop.
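
A self-contained illustration of the JDK behaviour described above (the host 
name is the one from the logs and will not resolve outside that cluster):

{code:java}
import java.net.InetSocketAddress;

public class ReresolveExample {
  public static void main(String[] args) {
    // InetSocketAddress resolves the host name once, at construction time,
    // and never retries: a lookup failure is effectively cached forever.
    InetSocketAddress addr = new InetSocketAddress("cluster-32f5-m", 8020);
    if (addr.isUnresolved()) {
      // The only way to retry the lookup is to construct a new instance,
      // which is what the HADOOP-7472 logic does for connections.
      addr = new InetSocketAddress(addr.getHostName(), addr.getPort());
    }
    System.out.println(addr + ", unresolved=" + addr.isUnresolved());
  }
}
{code}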



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15162) UserGroupInformation.createRemoteUser hardcode authentication method to SIMPLE

2018-01-10 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved HADOOP-15162.

Resolution: Not A Problem

Closing this as not a problem. The assumption that SIMPLE security mode 
doesn't check the proxy ACL was wrong: I verified that SIMPLE security mode 
also checks the proxy ACL, and UGI.createRemoteUser(remoteUser) has no effect 
on the proxy ACL check. Thanks to [~jlowe] and [~daryn] for the advice and 
recommendations.
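
A minimal sketch of the check in question, going through the standard 
server-side {{ProxyUsers}} path (the user names and remote address are 
illustrative):

{code:java}
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authorize.AuthorizationException;
import org.apache.hadoop.security.authorize.ProxyUsers;

public class ProxyAclCheckSketch {
  public static void main(String[] args) {
    // The remote-user UGI is wrapped in a proxy-user UGI...
    UserGroupInformation realUser =
        UserGroupInformation.createRemoteUser("service");
    UserGroupInformation proxyUgi =
        UserGroupInformation.createProxyUser("alice", realUser);
    try {
      // ...and the server-side proxyuser ACL check still runs, in SIMPLE
      // mode too; createRemoteUser() does not bypass it.
      ProxyUsers.authorize(proxyUgi, "10.0.0.1");
    } catch (AuthorizationException e) {
      // "service" is not configured to impersonate "alice" from this address
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
{code}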

> UserGroupInformation.createRemoteUser hardcode authentication method to SIMPLE
> --
>
> Key: HADOOP-15162
> URL: https://issues.apache.org/jira/browse/HADOOP-15162
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Eric Yang
>
> {{UserGroupInformation.createRemoteUser(String user)}} hard-codes the 
> authentication method to SIMPLE (HADOOP-10683). This bypasses the proxyuser 
> ACL check and the isSecurityEnabled check, and allows the caller to 
> impersonate anyone. The method could be abused in the main code base, which 
> could cause parts of Hadoop to become insecure, skipping the proxyuser check 
> in both SIMPLE and Kerberos-enabled environments.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15141) Support IAM Assumed roles in S3A

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320728#comment-16320728
 ] 

genericqa commented on HADOOP-15141:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 56s{color} 
| {color:red} root generated 2 new + 1240 unchanged - 0 fixed = 1242 total (was 
1240) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  3s{color} | {color:orange} root: The patch generated 2 new + 15 unchanged - 
0 fixed = 17 total (was 15) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 25 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
39s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
39s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15141 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905496/HADOOP-15141-005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 9e931c66c2e2 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-15006) Encrypt S3A data client-side with Hadoop libraries & Hadoop KMS

2018-01-10 Thread Steve Moist (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320663#comment-16320663
 ] 

Steve Moist commented on HADOOP-15006:
--

{quote}
you are going to have to handle the fact that its only on the final post where 
things are manifest later, with a separate process/host doing the final POST. 
that's when any info would have to be persisted to DDB
{quote}
That seems reasonable. I had thought to store the OEMI first and have it act 
as the lock that prevents multiple uploads from confusing whose OEMI is whose. 
By inserting it first, though, it's hard to tell whether the object is still 
being uploaded or the host/process/jvm/etc has crashed and abandoned the 
upload. There's more to discuss than I originally thought.

{quote}
One thing to consider is that posix seek() lets you do a relative-to-EOF seek. 
Could we offer that, or at least an API call to get the length of a data 
source, in our input stream. And then modify the core file formats (Sequence, 
ORC, Parquet...) to use the length returned in the open call, rather than a 
previously cached value?
{quote}

Can you give me a sample scenario to help visualize what you're trying to 
solve with this?

> Encrypt S3A data client-side with Hadoop libraries & Hadoop KMS
> ---
>
> Key: HADOOP-15006
> URL: https://issues.apache.org/jira/browse/HADOOP-15006
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3, kms
>Reporter: Steve Moist
>Priority: Minor
> Attachments: S3-CSE Proposal.pdf
>
>
> This is for the proposal to introduce Client Side Encryption to S3 in such a 
> way that it can leverage HDFS transparent encryption, use the Hadoop KMS to 
> manage keys, use the `hdfs crypto` command line tools to manage encryption 
> zones in the cloud, and enable distcp to copy from HDFS to S3 (and 
> vice-versa) with data still encrypted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2018-01-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320629#comment-16320629
 ] 

Steve Loughran commented on HADOOP-15079:
-

BTW, looking at the code I can see we get into trouble if the number of 
elements in the path exceeds the page size of a DELETE call; so, what, 5000 
elements deep?

> ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after 
> OutputCommitter patch
> ---
>
> Key: HADOOP-15079
> URL: https://issues.apache.org/jira/browse/HADOOP-15079
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Priority: Critical
>
> I see this test failing with "object_delete_requests expected:<1> but 
> was:<2>". I printed stack traces whenever this metric was incremented, and 
> found the root cause to be that innerMkdirs is now causing two calls to 
> delete fake directories when it previously caused only one. It is called once 
> inside createFakeDirectory, and once directly inside innerMkdirs later:
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1625)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2634)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2599)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1498)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$11(S3AFileSystem.java:2684)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:108)
> at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:259)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:255)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:230)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2682)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2657)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2021)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1956)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2305)
> at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:209)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74
> {code}
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
> 

[jira] [Commented] (HADOOP-15163) Fix S3ACommitter documentation

2018-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320554#comment-16320554
 ] 

ASF GitHub Bot commented on HADOOP-15163:
-

Github user andrioni closed the pull request at:

https://github.com/apache/hadoop/pull/322


> Fix S3ACommitter documentation
> --
>
> Key: HADOOP-15163
> URL: https://issues.apache.org/jira/browse/HADOOP-15163
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs/s3
>Affects Versions: 3.1.0
>Reporter: Alessandro Andrioni
>Assignee: Alessandro Andrioni
>Priority: Minor
> Fix For: 3.1.0
>
>
> The current version of the documentation uses {{fs.s3a.committer.tmp.path}} 
> instead of {{fs.s3a.committer.staging.tmp.path}}.
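
For anyone copying from the old docs, a one-line sketch with the corrected 
property name (the path value is only illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class CommitterConfigSketch {
  public static Configuration example() {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.committer.staging.tmp.path", "tmp/staging");
    return conf;
  }
}
{code}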



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15079) ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch

2018-01-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320550#comment-16320550
 ] 

Steve Loughran commented on HADOOP-15079:
-

This is what I'm seeing:

# we are calling maybeCreateFakeParentDirectory(src) twice if the source is a 
file: once when it is deleted, once at the end of everything
# and innerMkdirs() is also issuing 
{{deleteUnnecessaryFakeDirectories(parent)}}, which it doesn't need to do, as 
the PUT of an empty file will already have done that in {{finishedWrite}} (see 
the toy model below).

{{finishedWrite}} also goes through the MD repository and creates the parent 
entries. So I believe we can cut not just the 
{{deleteUnnecessaryFakeDirectories(parent)}} call, but also 
{{S3Guard.makeDirsOrdered}}. I'll review that very carefully to be confident 
it's good, though. If it isn't, well, that means {{finishedWrite()}} isn't 
doing its work.
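
A toy model of the accounting (plain Java, not the real S3AFileSystem code) 
showing why the metric moves twice per mkdirs:

{code:java}
// Toy model only: mkdirs() PUTs a fake directory marker, whose finishedWrite()
// already prunes fake parent dirs, then prunes them again itself, so
// object_delete_requests is incremented twice instead of once.
public class ToyS3AFs {
  int objectDeleteRequests;

  void mkdirs(String path) {
    putEmptyObject(path + "/");             // ends in finishedWrite(): 1st delete
    deleteUnnecessaryFakeDirectories(path); // redundant: 2nd delete
  }

  void putEmptyObject(String key) {
    finishedWrite(key);
  }

  void finishedWrite(String key) {
    deleteUnnecessaryFakeDirectories(key);
  }

  void deleteUnnecessaryFakeDirectories(String key) {
    objectDeleteRequests++;
  }

  public static void main(String[] args) {
    ToyS3AFs fs = new ToyS3AFs();
    fs.mkdirs("bucket/a/b");
    System.out.println(fs.objectDeleteRequests); // prints 2; the test expects 1
  }
}
{code}

Cutting the direct {{deleteUnnecessaryFakeDirectories(parent)}} call out of 
{{innerMkdirs()}} brings the count back to one.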


> ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after 
> OutputCommitter patch
> ---
>
> Key: HADOOP-15079
> URL: https://issues.apache.org/jira/browse/HADOOP-15079
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Priority: Critical
>
> I see this test failing with "object_delete_requests expected:<1> but 
> was:<2>". I printed stack traces whenever this metric was incremented, and 
> found the root cause to be that innerMkdirs is now causing two calls to 
> delete fake directories when it previously caused only one. It is called once 
> inside createFakeDirectory, and once directly inside innerMkdirs later:
> {code}
> at 
> org.apache.hadoop.fs.s3a.S3AInstrumentation.incrementCounter(S3AInstrumentation.java:454)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1108)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$8(S3AFileSystem.java:1369)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:279)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1366)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1625)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2634)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2599)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1498)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$11(S3AFileSystem.java:2684)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:108)
> at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:259)
> at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:313)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:255)
> at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:230)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:2682)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:2657)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2021)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1956)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2305)
> at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:209)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> 

[jira] [Updated] (HADOOP-15141) Support IAM Assumed roles in S3A

2018-01-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15141:

Attachment: HADOOP-15141-005.patch

Patch 005.

Fixes a typo and indentation.

Not fixed: the 84 char wide lines; there's no real need.

Not fixed: use of deprecated SDK methods. I did start on this but ended up 
staging the changes as it was getting far too convoluted. Instead of *a 
builder you configured* it moved to *a builder you had to configure with some 
other builder-instantiated class plus some structures you created*. I'd got as 
far as having the two separate builders being done in parallel with some other 
objects before concluding that it was actually making the code worse in terms 
of readability and hence maintainability. The existing builder will create the 
other classes it needs in its .build() operation, so my stance is: let it do 
so (sketched below).
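
A hedged sketch of the builder path being kept, using the AWS SDK's 
{{STSAssumeRoleSessionCredentialsProvider.Builder}} (the role ARN, session 
name, and parent credentials are illustrative, not taken from the patch):

{code:java}
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;

public class AssumedRoleBuilderSketch {
  static AWSCredentialsProvider create(AWSCredentialsProvider parentCreds) {
    return new STSAssumeRoleSessionCredentialsProvider.Builder(
            "arn:aws:iam::123456789012:role/s3-reader",  // illustrative ARN
            "s3a-session")
        // deprecated in newer SDKs in favour of passing a pre-built STS
        // client, which is the convoluted path described above
        .withLongLivedCredentialsProvider(parentCreds)
        // build() instantiates the STS client it needs internally
        .build();
  }
}
{code}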



> Support IAM Assumed roles in S3A
> 
>
> Key: HADOOP-15141
> URL: https://issues.apache.org/jira/browse/HADOOP-15141
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-15141-001.patch, HADOOP-15141-002.patch, 
> HADOOP-15141-003.patch, HADOOP-15141-004.patch, HADOOP-15141-005.patch
>
>
> Add the ability to use assumed roles in S3A
> * Add a property fs.s3a.assumed.role.arn for the ARN of the assumed role
> * add a new provider which grabs that and other properties and then creates a 
> {{STSAssumeRoleSessionCredentialsProvider}} from it.
> * This also needs to support building up its own list of aws credential  
> providers, from a different property; make the changes to S3AUtils for that
> * Tests
> * docs
> * and have the AwsProviderList forward closeable to it.
> * Get picked up automatically by DDB/s3guard
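
A hedged sketch of the client-side wiring the list above implies; only 
{{fs.s3a.assumed.role.arn}} is named in this issue, while the provider class 
name and ARN below are assumptions for illustration:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class AssumedRoleConfigSketch {
  public static Configuration example() {
    Configuration conf = new Configuration();
    // The property this issue adds; the ARN is illustrative.
    conf.set("fs.s3a.assumed.role.arn",
        "arn:aws:iam::123456789012:role/s3-reader");
    // Assumed provider class name; check the committed patch for the real one.
    conf.set("fs.s3a.aws.credentials.provider",
        "org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider");
    return conf;
  }
}
{code}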



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15141) Support IAM Assumed roles in S3A

2018-01-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15141:

Status: Patch Available  (was: Open)

> Support IAM Assumed roles in S3A
> 
>
> Key: HADOOP-15141
> URL: https://issues.apache.org/jira/browse/HADOOP-15141
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-15141-001.patch, HADOOP-15141-002.patch, 
> HADOOP-15141-003.patch, HADOOP-15141-004.patch, HADOOP-15141-005.patch
>
>
> Add the ability to use assumed roles in S3A
> * Add a property fs.s3a.assumed.role.arn for the ARN of the assumed role
> * add a new provider which grabs that and other properties and then creates a 
> {{STSAssumeRoleSessionCredentialsProvider}} from it.
> * This also needs to support building up its own list of aws credential  
> providers, from a different property; make the changes to S3AUtils for that
> * Tests
> * docs
> * and have the AwsProviderList forward closeable to it.
> * Get picked up automatically by DDB/s3guard



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15141) Support IAM Assumed roles in S3A

2018-01-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15141:

Status: Open  (was: Patch Available)

> Support IAM Assumed roles in S3A
> 
>
> Key: HADOOP-15141
> URL: https://issues.apache.org/jira/browse/HADOOP-15141
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-15141-001.patch, HADOOP-15141-002.patch, 
> HADOOP-15141-003.patch, HADOOP-15141-004.patch
>
>
> Add the ability to use assumed roles in S3A
> * Add a property fs.s3a.assumed.role.arn for the ARN of the assumed role
> * add a new provider which grabs that and other properties and then creates a 
> {{STSAssumeRoleSessionCredentialsProvider}} from it.
> * This also needs to support building up its own list of aws credential  
> providers, from a different property; make the changes to S3AUtils for that
> * Tests
> * docs
> * and have the AwsProviderList forward closeable to it.
> * Get picked up automatically by DDB/s3guard



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15076) s3a troubleshooting to add "things don't work after I dropped in a new AWS SDK JAR"

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320501#comment-16320501
 ] 

genericqa commented on HADOOP-15076:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15076 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905476/HADOOP-15076-002.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 956ce35b6f7b 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d98a2e6 |
| maven | version: Apache Maven 3.3.9 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13947/artifact/out/whitespace-eol.txt
 |
| Max. process+thread count | 312 (vs. ulimit of 5000) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13947/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> s3a troubleshooting to add "things don't work after I dropped in a new AWS 
> SDK JAR"
> ---
>
> Key: HADOOP-15076
> URL: https://issues.apache.org/jira/browse/HADOOP-15076
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.8.2
>Reporter: Steve Loughran
>Assignee: Abraham Fine
> Attachments: HADOOP-15076-001.patch, HADOOP-15076-002.patch
>
>
> A recurrent theme in s3a-related JIRAs, support calls etc. is "I tried 
> upgrading the AWS SDK JAR and then I got the error ...". We know here "don't 
> do that", but it's not something immediately obvious to lots of downstream 
> users who want to be able to drop in the new JAR to fix things/add new 
> features.
> We need to spell this out quite clearly: "you cannot safely expect to do 
> this. If you want to upgrade the SDK, you will need to rebuild the whole of 
> hadoop-aws with the maven POM updated to the latest version, ideally 
> rerunning all the tests to make sure something hasn't broken."
> Maybe near the top of the index.md file, along with "never share your AWS 
> credentials with anyone".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15163) Fix S3ACommitter documentation

2018-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320492#comment-16320492
 ] 

Hudson commented on HADOOP-15163:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13478 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13478/])
HADOOP-15163. Fix S3ACommitter documentation Contributed by Alessandro (stevel: 
rev 1a09da7400295cba7f9a39b58eb8fff735222935)
* (edit) 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committers.md


> Fix S3ACommitter documentation
> --
>
> Key: HADOOP-15163
> URL: https://issues.apache.org/jira/browse/HADOOP-15163
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs/s3
>Affects Versions: 3.1.0
>Reporter: Alessandro Andrioni
>Assignee: Alessandro Andrioni
>Priority: Minor
> Fix For: 3.1.0
>
>
> The current version of the documentation uses {{fs.s3a.committer.tmp.path}} 
> instead of {{fs.s3a.committer.staging.tmp.path}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15151) MapFile.fix creates a wrong index file in case of block-compressed data file.

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320479#comment-16320479
 ] 

genericqa commented on HADOOP-15151:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 50s{color} 
| {color:red} root generated 1 new + 1240 unchanged - 0 fixed = 1241 total (was 
1240) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 17 new + 125 unchanged - 1 fixed = 142 total (was 126) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 24 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 40s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
11s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15151 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905453/HADOOP-15151.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ba4d987f59a2 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d98a2e6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13946/artifact/out/diff-compile-javac-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13946/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13946/artifact/out/whitespace-tabs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13946/testReport/ |
| Max. 

[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-01-10 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320470#comment-16320470
 ] 

Rushabh S Shah commented on HADOOP-14445:
-

Attaching a new patch for trunk which fixes the incompatibilities discussed in 
the previous comments and addresses the review comments Xiao made on the 
previous patch.
I've also added a bunch of test cases to verify compatibility.
bq. I think we can improve the comments with dtService to be clearer. Suggest 
something along the lines of:
Addressed in latest patch.

bq. Now that we handle the port stuff nicely in unit tests, 
fallbackDefaultPortForTesting and related logic can be removed.
Addressed in latest patch.

bq. On renew and cancel, suggest to add a debug log when keyProvider == null 
for supportability.
Addressed in latest patch.

bq. Let's use HADOOP_SECURITY_KEY_PROVIDER_PATH instead of 
KeyProviderFactory.KEY_PROVIDER_PATH.
Addressed in latest patch.

bq. When createKeyProviderForTests returns non-null value (before return kp), 
add a info log, since this should only happen in tests
Addressed in latest patch.


bq. doKMSWithZKWithDelegationToken, do we need to loop through the tokens and 
verify? After this fix, there would only be 1 kms-dt mapping to the entire url 
right? IMO we should verify there's just 1 kms-dt.
Addressed in latest patch.

bq. doKMSWithZKWithDelegationToken, besides verifying renewal, we should also 
verify some key operations.
This patch has nothing to do with key operations: key shell commands don't use 
delegation tokens, they use Kerberos tickets.
This JIRA only changes the delegation-token handling, so the existing key 
shell tests are enough.
If you think the existing tests are not enough, please open a new JIRA to 
cover that.

bq. Happy to see the compat test, thanks! We should also verify some key 
operations too here.
Same comment as last one.

bq. HdfsKMSUtil: Looks like we can remove the not-used createKeyProvider 
method.
This method is called from {{DFSUtil#createKeyProviderCryptoExtension}}; the 
Namenode uses it to create the key provider, so we cannot remove it.
[~xiaochen]: please review.

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, HADOOP-14445.003.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do 
> not share delegation tokens. (a client uses KMS address/port as the key for 
> delegation token)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15163) Fix S3ACommitter documentation

2018-01-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320457#comment-16320457
 ] 

Steve Loughran commented on HADOOP-15163:
-

Can you close the github PR now? I can't do that. Thanks.

> Fix S3ACommitter documentation
> --
>
> Key: HADOOP-15163
> URL: https://issues.apache.org/jira/browse/HADOOP-15163
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs/s3
>Affects Versions: 3.1.0
>Reporter: Alessandro Andrioni
>Assignee: Alessandro Andrioni
>Priority: Minor
> Fix For: 3.1.0
>
>
> The current version of the documentation uses {{fs.s3a.committer.tmp.path}} 
> instead of {{fs.s3a.committer.staging.tmp.path}}.
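
For anyone copying from the old docs, a minimal sketch of setting the corrected 
key programmatically (the value shown is purely illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class CommitterConfExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Correct key; the docs previously showed fs.s3a.committer.tmp.path.
    conf.set("fs.s3a.committer.staging.tmp.path", "tmp/staging");
    System.out.println(conf.get("fs.s3a.committer.staging.tmp.path"));
  }
}
{code}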



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15163) Fix S3ACommitter documentation

2018-01-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15163.
-
   Resolution: Fixed
Fix Version/s: 3.1.0

LGTM, +1, committed. thanks

> Fix S3ACommitter documentation
> --
>
> Key: HADOOP-15163
> URL: https://issues.apache.org/jira/browse/HADOOP-15163
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs/s3
>Affects Versions: 3.1.0
>Reporter: Alessandro Andrioni
>Assignee: Alessandro Andrioni
>Priority: Minor
> Fix For: 3.1.0
>
>
> The current version of the documentation uses {{fs.s3a.committer.tmp.path}} 
> instead of {{fs.s3a.committer.staging.tmp.path}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15163) Fix S3ACommitter documentation

2018-01-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-15163:
---

Assignee: Alessandro Andrioni

> Fix S3ACommitter documentation
> --
>
> Key: HADOOP-15163
> URL: https://issues.apache.org/jira/browse/HADOOP-15163
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs/s3
>Affects Versions: 3.1.0
>Reporter: Alessandro Andrioni
>Assignee: Alessandro Andrioni
>Priority: Minor
> Fix For: 3.1.0
>
>
> The current version of the documentation uses {{fs.s3a.committer.tmp.path}} 
> instead of {{fs.s3a.committer.staging.tmp.path}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15163) Fix S3ACommitter documentation

2018-01-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15163:

Component/s: fs/s3
 documentation

> Fix S3ACommitter documentation
> --
>
> Key: HADOOP-15163
> URL: https://issues.apache.org/jira/browse/HADOOP-15163
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs/s3
>Affects Versions: 3.1.0
>Reporter: Alessandro Andrioni
>Priority: Minor
>
> The current version of the documentation uses {{fs.s3a.committer.tmp.path}} 
> instead of {{fs.s3a.committer.staging.tmp.path}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15076) s3a troubleshooting to add "things don't work after I dropped in a new AWS SDK JAR"

2018-01-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15076:

Status: Patch Available  (was: Open)

> s3a troubleshooting to add "things don't work after I dropped in a new AWS 
> SDK JAR"
> ---
>
> Key: HADOOP-15076
> URL: https://issues.apache.org/jira/browse/HADOOP-15076
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.8.2
>Reporter: Steve Loughran
>Assignee: Abraham Fine
> Attachments: HADOOP-15076-001.patch, HADOOP-15076-002.patch
>
>
> A recurrent theme in s3a-related JIRAs, support calls etc is "tried upgrading 
> the AWS SDK JAR and then I got the error ...". We know here "don't do that", 
> but its not something immediately obvious to lots of downstream users who 
> want to be able to drop in the new JAR to fix things/add new features
> We need to spell this out quite clearly: "you cannot safely expect to do 
> this. If you want to upgrade the SDK, you will need to rebuild the whole of 
> hadoop-aws with the maven POM updated to the latest version, ideally 
> rerunning all the tests to make sure something hasn't broken."
> Maybe near the top of the index.md file, along with "never share your AWS 
> credentials with anyone"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15076) s3a troubleshooting to add "things don't work after I dropped in a new AWS SDK JAR"

2018-01-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15076:

Attachment: HADOOP-15076-002.patch

Patch 002: patch 001 with tabs converted to spaces in the pasted-in stack 
trace, and a typo fixed in a new paragraph.

> s3a troubleshooting to add "things don't work after I dropped in a new AWS 
> SDK JAR"
> ---
>
> Key: HADOOP-15076
> URL: https://issues.apache.org/jira/browse/HADOOP-15076
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.8.2
>Reporter: Steve Loughran
>Assignee: Abraham Fine
> Attachments: HADOOP-15076-001.patch, HADOOP-15076-002.patch
>
>
> A recurrent theme in s3a-related JIRAs, support calls etc is "tried upgrading 
> the AWS SDK JAR and then I got the error ...". We know here "don't do that", 
> but its not something immediately obvious to lots of downstream users who 
> want to be able to drop in the new JAR to fix things/add new features
> We need to spell this out quite clearly: "you cannot safely expect to do 
> this. If you want to upgrade the SDK, you will need to rebuild the whole of 
> hadoop-aws with the maven POM updated to the latest version, ideally 
> rerunning all the tests to make sure something hasn't broken."
> Maybe near the top of the index.md file, along with "never share your AWS 
> credentials with anyone"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15006) Encrypt S3A data client-side with Hadoop libraries & Hadoop KMS

2018-01-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320399#comment-16320399
 ] 

Steve Loughran commented on HADOOP-15006:
-

* You are going to have to handle the fact that it's only on the final POST 
that things are manifest, possibly later, with a separate process/host doing 
that final POST; that's when any info would have to be persisted to DDB.
* There is a length HTTP header on encrypted files; these could be scanned to 
repopulate the s3guard tables.

One thing to consider is that POSIX seek() lets you do a relative-to-EOF seek. 
Could we offer that, or at least an API call to get the length of a data 
source, in our input stream? We could then modify the core file formats 
(Sequence, ORC, Parquet...) to use the length returned in the open call, 
rather than a previously cached value.
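
To make that last suggestion concrete, one possible shape for such an API; the 
interface and method names here are invented for illustration:

{code:java}
import java.io.IOException;

// Hypothetical additions to an input stream over encrypted objects, whose
// HTTP content-length differs from the plaintext length.
interface LengthAwareSeekable {
  /** Plaintext length of the data source, established at open() time. */
  long length() throws IOException;

  /** POSIX-style relative-to-EOF positioning, i.e. seek(length() - offset). */
  void seekFromEnd(long offset) throws IOException;
}
{code}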

> Encrypt S3A data client-side with Hadoop libraries & Hadoop KMS
> ---
>
> Key: HADOOP-15006
> URL: https://issues.apache.org/jira/browse/HADOOP-15006
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3, kms
>Reporter: Steve Moist
>Priority: Minor
> Attachments: S3-CSE Proposal.pdf
>
>
> This is for the proposal to introduce Client Side Encryption to S3 in such a 
> way that it can leverage HDFS transparent encryption, use the Hadoop KMS to 
> manage keys, use the `hdfs crypto` command line tools to manage encryption 
> zones in the cloud, and enable distcp to copy from HDFS to S3 (and 
> vice-versa) with data still encrypted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15076) s3a troubleshooting to add "things don't work after I dropped in a new AWS SDK JAR"

2018-01-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15076:

Status: Open  (was: Patch Available)

> s3a troubleshooting to add "things don't work after I dropped in a new AWS 
> SDK JAR"
> ---
>
> Key: HADOOP-15076
> URL: https://issues.apache.org/jira/browse/HADOOP-15076
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.8.2
>Reporter: Steve Loughran
>Assignee: Abraham Fine
> Attachments: HADOOP-15076-001.patch
>
>
> A recurrent theme in s3a-related JIRAs, support calls etc is "tried upgrading 
> the AWS SDK JAR and then I got the error ...". We know here "don't do that", 
> but its not something immediately obvious to lots of downstream users who 
> want to be able to drop in the new JAR to fix things/add new features
> We need to spell this out quite clearly: "you cannot safely expect to do 
> this. If you want to upgrade the SDK, you will need to rebuild the whole of 
> hadoop-aws with the maven POM updated to the latest version, ideally 
> rerunning all the tests to make sure something hasn't broken."
> Maybe near the top of the index.md file, along with "never share your AWS 
> credentials with anyone"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15141) Support IAM Assumed roles in S3A

2018-01-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320338#comment-16320338
 ] 

Steve Loughran commented on HADOOP-15141:
-

* I wasn't worried about line length as it's only a hint, and it was cleaner to 
leave at 83-84 chars than split. wontfix there, I'm afraid
* will review the docs again
* w.r.t. the deprecation, yeah. AWS have changed that SDK in a way which makes 
it a bit more painful to use. I'll see what I can do there, in particular, make 
it a bit easier to support assumed roles via web identity/SAML, which aren't 
things I'm currently looking at, but would round it out (though there's still 
the issue of "how to pass federation/OAuth secrets around", so it's not 
seamless)


> Support IAM Assumed roles in S3A
> 
>
> Key: HADOOP-15141
> URL: https://issues.apache.org/jira/browse/HADOOP-15141
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-15141-001.patch, HADOOP-15141-002.patch, 
> HADOOP-15141-003.patch, HADOOP-15141-004.patch
>
>
> Add the ability to use assumed roles in S3A
> * Add a property fs.s3a.assumed.role.arn for the ARN of the assumed role
> * add a new provider which grabs that and other properties and then creates a 
> {{STSAssumeRoleSessionCredentialsProvider}} from it.
> * This also needs to support building up its own list of aws credential  
> providers, from a different property; make the changes to S3AUtils for that
> * Tests
> * docs
> * and have the AwsProviderList forward closeable to it.
> * Get picked up automatically by DDB/s3guard



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15151) MapFile.fix creates a wrong index file in case of block-compressed data file.

2018-01-10 Thread Grigori Rybkine (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320306#comment-16320306
 ] 

Grigori Rybkine commented on HADOOP-15151:
--

I do not seem to be able to assign the issue to myself. Can you please assign 
it to me? Thanks.

> MapFile.fix creates a wrong index file in case of block-compressed data file.
> -
>
> Key: HADOOP-15151
> URL: https://issues.apache.org/jira/browse/HADOOP-15151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Grigori Rybkine
>  Labels: patch
> Attachments: HADOOP-15151.001.patch, HADOOP-15151.002.patch
>
>
> Index file created with MapFile.fix for an ordered block-compressed data file 
> does not allow to find values for keys existing in the data file via the 
> MapFile.get method.
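
A minimal reproduction sketch of the reported behaviour; the local path is a 
placeholder and the older FileSystem-based constructors are used purely for 
brevity:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class MapFileFixRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.getLocal(conf);
    Path dir = new Path("/tmp/mapfile-block");

    // Write an ordered, block-compressed MapFile.
    try (MapFile.Writer w = new MapFile.Writer(conf, fs, dir.toString(),
        Text.class, Text.class, SequenceFile.CompressionType.BLOCK)) {
      for (int i = 0; i < 1000; i++) {
        w.append(new Text(String.format("key%05d", i)), new Text("v" + i));
      }
    }

    // Drop the index and rebuild it with MapFile.fix.
    fs.delete(new Path(dir, MapFile.INDEX_FILE_NAME), false);
    MapFile.fix(fs, dir, Text.class, Text.class, false, conf);

    // With the wrongly rebuilt index, this lookup returns null for a key
    // that is present in the data file.
    try (MapFile.Reader r = new MapFile.Reader(fs, dir.toString(), conf)) {
      System.out.println(r.get(new Text("key00500"), new Text()));
    }
  }
}
{code}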



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15151) MapFile.fix creates a wrong index file in case of block-compressed data file.

2018-01-10 Thread Grigori Rybkine (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grigori Rybkine updated HADOOP-15151:
-
Status: Patch Available  (was: Open)

The patch is ready for review.

> MapFile.fix creates a wrong index file in case of block-compressed data file.
> -
>
> Key: HADOOP-15151
> URL: https://issues.apache.org/jira/browse/HADOOP-15151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Grigori Rybkine
>  Labels: patch
> Attachments: HADOOP-15151.001.patch, HADOOP-15151.002.patch
>
>
> Index file created with MapFile.fix for an ordered block-compressed data file 
> does not allow to find values for keys existing in the data file via the 
> MapFile.get method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15151) MapFile.fix creates a wrong index file in case of block-compressed data file.

2018-01-10 Thread Grigori Rybkine (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grigori Rybkine updated HADOOP-15151:
-
Attachment: HADOOP-15151.002.patch

This patch adds a unit test for the issue (and its fix) and is ready for review.

> MapFile.fix creates a wrong index file in case of block-compressed data file.
> -
>
> Key: HADOOP-15151
> URL: https://issues.apache.org/jira/browse/HADOOP-15151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Grigori Rybkine
>  Labels: patch
> Attachments: HADOOP-15151.001.patch, HADOOP-15151.002.patch
>
>
> Index file created with MapFile.fix for an ordered block-compressed data file 
> does not allow to find values for keys existing in the data file via the 
> MapFile.get method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15033) Use java.util.zip.CRC32C for Java 9 and above

2018-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320245#comment-16320245
 ] 

ASF GitHub Bot commented on HADOOP-15033:
-

Github user dchuyko commented on the issue:

https://github.com/apache/hadoop/pull/291
  
A copy of the patch from the pull request was attached as 
HADOOP-15033.010.patch in Jira. The patch passed genericqa checks and shows the 
same performance improvement as before. Please review this latest version of 
the pull request.
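
For anyone following the discussion without reading the patch, the shape of 
the approach is roughly the following; the class and field names are 
illustrative, not the exact patch code:

{code:java}
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.util.zip.Checksum;
import org.apache.hadoop.util.PureJavaCrc32C;

final class Crc32cFactorySketch {
  // In the real code the flag starts true only on a Java 9+ runtime;
  // it is flipped off permanently if anything below fails.
  private static volatile boolean useJava9Crc32C = true;

  // Resolved once: the public no-arg constructor of java.util.zip.CRC32C.
  private static final MethodHandle CRC32C_CTOR = lookupCtor();

  private static MethodHandle lookupCtor() {
    try {
      Class<?> clazz = Class.forName("java.util.zip.CRC32C");
      return MethodHandles.publicLookup()
          .findConstructor(clazz, MethodType.methodType(void.class));
    } catch (ReflectiveOperationException e) {
      return null; // Java 8 runtime, or the lookup failed
    }
  }

  static Checksum newCrc32C() {
    if (useJava9Crc32C && CRC32C_CTOR != null) {
      try {
        return (Checksum) CRC32C_CTOR.invoke();
      } catch (Throwable t) {
        useJava9Crc32C = false; // log and fall back from here on
      }
    }
    return new PureJavaCrc32C(); // Hadoop's pure-Java fallback
  }
}
{code}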


> Use java.util.zip.CRC32C for Java 9 and above
> -
>
> Key: HADOOP-15033
> URL: https://issues.apache.org/jira/browse/HADOOP-15033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, util
>Affects Versions: 3.0.0
>Reporter: Dmitry Chuyko
>  Labels: Java9, common, jdk9
> Attachments: HADOOP-15033.001.patch, HADOOP-15033.001.patch, 
> HADOOP-15033.002.patch, HADOOP-15033.003.patch, HADOOP-15033.003.patch, 
> HADOOP-15033.004.patch, HADOOP-15033.005.diff, HADOOP-15033.005.diff, 
> HADOOP-15033.005.diff, HADOOP-15033.005.patch, HADOOP-15033.006.patch, 
> HADOOP-15033.007.patch, HADOOP-15033.007.patch, HADOOP-15033.008.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.010.patch
>
>
> java.util.zip.CRC32C implementation is available since Java 9.
> https://docs.oracle.com/javase/9/docs/api/java/util/zip/CRC32C.html
> Platform specific assembler intrinsics make it more effective than any pure 
> Java implementation.
> Hadoop is compiled against Java 8 but class constructor may be accessible 
> with method handle on 9 to instances implementing Checksum in runtime.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13738) DiskChecker should perform some disk IO

2018-01-10 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-13738:
---
   Resolution: Fixed
Fix Version/s: 2.8.4
   Status: Resolved  (was: Patch Available)

> DiskChecker should perform some disk IO
> ---
>
> Key: HADOOP-13738
> URL: https://issues.apache.org/jira/browse/HADOOP-13738
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.4, 3.0.0-alpha2, 2.9.0
>
> Attachments: HADOOP-13738-branch-2.8-06.patch, HADOOP-13738.01.patch, 
> HADOOP-13738.02.patch, HADOOP-13738.03.patch, HADOOP-13738.04.patch, 
> HADOOP-13738.05.patch
>
>
> DiskChecker can fail to detect total disk/controller failures indefinitely. 
> We have seen this in real clusters. DiskChecker performs simple 
> permissions-based checks on directories which do not guarantee that any disk 
> IO will be attempted.
> A simple improvement is to write some data and flush it to the disk.
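
The improvement being described amounts to something like the following; the 
probe file name, size, and method name are illustrative rather than the patch 
code:

{code:java}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

final class DiskProbeSketch {
  /** Write a small file and sync it, so a dead disk/controller actually fails. */
  static void checkDirWithDiskIo(File dir) throws IOException {
    File probe = new File(dir, ".diskcheck-" + System.nanoTime());
    try (FileOutputStream out = new FileOutputStream(probe)) {
      out.write(new byte[512]);
      out.getFD().sync(); // force the bytes through the OS down to the device
    } finally {
      if (!probe.delete()) {
        probe.deleteOnExit();
      }
    }
  }
}
{code}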



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13738) DiskChecker should perform some disk IO

2018-01-10 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320214#comment-16320214
 ] 

Vinayakumar B commented on HADOOP-13738:


Cherry-picked to branch-2.8.
Thanks [~brahmareddy] for the review and thanks [~arpitagarwal] for the 
original contribution.

> DiskChecker should perform some disk IO
> ---
>
> Key: HADOOP-13738
> URL: https://issues.apache.org/jira/browse/HADOOP-13738
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13738-branch-2.8-06.patch, HADOOP-13738.01.patch, 
> HADOOP-13738.02.patch, HADOOP-13738.03.patch, HADOOP-13738.04.patch, 
> HADOOP-13738.05.patch
>
>
> DiskChecker can fail to detect total disk/controller failures indefinitely. 
> We have seen this in real clusters. DiskChecker performs simple 
> permissions-based checks on directories which do not guarantee that any disk 
> IO will be attempted.
> A simple improvement is to write some data and flush it to the disk.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15033) Use java.util.zip.CRC32C for Java 9 and above

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320212#comment-16320212
 ] 

genericqa commented on HADOOP-15033:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  5s{color} | {color:orange} root: The patch generated 1 new + 200 unchanged 
- 0 fixed = 201 total (was 200) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 50s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
8s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15033 |
| GITHUB PR | https://github.com/apache/hadoop/pull/291 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| 

[jira] [Commented] (HADOOP-15142) Register FTP and SFTP as FS services

2018-01-10 Thread Mario Molina (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320210#comment-16320210
 ] 

Mario Molina commented on HADOOP-15142:
---

I came across some use cases in which core-default.xml and the other config 
XML files were already configured and I couldn't modify them to include it 
(due to admin/ops constraints).
It'd be interesting if this could be done dynamically, but I don't know what 
sort of issues you've had. Anyway, I think I have to complete core-default.xml 
to include the SFTP fs.
What do you say?
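
As a side note for anyone hitting this before a fix lands, the scheme can be 
bound explicitly in the job's own Configuration. A sketch, assuming the stock 
org.apache.hadoop.fs.sftp.SFTPFileSystem class is on the classpath; the URI is 
a placeholder:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class SftpWorkaround {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Explicit binding, since the scheme is not registered as a service.
    conf.set("fs.sftp.impl", "org.apache.hadoop.fs.sftp.SFTPFileSystem");
    FileSystem fs = FileSystem.newInstance(URI.create("sftp://user@host/"), conf);
    System.out.println(fs.getUri());
  }
}
{code}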


> Register FTP and SFTP as FS services
> 
>
> Key: HADOOP-15142
> URL: https://issues.apache.org/jira/browse/HADOOP-15142
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.0.0
>Reporter: Mario Molina
>Priority: Minor
> Fix For: 3.0.1
>
> Attachments: HADOOP-15142.001.patch
>
>
> SFTPFileSystem and FTPFileSystem are not registered as FS services.
> When calling the 'get' or 'newInstance' methods of the FileSystem class, the 
> FS instance cannot be created because the scheme is not registered as an FS 
> service.
> Also, the SFTPFileSystem class doesn't have the getScheme method implemented.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15027) AliyunOSS: Support multi-thread pre-read to improve read from Hadoop to Aliyun OSS performance

2018-01-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320150#comment-16320150
 ] 

genericqa commented on HADOOP-15027:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15027 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905436/HADOOP-15027.012.patch
 |
| Optional Tests |  asflicense  findbugs  xml  compile  javac  javadoc  
mvninstall  mvnsite  unit  shadedclient  checkstyle  |
| uname | Linux 5b14355b8039 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d98a2e6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13944/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 5000) |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13944/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AliyunOSS: Support 

[jira] [Commented] (HADOOP-15129) Datanode caches namenode DNS lookup failure and cannot startup

2018-01-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320125#comment-16320125
 ] 

Steve Loughran commented on HADOOP-15129:
-

Afraid not with try-with-resources. You can use anonymous classes where 
lambdas would go, as it lines you up for moving to Java 8 with ease in the 
future; the IDE can take a class like that and switch to a lambda at a button 
press.

> Datanode caches namenode DNS lookup failure and cannot startup
> --
>
> Key: HADOOP-15129
> URL: https://issues.apache.org/jira/browse/HADOOP-15129
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.2
> Environment: Google Compute Engine.
> I'm using Java 8, Debian 8, Hadoop 2.8.2.
>Reporter: Karthik Palaniappan
>Assignee: Karthik Palaniappan
>Priority: Minor
> Attachments: HADOOP-15129.001.patch
>
>
> On startup, the Datanode creates an InetSocketAddress to register with each 
> namenode. Though there are retries on connection failure throughout the 
> stack, the same InetSocketAddress is reused.
> InetSocketAddress is an interesting class, because it resolves DNS names to 
> IP addresses on construction, and it is never refreshed. Hadoop re-creates an 
> InetSocketAddress in some cases just in case the remote IP has changed for a 
> particular DNS name: https://issues.apache.org/jira/browse/HADOOP-7472.
> Anyway, on startup, you can see the Datanode log: "Namenode...remains 
> unresolved" -- referring to the fact that DNS lookup failed.
> {code:java}
> 2017-11-02 16:01:55,115 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Refresh request received for nameservices: null
> 2017-11-02 16:01:55,153 WARN org.apache.hadoop.hdfs.DFSUtilClient: Namenode 
> for null remains unresolved for ID null. Check your hdfs-site.xml file to 
> ensure namenodes are configured properly.
> 2017-11-02 16:01:55,156 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Starting BPOfferServices for nameservices: 
> 2017-11-02 16:01:55,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool  (Datanode Uuid unassigned) service to 
> cluster-32f5-m:8020 starting to offer service
> {code}
> The Datanode then proceeds to use this unresolved address, as it may work if 
> the DN is configured to use a proxy. Since I'm not using a proxy, it forever 
> prints out this message:
> {code:java}
> 2017-12-15 00:13:40,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:45,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:50,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:55,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:14:00,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> {code}
> Unfortunately, the log doesn't contain the exception that triggered it, but 
> the culprit is actually in IPC Client: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L444.
> This line was introduced in https://issues.apache.org/jira/browse/HADOOP-487 
> to give a clear error message when somebody mispells an address.
> However, the fix in HADOOP-7472 doesn't apply here, because that code happens 
> in Client#getConnection after the Connection is constructed.
> My proposed fix (will attach a patch) is to move this exception out of the 
> constructor and into a place that will trigger HADOOP-7472's logic to 
> re-resolve addresses. If the DNS failure was temporary, this will allow the 
> connection to succeed. If not, the connection will fail after ipc client 
> retries (default 10 seconds worth of retries).
> I want to fix this in ipc client rather than just in Datanode startup, as 
> this fixes temporary DNS issues for all of Hadoop.
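
The resolution behaviour at the heart of this can be shown in isolation; the 
host name below is a placeholder:

{code:java}
import java.net.InetSocketAddress;

public class ResolveDemo {
  public static void main(String[] args) {
    // The DNS lookup happens once, in the constructor, and is never retried.
    InetSocketAddress addr = new InetSocketAddress("cluster-32f5-m", 8020);
    System.out.println("unresolved? " + addr.isUnresolved());

    // The only way to retry the lookup is to build a fresh instance, which
    // is what the HADOOP-7472 re-resolve logic does on reconnect.
    if (addr.isUnresolved()) {
      addr = new InetSocketAddress(addr.getHostName(), addr.getPort());
      System.out.println("after retry, unresolved? " + addr.isUnresolved());
    }
  }
}
{code}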



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15033) Use java.util.zip.CRC32C for Java 9 and above

2018-01-10 Thread Dmitry Chuyko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Chuyko updated HADOOP-15033:
---
Attachment: HADOOP-15033.010.patch

> Use java.util.zip.CRC32C for Java 9 and above
> -
>
> Key: HADOOP-15033
> URL: https://issues.apache.org/jira/browse/HADOOP-15033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, util
>Affects Versions: 3.0.0
>Reporter: Dmitry Chuyko
>  Labels: Java9, common, jdk9
> Attachments: HADOOP-15033.001.patch, HADOOP-15033.001.patch, 
> HADOOP-15033.002.patch, HADOOP-15033.003.patch, HADOOP-15033.003.patch, 
> HADOOP-15033.004.patch, HADOOP-15033.005.diff, HADOOP-15033.005.diff, 
> HADOOP-15033.005.diff, HADOOP-15033.005.patch, HADOOP-15033.006.patch, 
> HADOOP-15033.007.patch, HADOOP-15033.007.patch, HADOOP-15033.008.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.010.patch
>
>
> java.util.zip.CRC32C implementation is available since Java 9.
> https://docs.oracle.com/javase/9/docs/api/java/util/zip/CRC32C.html
> Platform specific assembler intrinsics make it more effective than any pure 
> Java implementation.
> Hadoop is compiled against Java 8 but class constructor may be accessible 
> with method handle on 9 to instances implementing Checksum in runtime.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15033) Use java.util.zip.CRC32C for Java 9 and above

2018-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320120#comment-16320120
 ] 

ASF GitHub Bot commented on HADOOP-15033:
-

Github user dchuyko commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/291#discussion_r160654107
  
--- Diff: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java
 ---
@@ -86,7 +86,15 @@
*/
   @Deprecated
   public static boolean isJava7OrAbove() {
-return true;
+return isJavaSpecAtLeast(7);
--- End diff --

Ok, reverted.


> Use java.util.zip.CRC32C for Java 9 and above
> -
>
> Key: HADOOP-15033
> URL: https://issues.apache.org/jira/browse/HADOOP-15033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, util
>Affects Versions: 3.0.0
>Reporter: Dmitry Chuyko
>  Labels: Java9, common, jdk9
> Attachments: HADOOP-15033.001.patch, HADOOP-15033.001.patch, 
> HADOOP-15033.002.patch, HADOOP-15033.003.patch, HADOOP-15033.003.patch, 
> HADOOP-15033.004.patch, HADOOP-15033.005.diff, HADOOP-15033.005.diff, 
> HADOOP-15033.005.diff, HADOOP-15033.005.patch, HADOOP-15033.006.patch, 
> HADOOP-15033.007.patch, HADOOP-15033.007.patch, HADOOP-15033.008.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch
>
>
> java.util.zip.CRC32C implementation is available since Java 9.
> https://docs.oracle.com/javase/9/docs/api/java/util/zip/CRC32C.html
> Platform specific assembler intrinsics make it more effective than any pure 
> Java implementation.
> Hadoop is compiled against Java 8 but class constructor may be accessible 
> with method handle on 9 to instances implementing Checksum in runtime.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15033) Use java.util.zip.CRC32C for Java 9 and above

2018-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320121#comment-16320121
 ] 

ASF GitHub Bot commented on HADOOP-15033:
-

Github user dchuyko commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/291#discussion_r160654155
  
--- Diff: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java
 ---
@@ -86,7 +86,15 @@
*/
   @Deprecated
   public static boolean isJava7OrAbove() {
-return true;
+return isJavaSpecAtLeast(7);
+  }
+
+  // "1.8"->8, "9"->9, "10"->10
+  private static final int JAVA_SPEC_VER = Math.max(8, Integer.parseInt(
+  System.getProperty("java.specification.version").split("\\.")[0]));
+
+  public static boolean isJavaSpecAtLeast(int version) {
--- End diff --

Done all 3.


> Use java.util.zip.CRC32C for Java 9 and above
> -
>
> Key: HADOOP-15033
> URL: https://issues.apache.org/jira/browse/HADOOP-15033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, util
>Affects Versions: 3.0.0
>Reporter: Dmitry Chuyko
>  Labels: Java9, common, jdk9
> Attachments: HADOOP-15033.001.patch, HADOOP-15033.001.patch, 
> HADOOP-15033.002.patch, HADOOP-15033.003.patch, HADOOP-15033.003.patch, 
> HADOOP-15033.004.patch, HADOOP-15033.005.diff, HADOOP-15033.005.diff, 
> HADOOP-15033.005.diff, HADOOP-15033.005.patch, HADOOP-15033.006.patch, 
> HADOOP-15033.007.patch, HADOOP-15033.007.patch, HADOOP-15033.008.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch
>
>
> java.util.zip.CRC32C implementation is available since Java 9.
> https://docs.oracle.com/javase/9/docs/api/java/util/zip/CRC32C.html
> Platform specific assembler intrinsics make it more effective than any pure 
> Java implementation.
> Hadoop is compiled against Java 8 but class constructor may be accessible 
> with method handle on 9 to instances implementing Checksum in runtime.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15033) Use java.util.zip.CRC32C for Java 9 and above

2018-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320119#comment-16320119
 ] 

ASF GitHub Bot commented on HADOOP-15033:
-

Github user dchuyko commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/291#discussion_r160654062
  
--- Diff: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
 ---
@@ -78,6 +87,18 @@ public static Checksum newCrc32() {
 return new CRC32();
   }
 
+  public static Checksum newCrc32C() {
--- End diff --

Made it package visible to keep it accessible from Crc32PerformanceTest. 
Note that newCrc32() is still public. Added javadoc to the method.


> Use java.util.zip.CRC32C for Java 9 and above
> -
>
> Key: HADOOP-15033
> URL: https://issues.apache.org/jira/browse/HADOOP-15033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, util
>Affects Versions: 3.0.0
>Reporter: Dmitry Chuyko
>  Labels: Java9, common, jdk9
> Attachments: HADOOP-15033.001.patch, HADOOP-15033.001.patch, 
> HADOOP-15033.002.patch, HADOOP-15033.003.patch, HADOOP-15033.003.patch, 
> HADOOP-15033.004.patch, HADOOP-15033.005.diff, HADOOP-15033.005.diff, 
> HADOOP-15033.005.diff, HADOOP-15033.005.patch, HADOOP-15033.006.patch, 
> HADOOP-15033.007.patch, HADOOP-15033.007.patch, HADOOP-15033.008.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch
>
>
> java.util.zip.CRC32C implementation is available since Java 9.
> https://docs.oracle.com/javase/9/docs/api/java/util/zip/CRC32C.html
> Platform specific assembler intrinsics make it more effective than any pure 
> Java implementation.
> Hadoop is compiled against Java 8 but class constructor may be accessible 
> with method handle on 9 to instances implementing Checksum in runtime.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15033) Use java.util.zip.CRC32C for Java 9 and above

2018-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320117#comment-16320117
 ] 

ASF GitHub Bot commented on HADOOP-15033:
-

Github user dchuyko commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/291#discussion_r160653684
  
--- Diff: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
 ---
@@ -43,6 +49,9 @@
   public static final int CHECKSUM_CRC32C  = 2;
   public static final int CHECKSUM_DEFAULT = 3; 
   public static final int CHECKSUM_MIXED   = 4;
+
+  public static final Logger LOG = 
LoggerFactory.getLogger(DataChecksum.class);
--- End diff --

Made it private. It was copied from the wrong place (one of three such cases 
in hadoop-common, btw).


> Use java.util.zip.CRC32C for Java 9 and above
> -
>
> Key: HADOOP-15033
> URL: https://issues.apache.org/jira/browse/HADOOP-15033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, util
>Affects Versions: 3.0.0
>Reporter: Dmitry Chuyko
>  Labels: Java9, common, jdk9
> Attachments: HADOOP-15033.001.patch, HADOOP-15033.001.patch, 
> HADOOP-15033.002.patch, HADOOP-15033.003.patch, HADOOP-15033.003.patch, 
> HADOOP-15033.004.patch, HADOOP-15033.005.diff, HADOOP-15033.005.diff, 
> HADOOP-15033.005.diff, HADOOP-15033.005.patch, HADOOP-15033.006.patch, 
> HADOOP-15033.007.patch, HADOOP-15033.007.patch, HADOOP-15033.008.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch, HADOOP-15033.009.patch, HADOOP-15033.009.patch, 
> HADOOP-15033.009.patch
>
>
> java.util.zip.CRC32C implementation is available since Java 9.
> https://docs.oracle.com/javase/9/docs/api/java/util/zip/CRC32C.html
> Platform specific assembler intrinsics make it more effective than any pure 
> Java implementation.
> Hadoop is compiled against Java 8 but class constructor may be accessible 
> with method handle on 9 to instances implementing Checksum in runtime.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15027) AliyunOSS: Support multi-thread pre-read to improve read from Hadoop to Aliyun OSS performance

2018-01-10 Thread wujinhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320103#comment-16320103
 ] 

wujinhu edited comment on HADOOP-15027 at 1/10/18 11:32 AM:


Change some default configurations.

{code:java}
-  public static final long MULTIPART_DOWNLOAD_SIZE_DEFAULT = 1024 * 1024;
+  public static final long MULTIPART_DOWNLOAD_SIZE_DEFAULT = 512 * 1024;

   public static final String MULTIPART_DOWNLOAD_THREAD_NUMBER_KEY =
   "fs.oss.multipart.download.threads";
-  public static final int MULTIPART_DOWNLOAD_THREAD_NUMBER_DEFAULT = 16;
+  public static final int MULTIPART_DOWNLOAD_THREAD_NUMBER_DEFAULT = 10;

   public static final String MAX_TOTAL_TASKS_KEY = "fs.oss.max.total.tasks";
   public static final int MAX_TOTAL_TASKS_DEFAULT = 128;

   public static final String MULTIPART_DOWNLOAD_AHEAD_PART_MAX_NUM_KEY =
   "fs.oss.multipart.download.ahead.part.max.number";
-  public static final int MULTIPART_DOWNLOAD_AHEAD_PART_MAX_NUM_DEFAULT = 8;
+  public static final int MULTIPART_DOWNLOAD_AHEAD_PART_MAX_NUM_DEFAULT = 4;
{code}



was (Author: wujinhu):
Change some default configuration.

{code:java}
-  public static final long MULTIPART_DOWNLOAD_SIZE_DEFAULT = 1024 * 1024;
+  public static final long MULTIPART_DOWNLOAD_SIZE_DEFAULT = 512 * 1024;

   public static final String MULTIPART_DOWNLOAD_THREAD_NUMBER_KEY =
   "fs.oss.multipart.download.threads";
-  public static final int MULTIPART_DOWNLOAD_THREAD_NUMBER_DEFAULT = 16;
+  public static final int MULTIPART_DOWNLOAD_THREAD_NUMBER_DEFAULT = 10;

   public static final String MAX_TOTAL_TASKS_KEY = "fs.oss.max.total.tasks";
   public static final int MAX_TOTAL_TASKS_DEFAULT = 128;

   public static final String MULTIPART_DOWNLOAD_AHEAD_PART_MAX_NUM_KEY =
   "fs.oss.multipart.download.ahead.part.max.number";
-  public static final int MULTIPART_DOWNLOAD_AHEAD_PART_MAX_NUM_DEFAULT = 8;
+  public static final int MULTIPART_DOWNLOAD_AHEAD_PART_MAX_NUM_DEFAULT = 4;
{code}


> AliyunOSS: Support multi-thread pre-read to improve read from Hadoop to 
> Aliyun OSS performance
> --
>
> Key: HADOOP-15027
> URL: https://issues.apache.org/jira/browse/HADOOP-15027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
> Attachments: HADOOP-15027.001.patch, HADOOP-15027.002.patch, 
> HADOOP-15027.003.patch, HADOOP-15027.004.patch, HADOOP-15027.005.patch, 
> HADOOP-15027.006.patch, HADOOP-15027.007.patch, HADOOP-15027.008.patch, 
> HADOOP-15027.009.patch, HADOOP-15027.010.patch, HADOOP-15027.011.patch, 
> HADOOP-15027.012.patch
>
>
> Currently, AliyunOSSInputStream uses single thread to read data from 
> AliyunOSS,  so we can do some refactoring by using multi-thread pre-read to 
> improve read performance.
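
In outline, the pre-read amounts to fetching fixed-size parts ahead of the 
current position on a thread pool. A sketch with illustrative names: the 
RangeReader stands in for the OSS ranged GET, and the defaults mirror the 
configuration keys quoted above:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

final class PreReadSketch {
  interface RangeReader {
    byte[] readRange(long start, long length); // stands in for the OSS client call
  }

  private static final long PART_SIZE = 512 * 1024; // fs.oss.multipart.download.size
  private final ExecutorService pool =
      Executors.newFixedThreadPool(10);             // fs.oss.multipart.download.threads

  /** Kick off up to {@code parts} concurrent part downloads ahead of {@code position}. */
  List<Future<byte[]>> prefetch(RangeReader reader, long position, int parts) {
    List<Future<byte[]>> inflight = new ArrayList<>();
    for (int i = 0; i < parts; i++) {
      final long start = position + (long) i * PART_SIZE;
      inflight.add(pool.submit(() -> reader.readRange(start, PART_SIZE)));
    }
    return inflight; // the stream consumes parts in order while later ones download
  }
}
{code}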



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15027) AliyunOSS: Support multi-thread pre-read to improve read from Hadoop to Aliyun OSS performance

2018-01-10 Thread wujinhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320103#comment-16320103
 ] 

wujinhu edited comment on HADOOP-15027 at 1/10/18 11:28 AM:


Change some default configuration.

{code:java}
-  public static final long MULTIPART_DOWNLOAD_SIZE_DEFAULT = 1024 * 1024;
+  public static final long MULTIPART_DOWNLOAD_SIZE_DEFAULT = 512 * 1024;

   public static final String MULTIPART_DOWNLOAD_THREAD_NUMBER_KEY =
   "fs.oss.multipart.download.threads";
-  public static final int MULTIPART_DOWNLOAD_THREAD_NUMBER_DEFAULT = 16;
+  public static final int MULTIPART_DOWNLOAD_THREAD_NUMBER_DEFAULT = 10;

   public static final String MAX_TOTAL_TASKS_KEY = "fs.oss.max.total.tasks";
   public static final int MAX_TOTAL_TASKS_DEFAULT = 128;

   public static final String MULTIPART_DOWNLOAD_AHEAD_PART_MAX_NUM_KEY =
   "fs.oss.multipart.download.ahead.part.max.number";
-  public static final int MULTIPART_DOWNLOAD_AHEAD_PART_MAX_NUM_DEFAULT = 8;
+  public static final int MULTIPART_DOWNLOAD_AHEAD_PART_MAX_NUM_DEFAULT = 4;
{code}



was (Author: wujinhu):
Change some default configuration.

{code:java}
474c474
< index dd71842fb87..dedc038f3f7 100644
---
> index dd71842fb87..a1070277d33 100644
482c482
< +  public static final long MULTIPART_DOWNLOAD_SIZE_DEFAULT = 512 * 1024;
---
> +  public static final long MULTIPART_DOWNLOAD_SIZE_DEFAULT = 1024 * 1024;
486c486
< +  public static final int MULTIPART_DOWNLOAD_THREAD_NUMBER_DEFAULT = 10;
---
> +  public static final int MULTIPART_DOWNLOAD_THREAD_NUMBER_DEFAULT = 16;
493c493
< +  public static final int MULTIPART_DOWNLOAD_AHEAD_PART_MAX_NUM_DEFAULT = 4;
---
> +  public static final int MULTIPART_DOWNLOAD_AHEAD_PART_MAX_NUM_DEFAULT = 8;
{code}


> AliyunOSS: Support multi-thread pre-read to improve read from Hadoop to 
> Aliyun OSS performance
> --
>
> Key: HADOOP-15027
> URL: https://issues.apache.org/jira/browse/HADOOP-15027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
> Attachments: HADOOP-15027.001.patch, HADOOP-15027.002.patch, 
> HADOOP-15027.003.patch, HADOOP-15027.004.patch, HADOOP-15027.005.patch, 
> HADOOP-15027.006.patch, HADOOP-15027.007.patch, HADOOP-15027.008.patch, 
> HADOOP-15027.009.patch, HADOOP-15027.010.patch, HADOOP-15027.011.patch, 
> HADOOP-15027.012.patch
>
>
> Currently, AliyunOSSInputStream uses single thread to read data from 
> AliyunOSS,  so we can do some refactoring by using multi-thread pre-read to 
> improve read performance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15027) AliyunOSS: Support multi-thread pre-read to improve read from Hadoop to Aliyun OSS performance

2018-01-10 Thread wujinhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320103#comment-16320103
 ] 

wujinhu commented on HADOOP-15027:
--

Change some default configuration.

{code:java}
474c474
< index dd71842fb87..dedc038f3f7 100644
---
> index dd71842fb87..a1070277d33 100644
482c482
< +  public static final long MULTIPART_DOWNLOAD_SIZE_DEFAULT = 512 * 1024;
---
> +  public static final long MULTIPART_DOWNLOAD_SIZE_DEFAULT = 1024 * 1024;
486c486
< +  public static final int MULTIPART_DOWNLOAD_THREAD_NUMBER_DEFAULT = 10;
---
> +  public static final int MULTIPART_DOWNLOAD_THREAD_NUMBER_DEFAULT = 16;
493c493
< +  public static final int MULTIPART_DOWNLOAD_AHEAD_PART_MAX_NUM_DEFAULT = 4;
---
> +  public static final int MULTIPART_DOWNLOAD_AHEAD_PART_MAX_NUM_DEFAULT = 8;
{code}


> AliyunOSS: Support multi-thread pre-read to improve read from Hadoop to 
> Aliyun OSS performance
> --
>
> Key: HADOOP-15027
> URL: https://issues.apache.org/jira/browse/HADOOP-15027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
> Attachments: HADOOP-15027.001.patch, HADOOP-15027.002.patch, 
> HADOOP-15027.003.patch, HADOOP-15027.004.patch, HADOOP-15027.005.patch, 
> HADOOP-15027.006.patch, HADOOP-15027.007.patch, HADOOP-15027.008.patch, 
> HADOOP-15027.009.patch, HADOOP-15027.010.patch, HADOOP-15027.011.patch, 
> HADOOP-15027.012.patch
>
>
> Currently, AliyunOSSInputStream uses a single thread to read data from 
> Aliyun OSS, so we can refactor it to use multi-threaded pre-reads to 
> improve read performance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15027) AliyunOSS: Support multi-thread pre-read to improve read from Hadoop to Aliyun OSS performance

2018-01-10 Thread wujinhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15027:
-
Attachment: HADOOP-15027.012.patch

> AliyunOSS: Support multi-thread pre-read to improve read from Hadoop to 
> Aliyun OSS performance
> --
>
> Key: HADOOP-15027
> URL: https://issues.apache.org/jira/browse/HADOOP-15027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
> Attachments: HADOOP-15027.001.patch, HADOOP-15027.002.patch, 
> HADOOP-15027.003.patch, HADOOP-15027.004.patch, HADOOP-15027.005.patch, 
> HADOOP-15027.006.patch, HADOOP-15027.007.patch, HADOOP-15027.008.patch, 
> HADOOP-15027.009.patch, HADOOP-15027.010.patch, HADOOP-15027.011.patch, 
> HADOOP-15027.012.patch
>
>
> Currently, AliyunOSSInputStream uses a single thread to read data from 
> Aliyun OSS, so we can refactor it to use multi-threaded pre-reads to 
> improve read performance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15110) Gauges are getting logged in exceptions from AutoRenewalThreadForUserCreds

2018-01-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15110:

Component/s: metrics

> Gauges are getting logged in exceptions from AutoRenewalThreadForUserCreds
> --
>
> Key: HADOOP-15110
> URL: https://issues.apache.org/jira/browse/HADOOP-15110
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: metrics, security
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Harshakiran Reddy
>Priority: Minor
>
> *Scenario*:
> -
> While running the renewal command for a principal, it prints the raw 
> objects for *renewalFailures* and *renewalFailuresTotal*:
> {noformat}
> bin> ./hdfs dfs -ls /
> 2017-12-12 12:31:50,910 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2017-12-12 12:31:52,312 WARN security.UserGroupInformation: Exception 
> encountered while running the renewal command for principal_name. (TGT end 
> time:1513070122000, renewalFailures: 
> org.apache.hadoop.metrics2.lib.MutableGaugeInt@1bbb43eb,renewalFailuresTotal: 
> org.apache.hadoop.metrics2.lib.MutableGaugeLong@424a0549)
> ExitCodeException exitCode=1: kinit: KDC can't fulfill requested option while 
> renewing credentials
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:994)
> at org.apache.hadoop.util.Shell.run(Shell.java:887)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1212)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:1306)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:1288)
> at 
> org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:1067)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> *Expected Result*:
> It should be a user-understandable value.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15110) Gauges are getting logged in exceptions from AutoRenewalThreadForUserCreds

2018-01-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15110:

Summary: Gauges are getting logged in exceptions from 
AutoRenewalThreadForUserCreds  (was: Objects are getting logged when we got 
exception from AutoRenewalThreadForUserCreds)

> Gauges are getting logged in exceptions from AutoRenewalThreadForUserCreds
> --
>
> Key: HADOOP-15110
> URL: https://issues.apache.org/jira/browse/HADOOP-15110
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: metrics, security
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Harshakiran Reddy
>Priority: Minor
>
> *Scenario*:
> -
> While running the renewal command for a principal, it prints the raw 
> objects for *renewalFailures* and *renewalFailuresTotal*:
> {noformat}
> bin> ./hdfs dfs -ls /
> 2017-12-12 12:31:50,910 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2017-12-12 12:31:52,312 WARN security.UserGroupInformation: Exception 
> encountered while running the renewal command for principal_name. (TGT end 
> time:1513070122000, renewalFailures: 
> org.apache.hadoop.metrics2.lib.MutableGaugeInt@1bbb43eb,renewalFailuresTotal: 
> org.apache.hadoop.metrics2.lib.MutableGaugeLong@424a0549)
> ExitCodeException exitCode=1: kinit: KDC can't fulfill requested option while 
> renewing credentials
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:994)
> at org.apache.hadoop.util.Shell.run(Shell.java:887)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1212)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:1306)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:1288)
> at 
> org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:1067)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> *Expected Result*:
> It should be a user-understandable value.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15110) Objects are getting logged when we got exception from AutoRenewalThreadForUserCreds

2018-01-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320025#comment-16320025
 ] 

Steve Loughran commented on HADOOP-15110:
-

Oh, I see: it was hidden off the side of the trace. Sorry!

Why not fix the gauges to log their values in toString()? You'd get what you 
want here, and so would everyone else.

I don't see any open JIRAs for this, so this can be the one; just change the 
title.

Tests: create the gauges, set them to known values, and use assertEquals on 
their toString() output.
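
A minimal sketch of that test, with a hypothetical class name; it assumes toString() is overridden in the gauge classes to return the current value (e.g. {{return String.valueOf(value());}}):

{code:java}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableGaugeInt;
import org.junit.Test;

public class TestGaugeToString {

  @Test
  public void gaugeToStringReportsItsValue() {
    MetricsRegistry registry = new MetricsRegistry("test");
    MutableGaugeInt gauge = registry.newGauge("renewalFailures",
        "renewal failures since last success", 0);
    gauge.incr();
    // Passes once toString() reports the value instead of the
    // default Object identity string (e.g. MutableGaugeInt@1bbb43eb).
    assertEquals("1", gauge.toString());
  }
}
{code}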

> Objects are getting logged when we got exception from 
> AutoRenewalThreadForUserCreds
> ---
>
> Key: HADOOP-15110
> URL: https://issues.apache.org/jira/browse/HADOOP-15110
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Harshakiran Reddy
>Priority: Minor
>
> *Scenario*:
> -
> While running the renewal command for a principal, it prints the raw 
> objects for *renewalFailures* and *renewalFailuresTotal*:
> {noformat}
> bin> ./hdfs dfs -ls /
> 2017-12-12 12:31:50,910 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2017-12-12 12:31:52,312 WARN security.UserGroupInformation: Exception 
> encountered while running the renewal command for principal_name. (TGT end 
> time:1513070122000, renewalFailures: 
> org.apache.hadoop.metrics2.lib.MutableGaugeInt@1bbb43eb,renewalFailuresTotal: 
> org.apache.hadoop.metrics2.lib.MutableGaugeLong@424a0549)
> ExitCodeException exitCode=1: kinit: KDC can't fulfill requested option while 
> renewing credentials
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:994)
> at org.apache.hadoop.util.Shell.run(Shell.java:887)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1212)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:1306)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:1288)
> at 
> org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:1067)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> *Expected Result*:
> It should be a user-understandable value.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14178) Move Mockito up to version 2.x

2018-01-10 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14178:
---
Attachment: HADOOP-14178.005-wip5.patch

005-wip5 patch:
* Rebased
* Reverted the LICENSE.txt change so that the precommit job runs unit tests 
on more modules

I ran all the tests via qbt, and there are no longer any tests failing 
because of the change.

> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, 
> HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, 
> HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, 
> HADOOP-14178.005.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to Maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That's not just defining actions as closures, but also support for Optional 
> types, mocking default methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, the 
> cost of upgrading is low. The good news: test tools usually come with good 
> test coverage. The bad: Mockito goes deep into Java bytecode.
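
(A minimal sketch of those Java 8-era features in practice; the Lookup interface and all names here are hypothetical.)

{code:java}
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.CALLS_REAL_METHODS;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Optional;
import org.junit.Test;

public class Mockito2FeaturesSketch {

  /** Hypothetical interface, used only to illustrate the new features. */
  interface Lookup {
    Optional<String> find(String key);
    default String findOrDefault(String key) {
      return find(key).orElse("none");
    }
  }

  @Test
  public void java8EraFeatures() {
    // Answers defined as lambdas, with the getArgument() shorthand.
    Lookup lookup = mock(Lookup.class);
    when(lookup.find("k")).thenAnswer(inv -> Optional.of(inv.getArgument(0)));
    assertEquals(Optional.of("k"), lookup.find("k"));

    // Unstubbed Optional-returning methods yield Optional.empty(), not null.
    assertEquals(Optional.empty(), lookup.find("other"));

    // Default methods on interfaces can run for real while the abstract
    // methods stay stubbed.
    Lookup real = mock(Lookup.class, CALLS_REAL_METHODS);
    doReturn(Optional.of("v")).when(real).find("k");
    assertEquals("v", real.findOrDefault("k"));
  }
}
{code}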



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org