[jira] [Commented] (HADOOP-13653) ZKDelegationTokenSecretManager curator client seems to rapidly connect & disconnect from ZK

2016-10-01 Thread Alex Ivanov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15539860#comment-15539860
 ] 

Alex Ivanov commented on HADOOP-13653:
--

[~xiaochen], looking at {{KMSClientProvider.java}}, it seems there is already a 
property to set the connection timeout: {{hadoop.security.kms.client.timeout}}. 
If that is indeed the case, perhaps you can close the referenced jira.
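
A minimal sketch of overriding it via the {{Configuration}} API (the property 
name is taken from {{KMSClientProvider.java}}; the 60-second default is my 
reading of the code, not verified across branches):

{code}
// Hedged sketch: set the KMS client timeout (in seconds) in the configuration
// that the KMS key provider will be created from.
import org.apache.hadoop.conf.Configuration;

public class KmsClientTimeoutExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setInt("hadoop.security.kms.client.timeout", 30); // seconds
    System.out.println("KMS client timeout: "
        + conf.getInt("hadoop.security.kms.client.timeout", 60) + "s");
  }
}
{code}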

> ZKDelegationTokenSecretManager curator client seems to rapidly connect & 
> disconnect from ZK
> ---
>
> Key: HADOOP-13653
> URL: https://issues.apache.org/jira/browse/HADOOP-13653
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Alex Ivanov
>Priority: Critical
>
> At times, KMS gets into a connect/disconnect loop with Zookeeper. 
> It is not clear what causes the connection to be closed. I didn't see any 
> issues on the ZK server side, so the issue must reside on the client side.
> *Example errors*
> NOTE: I had to filter the logs heavily since they were many GB in size 
> (thanks to curator error logging). What is left is an illustration of the 
> delegation token creations, and the Zookeeper sessions getting lost and 
> re-established over the course of 2 hours.
> {code}
> 2016-09-25 01:43:04,377 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [75027a21ab399aa7789d6907d70fadc4, 46]
> 2016-09-25 01:43:04,557 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [1106d0754d43dcf29324d7be737f51f0, 46]
> 2016-09-25 01:43:11,846 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [4426092c861f49c6ba0c60b49b9539e5, 46]
> 2016-09-25 01:43:48,974 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [a99efff2705d6489deb059098f18818f, 46]
> 2016-09-25 01:43:49,174 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [398b5962fd647880961ba5e86a77b414, 46]
> 2016-09-25 01:44:03,359 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [413187e62a21b5459422b5c524315d06, 46]
> 2016-09-25 01:44:03,625 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [7cc2c0d82edd40e7e6f6f40af20d04d3, 46]
> 2016-09-25 01:44:06,062 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [bd9394fce20607c12bc00104bea49284, 46]
> 2016-09-25 01:44:07,134 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [7dad3bd10526517e5e1cfccd2e96074a, 46]
> 2016-09-25 01:44:07,230 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [a712ed40687580647d070c9c7f525e15, 46]
> 2016-09-25 01:44:48,481 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [44bfefa31192c68e3cc053eec4e57e14, 46]
> 2016-09-25 01:44:48,522 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [67efc2aa65eeba701ad7d3d7bab51def, 46]
> 2016-09-25 01:44:50,259 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [b43e641f58dfbd2c72550ab6804f37d1, 46]
> 2016-09-25 01:44:54,271 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [ac2fbcf404c633759b75e6d6aae00e05, 46]
> 2016-09-25 01:44:56,141 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [cdbd224079a4a10400d00d0b8eece008, 46]
> 2016-09-25 01:45:01,328 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [e03218f4835524f3d05519d27bb04e35, 46]
> 2016-09-25 01:45:02,728 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [569ae6d666d584b6843fffc47a63d147, 46]
> 2016-09-25 01:45:02,832 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [c9048271483da234c12f75569b9513c6, 46]
> 2016-09-25 01:45:05,536 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [f519d621389e41b63e8d92b4cb15f832, 46]
> 2016-09-25 01:45:07,886 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [45cf6ba58b2bb348ac5e88fa18fe9dad, 46]
> 2016-09-25 01:47:24,346 WARN  ConnectionState - Connection attempt 
> unsuccessful after 66294 (greater than max timeout of 6). Resetting 
> connection and trying again with a new connection.
> 2016-09-25 01:47:25,120 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [f160a865db69ef33548f146c9b3b84c6, 46]
> 2016-09-25 01:47:25,276 INFO  AbstractDelegationTokenSecretManager - Creating 
> password for identifier: [9d60add471464e01ef691c43bd901d96, 46]
> 2016-09-25 01:47:28,739 INFO  AbstractDelegationTokenSecretManager - Creating 
> 

[jira] [Updated] (HADOOP-13617) Swift client retrying original request is using expired token after re-authentication

2016-10-01 Thread Yulei Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yulei Li updated HADOOP-13617:
--
Attachment: HADOOP-13617.patch

To test the patch, set fs.swift.token.expired in auth-keys.xml to a value 
greater than the token expiration time of your environment. If the token 
expiration time is greater than 900s, also change the value of 
surefire.fork.timeout in hadoop-project/pom.xml to be greater than the 
expiration time, or the test will fail.
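
For context, the core of the bug is that the retry goes out with the token 
captured before re-authentication. A minimal self-contained sketch of the 
correct pattern (all names below are hypothetical stand-ins, not the 
SwiftRestClient API):

{code}
// Illustrative retry-after-reauthentication sketch; the types here are
// hypothetical stand-ins, not the actual SwiftRestClient code.
import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;

public class ReauthRetrySketch {
  interface Call<T> { T run(String token) throws IOException; }
  static class AuthExpiredException extends IOException {}

  private final AtomicReference<String> token = new AtomicReference<>("t0");

  private void reauthenticate() {
    token.set("t1"); // stand-in for acquiring a fresh auth token
  }

  <T> T exec(Call<T> call) throws IOException {
    try {
      return call.run(token.get()); // may fail with 401 mid-operation
    } catch (AuthExpiredException e) {
      reauthenticate();
      // The bug in HADOOP-13617: the retry reused the token captured before
      // re-auth. The retry must re-read the refreshed token, as done here.
      return call.run(token.get());
    }
  }
}
{code}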

> Swift client retrying original request is using expired token after 
> re-authentication 
> --
>
> Key: HADOOP-13617
> URL: https://issues.apache.org/jira/browse/HADOOP-13617
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Affects Versions: 2.6.0
> Environment: Linux EL6
>Reporter: Steve Yang
>Assignee: Yulei Li
>Priority: Blocker
> Attachments: 2016_09_13.stderrout.log, HADOOP-13617.patch
>
>
> library used: org.apache.hadoop:hadoop-openstack:2.6.0
> For a long-running Swift read operation (e.g., reading a large container), the 
> issued auth token has a life span of at most 30 minutes from Oracle Storage 
> Service. If the token expires in the middle of the read operation, the 
> SwiftRestClient 
> (https://github.com/apache/hadoop/blob/release-2.6.0/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java#L1701)
>  re-authenticates and acquires a new auth token. However, the retry request 
> still uses the old, expired token, causing the whole operation to fail.
> Because of this bug, any meaningful (i.e., long-running) Swift operation is 
> impossible.
> Here is a summary of what happened with DEBUG logging turned on:
> ==
> 1. initial token acquired which will expire on 19:56:44(PDT; UTC-4):
> ---
> 2016-09-13 19:52:37 DEBUG [pool-3-thread-1] SwiftRestClient:268 - setAuth:
> endpoint=https://em2.storage.oraclecloud.com/v1/Storage-paas132;
> objectURI=https://em2.storage.oraclecloud.com/object_endpoint/null;
> token=AccessToken{id='AUTH_tk2dd9d639bbb992089dca008123c3046f',
> tenant=org.apache.hadoop.fs.swift.auth.entities.Tenant@af28493,
> expires='2016-09-13T23:56:44Z'}
> 2. token expiration and re-authentication:
> --
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1727 - GET
> https://em2.storage.oraclecloud.com/v1/Storage-paas132/allTaxi/?prefix=000182/&format=json&delimiter=/
> X-Auth-Token: AUTH_tk2dd9d639bbb992089dca008123c3046f
> User-Agent: Apache Hadoop Swift Client 2.6.0-cdh5.7.1 from
> ae44a8970a3f0da58d82e0fc65275fff8deabffd by jenkins source checksum
> 298b68dc3b308983f04cb37e8416f13
> .
> 2016-09-13 19:56:44 WARN [pool-3-thread-1] HttpMethodDirector:697 - Unable
> to respond to any of these challenges: {token=Token}
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1731 - Status
> code = 401
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1698 -
> Reauthenticating
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1079 - started
> authentication
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1228 -
> Authenticating with Authenticate as tenant 'Storage-paas132' user
> 'radha.sriniva...@oracle.com' with password of length 9
> 2016-09-13 19:56:44 DEBUG [pool-3-thread-1] SwiftRestClient:1727 - POST
> https://em2.storage.oraclecloud.com/auth/v2.0/tokens
> User-Agent: Apache Hadoop Swift Client 2.6.0-cdh5.7.1 from
> ae44a8970a3f0da58d82e0fc65275fff8deabffd by jenkins source checksum
> 298b68dc3b308983f04cb37e8416f13
> .
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:1731 - Status
> code = 200
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:1149 - Catalog
> entry [swift: object-store];
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:1156 - Found
> swift catalog as swift => object-store
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:1169 - Endpoint
> [US => https://em2.storage.oraclecloud.com/v1/Storage-paas132 / null];
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:268 - setAuth:
> endpoint=https://em2.storage.oraclecloud.com/v1/Storage-paas132;
> objectURI=https://em2.storage.oraclecloud.com/object_endpoint/null;
> token=AccessToken{id='AUTH_tk56bbb4d6fef57b7eeba7acae598f837c',
> tenant=org.apache.hadoop.fs.swift.auth.entities.Tenant@4f03838d,
> expires='2016-09-14T00:26:45Z'}
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:1216 -
> authenticated against https://em2.storage.oraclecloud.com/v1/Storage-paas132.
> 2016-09-13 19:56:45 DEBUG [pool-3-thread-1] SwiftRestClient:

[jira] [Commented] (HADOOP-13669) KMS Server should log exceptions before throwing

2016-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15539173#comment-15539173
 ] 

Hadoop QA commented on HADOOP-13669:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-common-project/hadoop-kms: The patch 
generated 48 new + 1 unchanged - 5 fixed = 49 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-common-project/hadoop-kms generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
6s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-kms |
|  |  Exception is caught when Exception is not thrown in 
org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map)  At KMS.java:is not 
thrown in org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map)  At 
KMS.java:[line 169] |
|  |  Exception is caught when Exception is not thrown in 
org.apache.hadoop.crypto.key.kms.server.KMS.generateEncryptedKeys(String, 
String, int)  At KMS.java:is not thrown in 
org.apache.hadoop.crypto.key.kms.server.KMS.generateEncryptedKeys(String, 
String, int)  At KMS.java:[line 501] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13669 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831230/HADOOP-13369.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9b39b637268b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / fe9ebe2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10638/artifact/patchprocess/diff-c

[jira] [Updated] (HADOOP-13669) KMS Server should log exceptions before throwing

2016-10-01 Thread Suraj Acharya (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suraj Acharya updated HADOOP-13669:
---
Status: Patch Available  (was: Open)

> KMS Server should log exceptions before throwing
> 
>
> Key: HADOOP-13669
> URL: https://issues.apache.org/jira/browse/HADOOP-13669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>  Labels: supportability
> Attachments: HADOOP-13369.patch
>
>
> In a recent investigation, it turned out that when KMS throws an exception (into 
> Tomcat), it is not logged anywhere; we can only see the exception message 
> on the client side, but not the stacktrace. Logging the stacktrace would help 
> debugging.






[jira] [Updated] (HADOOP-13669) KMS Server should log exceptions before throwing

2016-10-01 Thread Suraj Acharya (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suraj Acharya updated HADOOP-13669:
---
Attachment: HADOOP-13369.patch

Initial draft
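
For reviewers: the shape of the change is a log-before-throw pattern. A minimal 
illustrative sketch (the logger usage and method body are mine, not lifted from 
the patch):

{code}
// Illustrative log-before-throw sketch; not the actual patch.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class KmsLogBeforeThrowSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(KmsLogBeforeThrowSketch.class);

  public void createKey(String name) throws Exception {
    try {
      // ... key-creation work elided ...
    } catch (Exception e) {
      // Log the full stacktrace server-side before propagating, so the
      // failure shows up in the KMS logs and not only at the client.
      LOG.error("Error creating key " + name, e);
      throw e;
    }
  }
}
{code}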

> KMS Server should log exceptions before throwing
> 
>
> Key: HADOOP-13669
> URL: https://issues.apache.org/jira/browse/HADOOP-13669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>  Labels: supportability
> Attachments: HADOOP-13369.patch
>
>
> In a recent investigation, it turned out that when KMS throws an exception (into 
> Tomcat), it is not logged anywhere; we can only see the exception message 
> on the client side, but not the stacktrace. Logging the stacktrace would help 
> debugging.






[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-10-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15538511#comment-15538511
 ] 

Steve Loughran commented on HADOOP-13560:
-

HADOOP-13566 highlights that S3AFastOutputStream NPEd on a write to a closed 
stream. Make sure there is a test here for the same action.
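
Something like this hedged JUnit sketch (class and test names are mine, not 
from any patch):

{code}
// Hedged sketch: a closed stream should fail with an IOException, not the
// NPE from HADOOP-13566. The FileSystem setup is elided/assumed.
import static org.junit.Assert.fail;
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

public class TestWriteAfterClose {
  private FileSystem fs; // assumed: an initialized S3A FileSystem

  @Test
  public void testWriteAfterClose() throws IOException {
    FSDataOutputStream out = fs.create(new Path("/tests3a/closed.txt"), true);
    out.write('a');
    out.close();
    try {
      out.write('b');
      fail("Expected an IOException on write after close");
    } catch (IOException expected) {
      // expected: a well-defined failure instead of an NPE
    }
  }
}
{code}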

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works.
> 2. Verify that the metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations by committers using rename.






[jira] [Updated] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-10-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13560:

Status: Patch Available  (was: Open)

Latest PR addresses Chris's and Rajesh's comments.

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works.
> 2. Verify that the metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations by committers using rename.






[jira] [Resolved] (HADOOP-13566) NPE in S3AFastOutputStream.write

2016-10-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13566.
-
   Resolution: Duplicate
Fix Version/s: 2.9.0

> NPE in S3AFastOutputStream.write
> 
>
> Key: HADOOP-13566
> URL: https://issues.apache.org/jira/browse/HADOOP-13566
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.9.0
>
>
> During scale tests, managed to create an NPE
> {code}
> test_001_CreateHugeFile(org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate)
>   Time elapsed: 2.258 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.fs.s3a.S3AFastOutputStream.write(S3AFastOutputStream.java:191)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
>   at java.io.DataOutputStream.write(DataOutputStream.java:107)
>   at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate.test_001_CreateHugeFile(ITestS3AHugeFileCreate.java:132)
> {code}
> trace implies that {{buffer == null}}






[jira] [Commented] (HADOOP-13566) NPE in S3AFastOutputStream.write

2016-10-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15538506#comment-15538506
 ] 

Steve Loughran commented on HADOOP-13566:
-

This is going to be obsoleted by HADOOP-13560, though we may want to add a test 
there to see what exception is raised.

> NPE in S3AFastOutputStream.write
> 
>
> Key: HADOOP-13566
> URL: https://issues.apache.org/jira/browse/HADOOP-13566
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.9.0
>
>
> During scale tests, managed to create an NPE
> {code}
> test_001_CreateHugeFile(org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate)
>   Time elapsed: 2.258 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.fs.s3a.S3AFastOutputStream.write(S3AFastOutputStream.java:191)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
>   at java.io.DataOutputStream.write(DataOutputStream.java:107)
>   at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate.test_001_CreateHugeFile(ITestS3AHugeFileCreate.java:132)
> {code}
> trace implies that {{buffer == null}}






[jira] [Commented] (HADOOP-11086) Upgrade jets3t to 0.9.2

2016-10-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15538324#comment-15538324
 ] 

Steve Loughran commented on HADOOP-11086:
-

I will see if I can look at this and bump up the version. If it passes the 
tests, I will get it in.

> Upgrade jets3t to 0.9.2
> ---
>
> Key: HADOOP-11086
> URL: https://issues.apache.org/jira/browse/HADOOP-11086
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Matteo Bertozzi
>Priority: Minor
> Attachments: HADOOP-11086-v0.patch, HADOOP-11086.2.patch
>
>
> jets3t 0.9.2 contains a fix for a bug that caused multi-part uploads with 
> server-side encryption to fail.
> http://jets3t.s3.amazonaws.com/RELEASE_NOTES.html
> (it also removes an exception thrown from the RestS3Service constructor, which 
> requires removing the try/catch around that code)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-10-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15538178#comment-15538178
 ] 

Steve Loughran commented on HADOOP-13614:
-

Yes, let's do that. Interesting that you've seen failures there and I haven't. I 
have had distcp fail on a multipart purge, but that is something addressed in 
HADOOP-13560, so I didn't replicate it.

For those scale tests I've made the timeout programmable via a system property 
that maven passes in, and the timeout is checked before running the big tests. 
Nobody (else) wants to find a 5GB test run failed because maven/junit killed it 
after an hour.
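
Roughly this mechanism, sketched below (the property name and default are 
illustrative, not the actual keys):

{code}
// Hedged sketch of a property-driven JUnit timeout; the property name
// "fs.s3a.scale.test.timeout" and the 30-minute default are illustrative.
import java.util.concurrent.TimeUnit;
import org.junit.Rule;
import org.junit.rules.Timeout;

public abstract class S3AScaleTestBaseSketch {

  protected static int getTestTimeoutSeconds() {
    // maven can pass e.g. -Dfs.s3a.scale.test.timeout=7200 for multi-GB runs
    return Integer.getInteger("fs.s3a.scale.test.timeout", 30 * 60);
  }

  // JUnit enforces this per test method, so an over-long upload fails with
  // a test timeout instead of being killed by the surefire fork timeout.
  @Rule
  public Timeout testTimeout =
      new Timeout(getTestTimeoutSeconds(), TimeUnit.SECONDS);
}
{code}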

> Purge some superfluous/obsolete S3 FS tests that are slowing test runs down
> ---
>
> Key: HADOOP-13614
> URL: https://issues.apache.org/jira/browse/HADOOP-13614
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13614-branch-2-001.patch, 
> HADOOP-13614-branch-2-002.patch, HADOOP-13614-branch-2-002.patch, testrun.txt
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}}, which writes then reads files of up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones; and cut them where appropriate.






[jira] [Updated] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-10-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13614:

Status: Open  (was: Patch Available)

> Purge some superfluous/obsolete S3 FS tests that are slowing test runs down
> ---
>
> Key: HADOOP-13614
> URL: https://issues.apache.org/jira/browse/HADOOP-13614
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13614-branch-2-001.patch, 
> HADOOP-13614-branch-2-002.patch, HADOOP-13614-branch-2-002.patch, testrun.txt
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}}, which writes then reads files of up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones; and cut them where appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org