[jira] [Updated] (HADOOP-14188) Remove the usage of org.mockito.internal.util.reflection.Whitebox

2017-08-15 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14188:
---
Attachment: HADOOP-14188.08.patch

08: rebased

> Remove the usage of org.mockito.internal.util.reflection.Whitebox
> -
>
> Key: HADOOP-14188
> URL: https://issues.apache.org/jira/browse/HADOOP-14188
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14188.01.patch, HADOOP-14188.02.patch, 
> HADOOP-14188.03.patch, HADOOP-14188.04.patch, HADOOP-14188.05.patch, 
> HADOOP-14188.06.patch, HADOOP-14188.07.patch, HADOOP-14188.08.patch
>
>
> org.mockito.internal.util.reflection.Whitebox was removed in Mockito 2.1, so 
> we need to remove its usage in order to upgrade Mockito. Getter/setter 
> methods can be used instead of this hack.
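
For illustration, the replacement pattern looks like this (a minimal sketch 
with a hypothetical class; the names are not from the patch):

{code:java}
import com.google.common.annotations.VisibleForTesting;

class Cache {
  private int maxSize = 128;

  // Test-only mutator replacing the old reflection hack:
  //   Whitebox.setInternalState(cache, "maxSize", n);
  @VisibleForTesting
  void setMaxSizeForTesting(int n) {
    this.maxSize = n;
  }

  int getMaxSize() {
    return maxSize;
  }
}
{code}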






[jira] [Commented] (HADOOP-14251) Credential provider should handle property key deprecation

2017-08-15 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128357#comment-16128357
 ] 

John Zhuge commented on HADOOP-14251:
-

[~steve_l] Could you please take a look at patch 003?

> Credential provider should handle property key deprecation
> --
>
> Key: HADOOP-14251
> URL: https://issues.apache.org/jira/browse/HADOOP-14251
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
> Attachments: HADOOP-14251.001.patch, HADOOP-14251.002.patch, 
> HADOOP-14251.003.patch
>
>
> The properties with old keys stored in a credential store cannot be read via 
> the new property keys, even though the old keys have been deprecated.
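
If it helps frame the fix, a minimal sketch of the deprecation machinery 
involved ({{Configuration.addDeprecation}} and {{getPassword}} are existing 
APIs; the property keys below are purely illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class DeprecatedKeyLookup {
  public static void main(String[] args) throws Exception {
    // Illustrative keys only: map an old property key to its replacement.
    Configuration.addDeprecation("fs.old.service.password",
        "fs.new.service.password");

    // getPassword consults the configured credential providers. The bug is
    // that a secret stored under the old key is not found when resolved via
    // the new key, even though the keys are linked by deprecation.
    Configuration conf = new Configuration();
    char[] secret = conf.getPassword("fs.new.service.password");
    System.out.println(secret != null ? "found" : "not found");
  }
}
{code}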






[jira] [Commented] (HADOOP-14671) Upgrade to Apache Yetus 0.5.0

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128356#comment-16128356
 ] 

Hadoop QA commented on HADOOP-14671:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 2s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
11s{color} | {color:green} The patch generated 0 new + 0 unchanged - 104 fixed 
= 0 total (was 104) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  3m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14671 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882084/HADOOP-14671.001.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux 782a1998a8ec 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 588c190 |
| shellcheck | v0.4.6 |
| modules | C:  U:  |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13042/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Upgrade to Apache Yetus 0.5.0
> -
>
> Key: HADOOP-14671
> URL: https://issues.apache.org/jira/browse/HADOOP-14671
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation, test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14671.001.patch
>
>
> Apache Yetus 0.5.0 was released.  Let's upgrade the bundled reference to the 
> new version.






[jira] [Updated] (HADOOP-14671) Upgrade to Apache Yetus 0.5.0

2017-08-15 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14671:
---
Assignee: Akira Ajisaka
  Status: Patch Available  (was: Open)

> Upgrade to Apache Yetus 0.5.0
> -
>
> Key: HADOOP-14671
> URL: https://issues.apache.org/jira/browse/HADOOP-14671
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation, test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14671.001.patch
>
>
> Apache Yetus 0.5.0 was released.  Let's upgrade the bundled reference to the 
> new version.






[jira] [Updated] (HADOOP-14671) Upgrade to Apache Yetus 0.5.0

2017-08-15 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14671:
---
Attachment: HADOOP-14671.001.patch

> Upgrade to Apache Yetus 0.5.0
> -
>
> Key: HADOOP-14671
> URL: https://issues.apache.org/jira/browse/HADOOP-14671
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation, test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
> Attachments: HADOOP-14671.001.patch
>
>
> Apache Yetus 0.5.0 was released.  Let's upgrade the bundled reference to the 
> new version.






[jira] [Updated] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-15 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14705:
---
Attachment: HADOOP-14705.04.patch

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch, HADOOP-14705.04.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead with 
> the KMS occupies the majority of the time. So this jira proposes to add a 
> batched interface to re-encrypt multiple EDEKs in one call.






[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-15 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128322#comment-16128322
 ] 

Xiao Chen commented on HADOOP-14705:


Thanks Wei-Chiu for reviewing, good comments! Patch 4 addresses all comments, 
with explanations / exceptions below.

bq. if all ekv are the same, wouldn’t it be more efficient to optimize it 
somehow?
Yes, good catch. The client side has a bug: it should not have added keyName to 
the JSON.
The request looks like the one below; keyName should only be on the URL path:
{noformat}
POST http://HOST:PORT/kms/v1/key/<key-name>/_reencryptbatch
Content-Type: application/json

[
  {
    "versionName"         : "<encrypted key version name>",
    "iv"                  : "<iv>",            // base64
    "encryptedKeyVersion" : {
      "versionName"       : "EEK",
      "material"          : "<material>"       // base64
    }
  },
  {
    "versionName"         : "<encrypted key version name>",
    "iv"                  : "<iv>",            // base64
    "encryptedKeyVersion" : {
      "versionName"       : "EEK",
      "material"          : "<material>"       // base64
    }
  },
  ...
]
{noformat}

bq. should the last parameter be Map.class?
The {{response}} is a List, hence List.class.

bq. Question: is there a practical size limit for a KMS request?
Not on the request itself, but the client sending it and the server receiving 
it both need to be able to hold and parse it. As it turned out in HDFS-10899, 
batches bigger than 2k may trigger an edit log sync and impact performance.
For the KMS here, I added a static 10k {{maxNumPerBatch}} as a safeguard too. 
Security-wise this is okay because the ACL is checked before iterating through 
the JSON payload.
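
As a minimal sketch of those two points (illustrative Jackson code, not the 
patch itself):

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.util.List;
import com.fasterxml.jackson.databind.ObjectMapper;

public class BatchParsing {
  // The 10k safeguard mentioned above.
  static final int MAX_NUM_PER_BATCH = 10000;

  // The payload is a JSON array, so it is bound with List.class, not
  // Map.class.
  static List<?> parseBatch(InputStream in) throws IOException {
    List<?> batch = new ObjectMapper().readValue(in, List.class);
    if (batch.size() > MAX_NUM_PER_BATCH) {
      throw new IOException("Batch of " + batch.size()
          + " EDEKs exceeds the limit of " + MAX_NUM_PER_BATCH);
    }
    return batch;
  }
}
{code}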

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead with 
> the KMS occupies the majority of the time. So this jira proposes to add a 
> batched interface to re-encrypt multiple EDEKs in one call.






[jira] [Updated] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-15 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14705:
---
Attachment: HADOOP-14705.04.patch

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead with 
> the KMS occupies the majority of the time. So this jira proposes to add a 
> batched interface to re-encrypt multiple EDEKs in one call.






[jira] [Updated] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-15 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14705:
---
Attachment: (was: HADOOP-14705.04.patch)

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead with 
> the KMS occupies the majority of the time. So this jira proposes to add a 
> batched interface to re-encrypt multiple EDEKs in one call.






[jira] [Commented] (HADOOP-14560) Make HttpServer2 backlog size configurable

2017-08-15 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128308#comment-16128308
 ] 

John Zhuge commented on HADOOP-14560:
-

Looks like the pre-commit test picked the PR when both a PR and an attached 
patch file exist.

[~aw] What is the best way to move forward? I'd like to run pre-commit on a 
patch file based on the PR.

> Make HttpServer2 backlog size configurable
> --
>
> Key: HADOOP-14560
> URL: https://issues.apache.org/jira/browse/HADOOP-14560
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Alexander Krasheninnikov
>Assignee: Alexander Krasheninnikov
>  Labels: webhdfs
> Attachments: HADOOP-14560.002.patch
>
>
> While operating WebHDFS at Badoo, we've faced an issue: the hardcoded socket 
> backlog size (128) is not enough for our purposes.
> When performing ~600 concurrent requests, clients receive a "Connection 
> refused" error.
> We are proposing a patch to make this backlog size configurable.
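
For reference, a sketch of the Jetty-level knob such a patch would expose 
(assuming the Jetty 9 {{ServerConnector}} API; the port, value, and class name 
are illustrative):

{code:java}
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class BacklogExample {
  public static void main(String[] args) throws Exception {
    Server server = new Server();
    ServerConnector connector = new ServerConnector(server);
    // The accept queue size is the listen(2) backlog handed to the OS;
    // HttpServer2 currently hardcodes this to 128.
    connector.setAcceptQueueSize(1024);  // value illustrative
    connector.setPort(8080);
    server.addConnector(connector);
    server.start();
  }
}
{code}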






[jira] [Commented] (HADOOP-14693) Upgrade JUnit from 4 to 5

2017-08-15 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128286#comment-16128286
 ] 

Akira Ajisaka commented on HADOOP-14693:


+1 for option 2. Thanks.

> Upgrade JUnit from 4 to 5
> -
>
> Key: HADOOP-14693
> URL: https://issues.apache.org/jira/browse/HADOOP-14693
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>
> JUnit 4 does not support Java 9. We need to upgrade this.






[jira] [Assigned] (HADOOP-14771) hadoop-client does not include hadoop-yarn-client

2017-08-15 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HADOOP-14771:
---

Assignee: Ajay Kumar

> hadoop-client does not include hadoop-yarn-client
> -
>
> Key: HADOOP-14771
> URL: https://issues.apache.org/jira/browse/HADOOP-14771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Haibo Chen
>Assignee: Ajay Kumar
>Priority: Critical
>
> The hadoop-client does not include hadoop-yarn-client; thus, the shaded 
> hadoop-client is incomplete. 






[jira] [Commented] (HADOOP-14776) clean up ITestS3AFileSystemContract

2017-08-15 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128218#comment-16128218
 ] 

Ajay Kumar commented on HADOOP-14776:
-

[~steve_l] Please review the patch.

> clean up ITestS3AFileSystemContract
> ---
>
> Key: HADOOP-14776
> URL: https://issues.apache.org/jira/browse/HADOOP-14776
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14776.01.patch
>
>
> With the move of {{FileSystemContractTest}} to JUnit 4, the bits of 
> {{ITestS3AFileSystemContract}} which override existing methods just to skip 
> them can be cleaned up: the subclasses could call assume() so their skippage 
> gets noted.
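
A sketch of that cleanup under JUnit 4's {{Assume}} (the overridden method 
name is illustrative):

{code:java}
import org.apache.hadoop.fs.FileSystemContractBaseTest;
import org.junit.Assume;

public class ITestS3AFileSystemContract extends FileSystemContractBaseTest {
  // Instead of overriding a contract test with an empty body, fail the
  // assumption: JUnit 4 then reports the test as skipped, not passed.
  @Override
  public void testRenameDirectoryAsExistingDirectory() throws Exception {
    Assume.assumeTrue("unsupported on S3A; skipping", false);
  }
}
{code}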






[jira] [Commented] (HADOOP-13998) Merge initial S3guard release into trunk

2017-08-15 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128169#comment-16128169
 ] 

Aaron Fabbri commented on HADOOP-13998:
---

v4 applied cleanly. S3A tests w/o S3Guard all passed. Added -Dparallel-tests 
and -Ds3guard and saw some failures (ITestS3AEncryptionSSEC stuff and a couple 
of ITestS3AContractRootDir). Rerunning w/o parallel mode, then I'll run some 
tests with DynamoDB.

> Merge initial S3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch, 
> HADOOP-13998-003.patch, HADOOP-13998-004.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk






[jira] [Commented] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2017-08-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128142#comment-16128142
 ] 

Allen Wittenauer commented on HADOOP-12082:
---

Is this going to get documented?

> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-12082-001.patch, HADOOP-12082-002.patch, 
> HADOOP-12082-003.patch, HADOOP-12082-004.patch, HADOOP-12082-005.patch, 
> HADOOP-12082-006.patch, HADOOP-12082-branch-2-001.patch, 
> HADOOP-12082-branch-2-002.patch, HADOOP-12082-branch-2-003.patch, 
> HADOOP-12082-branch-2.8-001.patch, HADOOP-12082-branch-2.8-002.patch, 
> HADOOP-12082-branch-2.8.patch, HADOOP-12082-branch-2.patch, 
> HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, hadoop-ldap-auth-v3.patch, 
> hadoop-ldap-auth-v4.patch, hadoop-ldap-auth-v5.patch, 
> hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the 
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a 
> custom authentication scheme (in addition to Kerberos) via the 
> AltKerberosAuthenticationHandler class. But that selects the authentication 
> mechanism based on the User-Agent HTTP header, which does not conform to 
> HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]
> - HTTP provides a simple challenge-response authentication mechanism that 
> can be used by a server to challenge a client request and by a client to 
> provide the necessary authentication information. 
> - This mechanism is initiated by the server sending a 401 (Unauthorized) 
> response with a ‘WWW-Authenticate’ header which includes at least one 
> challenge indicating the authentication scheme(s) and parameters applicable 
> to the Request-URI. 
> - If the server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Unauthorized) response, and each challenge 
> may use a different auth-scheme. 
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports the 
> Kerberos authentication scheme and uses ‘Negotiate’ as the challenge in the 
> ‘WWW-Authenticate’ response header. As per the following documentation, the 
> ‘Negotiate’ challenge scheme is only applicable to Kerberos (and Windows 
> NTLM) authentication:
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> For LDAP authentication, on the other hand, the ‘Basic’ authentication 
> scheme is typically used (note that TLS is mandatory with the Basic 
> authentication scheme):
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence for this feature, the idea would be to provide a custom implementation 
> of the Hadoop AuthenticationHandler and Authenticator interfaces that 
> supports both schemes - Kerberos (via a Negotiate auth challenge) and LDAP 
> (via a Basic auth challenge). During the authentication phase, it would send 
> both challenges and let the client pick the appropriate one. If the client 
> responds with an ‘Authorization’ header tagged with ‘Negotiate’, it will use 
> Kerberos authentication; if the client responds with an ‘Authorization’ 
> header tagged with ‘Basic’, it will use LDAP authentication.
> Note - some HTTP clients (e.g. curl or the Apache HttpClient Java library) 
> need to be configured to use one scheme over the other, e.g.
> - the curl tool supports options to use either Kerberos (via the --negotiate 
> flag) or username/password based authentication (via the --basic and -u 
> flags). 
> - the Apache HttpClient library can be configured to use a specific 
> authentication scheme:
> http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html
> Typically web browsers automatically choose an authentication scheme based 
> on a notion of “strength” of security; e.g. take a look at the [design of 
> Chrome browser for HTTP 
> authentication|https://www.chromium.org/developers/design-documents/http-authentication]
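
For illustration only, the dual-challenge exchange described above would look 
roughly like this (realm and token values are placeholders):

{noformat}
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Negotiate
WWW-Authenticate: Basic realm="LdapRealm"

A Kerberos-capable client then answers the Negotiate challenge:
  Authorization: Negotiate <base64 SPNEGO token>
An LDAP username/password client answers the Basic challenge:
  Authorization: Basic <base64 user:password>
{noformat}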




[jira] [Commented] (HADOOP-14773) Extend ZKCuratorManager API for more reusability

2017-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128138#comment-16128138
 ] 

Hudson commented on HADOOP-14773:
-

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #12194 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12194/])
HADOOP-14773. Extend ZKCuratorManager API for more reusability. (Íñigo Goiri 
via Subru) (subru: rev 75dd866bfb8b63cb9f13179d4365b05c48e0907d)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/curator/TestZKCuratorManager.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/curator/ZKCuratorManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java


> Extend ZKCuratorManager API for more reusability
> 
>
> Key: HADOOP-14773
> URL: https://issues.apache.org/jira/browse/HADOOP-14773
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14773-000.patch, HADOOP-14773-001.patch
>
>
> HDFS-10631 needs some minor changes in {{ZKCuratorManager}}:
> * {{boolean delete()}}
> * {{getData(path, stat)}}
> * {{createRootDirRecursively(path)}}
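
Roughly, the intended usage (a sketch assuming the signatures in the list 
above; the paths are illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.curator.ZKCuratorManager;
import org.apache.zookeeper.data.Stat;

public class ZkManagerSketch {
  public static void main(String[] args) throws Exception {
    ZKCuratorManager zkManager = new ZKCuratorManager(new Configuration());
    zkManager.start();
    // The three additions from the description:
    zkManager.createRootDirRecursively("/federation/router");
    Stat stat = new Stat();
    byte[] data = zkManager.getData("/federation/router/m1", stat);
    boolean deleted = zkManager.delete("/federation/router/m1");
    System.out.println("read " + data.length + " bytes, deleted=" + deleted);
  }
}
{code}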






[jira] [Commented] (HADOOP-14583) wasb throws an exception if you try to create a file and there's no parent directory

2017-08-15 Thread Thomas Marquardt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128133#comment-16128133
 ] 

Thomas Marquardt commented on HADOOP-14583:
---

I usually make targeted fixes, like you have done, to avoid regressions.  
However, in this case I think it would be better to update retrieveMetadata and 
fix the race condition between the calls to exists() and downloadAttributes().  
The retrieveMetadata method already handles the case where the blob does not 
exist, so if you catch the exception from downloadAttributes and observe that 
NativeAzureFileSystemHelper.isFileNotFoundException is true, you can allow the 
code to resume from the point it would resume if exists() returns false.
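
A minimal sketch of that restructuring, assuming the Azure SDK {{CloudBlob}} 
API and visibility of the existing helper (simplified; the real 
retrieveMetadata does more):

{code:java}
import com.microsoft.azure.storage.StorageException;
import com.microsoft.azure.storage.blob.BlobProperties;
import com.microsoft.azure.storage.blob.CloudBlob;
// Assumes visibility of the internal hadoop-azure helper class.
import org.apache.hadoop.fs.azure.NativeAzureFileSystemHelper;

public class MetadataSketch {
  // A blob deleted between exists() and downloadAttributes() is treated the
  // same as exists() returning false.
  static BlobProperties fetchAttributes(CloudBlob blob)
      throws StorageException {
    try {
      blob.downloadAttributes();
      return blob.getProperties();
    } catch (StorageException e) {
      if (NativeAzureFileSystemHelper.isFileNotFoundException(e)) {
        return null;  // resume on the "does not exist" path
      }
      throw e;
    }
  }
}
{code}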

If you still think the above is risky, the change you have looks good and fixes 
the issue with create().

> wasb throws an exception if you try to create a file and there's no parent 
> directory
> 
>
> Key: HADOOP-14583
> URL: https://issues.apache.org/jira/browse/HADOOP-14583
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-14583-001.patch
>
>
> It's a known defect of the Hadoop FS API (and one we don't explicitly test 
> for enough), but you can create a file on a path whose parent doesn't exist. 
> In that situation, the create() logic is expected to create the entries.
> Wasb appears to raise an exception if you try to call {{create(filepath)}} 
> without calling {{mkdirs(filepath.getParent())}} first. That's the semantics 
> expected of {{createNonRecursive()}}.






[jira] [Updated] (HADOOP-14773) Extend ZKCuratorManager API for more reusability

2017-08-15 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated HADOOP-14773:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-beta1
   2.9.0
   Status: Resolved  (was: Patch Available)

Thanks [~elgoiri] for your contribution; I have committed this to 
trunk/branch-2.

> Extend ZKCuratorManager API for more reusability
> 
>
> Key: HADOOP-14773
> URL: https://issues.apache.org/jira/browse/HADOOP-14773
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14773-000.patch, HADOOP-14773-001.patch
>
>
> HDFS-10631 needs some minor changes in {{ZKCuratorManager}}:
> * {{boolean delete()}}
> * {{getData(path, stat)}}
> * {{createRootDirRecursively(path)}}






[jira] [Updated] (HADOOP-14773) Extend ZKCuratorManager API for more reusability

2017-08-15 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated HADOOP-14773:

Summary: Extend ZKCuratorManager API for more reusability  (was: Extend 
ZKCuratorManager API)

> Extend ZKCuratorManager API for more reusability
> 
>
> Key: HADOOP-14773
> URL: https://issues.apache.org/jira/browse/HADOOP-14773
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14773-000.patch, HADOOP-14773-001.patch
>
>
> HDFS-10631 needs some minor changes in {{ZKCuratorManager}}:
> * {{boolean delete()}}
> * {{getData(path, stat)}}
> * {{createRootDirRecursively(path)}}






[jira] [Updated] (HADOOP-14583) wasb throws an exception if you try to create a file and there's no parent directory

2017-08-15 Thread Esfandiar Manii (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii updated HADOOP-14583:
-
Attachment: (was: HADOOP-14583-001.patch)

> wasb throws an exception if you try to create a file and there's no parent 
> directory
> 
>
> Key: HADOOP-14583
> URL: https://issues.apache.org/jira/browse/HADOOP-14583
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-14583-001.patch
>
>
> It's a known defect of the Hadoop FS API (and one we don't explicitly test 
> for enough), but you can create a file on a path whose parent doesn't exist. 
> In that situation, the create() logic is expected to create the entries.
> Wasb appears to raise an exception if you try to call {{create(filepath)}} 
> without calling {{mkdirs(filepath.getParent())}} first. That's the semantics 
> expected of {{createNonRecursive()}}.






[jira] [Updated] (HADOOP-14583) wasb throws an exception if you try to create a file and there's no parent directory

2017-08-15 Thread Esfandiar Manii (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii updated HADOOP-14583:
-
Attachment: HADOOP-14583-001.patch

> wasb throws an exception if you try to create a file and there's no parent 
> directory
> 
>
> Key: HADOOP-14583
> URL: https://issues.apache.org/jira/browse/HADOOP-14583
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-14583-001.patch
>
>
> It's a known defect of the Hadoop FS API (and one we don't explicitly test 
> for enough), but you can create a file on a path whose parent doesn't exist. 
> In that situation, the create() logic is expected to create the entries.
> Wasb appears to raise an exception if you try to call {{create(filepath)}} 
> without calling {{mkdirs(filepath.getParent())}} first. That's the semantics 
> expected of {{createNonRecursive()}}.






[jira] [Commented] (HADOOP-14583) wasb throws an exception if you try to create a file and there's no parent directory

2017-08-15 Thread Esfandiar Manii (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128044#comment-16128044
 ] 

Esfandiar Manii commented on HADOOP-14583:
--

This is not related to whether the parent directory exists. The issue is 
concurrency. If multiple threads keep creating and deleting the same file over 
and over, you can hit a scenario where:
ThreadA -> creates the file
ThreadB -> looks up the file and it exists
ThreadA -> removes the file
ThreadB -> looks up the metadata before creation and throws an exception

The logic that makes the metadata lookup safe was missing from the create 
function, while most of the other functions have it. I updated the code and 
added a test to ensure this won't happen with over 100 threads.

Please take a look at the patch and let me know if you have comments.
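
For illustration, the shape of such a test (a sketch, not the patch itself; 
{{fs}} would be the NativeAzureFileSystem instance from the test setup):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateDeleteRaceSketch {
  // 100 threads race create() against delete() on one path; with the fix,
  // no thread should see a spurious "not found" from the metadata lookup.
  // A real test would also collect the Futures and assert none failed.
  static void raceCreateAndDelete(FileSystem fs) throws Exception {
    Path path = new Path("/test/race");
    ExecutorService pool = Executors.newFixedThreadPool(100);
    for (int i = 0; i < 100; i++) {
      pool.submit(() -> {
        fs.create(path, true).close();
        fs.delete(path, false);
        return null;
      });
    }
    pool.shutdown();
    pool.awaitTermination(2, TimeUnit.MINUTES);
  }
}
{code}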


> wasb throws an exception if you try to create a file and there's no parent 
> directory
> 
>
> Key: HADOOP-14583
> URL: https://issues.apache.org/jira/browse/HADOOP-14583
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14583-001.patch
>
>
> It's a known defect of the Hadoop FS API (and one we don't explicitly test 
> for enough), but you can create a file on a path whose parent doesn't exist. 
> In that situation, the create() logic is expected to create the entries.
> Wasb appears to raise an exception if you try to call {{create(filepath)}} 
> without calling {{mkdirs(filepath.getParent())}} first. That's the semantics 
> expected of {{createNonRecursive()}}.






[jira] [Assigned] (HADOOP-14583) wasb throws an exception if you try to create a file and there's no parent directory

2017-08-15 Thread Esfandiar Manii (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii reassigned HADOOP-14583:


Assignee: Esfandiar Manii

> wasb throws an exception if you try to create a file and there's no parent 
> directory
> 
>
> Key: HADOOP-14583
> URL: https://issues.apache.org/jira/browse/HADOOP-14583
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-14583-001.patch
>
>
> It's a known defect of the Hadoop FS API (and one we don't explicitly test 
> for enough), but you can create a file on a path whose parent doesn't exist. 
> In that situation, the create() logic is expected to create the entries.
> Wasb appears to raise an exception if you try to call {{create(filepath)}} 
> without calling {{mkdirs(filepath.getParent())}} first. That's the semantics 
> expected of {{createNonRecursive()}}.






[jira] [Commented] (HADOOP-14583) wasb throws an exception if you try to create a file and there's no parent directory

2017-08-15 Thread Esfandiar Manii (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128048#comment-16128048
 ] 

Esfandiar Manii commented on HADOOP-14583:
--


{code:java}
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.hadoop.fs.azure.TestAzureConcurrentOutOfBandIo
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.299 sec - in 
org.apache.hadoop.fs.azure.TestAzureConcurrentOutOfBandIo
Running 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAuthorizationWithOwner
Tests run: 27, Failures: 0, Errors: 0, Skipped: 27, Time elapsed: 2.643 sec - 
in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAuthorizationWithOwner
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAtomicRenameDirList
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.891 sec - in 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAtomicRenameDirList
Running org.apache.hadoop.fs.azure.TestShellDecryptionKeyProvider
Tests run: 2, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 0.085 sec - in 
org.apache.hadoop.fs.azure.TestShellDecryptionKeyProvider
Running org.apache.hadoop.fs.azure.TestWasbFsck
Tests run: 2, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.704 sec - in 
org.apache.hadoop.fs.azure.TestWasbFsck
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked
Tests run: 43, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 1.138 sec - in 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked
Running org.apache.hadoop.fs.azure.TestContainerChecks
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.349 sec - in 
org.apache.hadoop.fs.azure.TestContainerChecks
Running org.apache.hadoop.fs.azure.TestNativeAzureFSPageBlobLive
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 199.932 sec - 
in org.apache.hadoop.fs.azure.TestNativeAzureFSPageBlobLive
Running org.apache.hadoop.fs.azure.TestBlobOperationDescriptor
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.918 sec - in 
org.apache.hadoop.fs.azure.TestBlobOperationDescriptor
Running org.apache.hadoop.fs.azure.TestBlockBlobInputStream
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 92.134 sec - 
in org.apache.hadoop.fs.azure.TestBlockBlobInputStream
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.767 sec - 
in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemLive
Tests run: 51, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 209.062 sec - 
in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemLive
Running org.apache.hadoop.fs.azure.TestWasbUriAndConfiguration
Tests run: 19, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 7.902 sec - in 
org.apache.hadoop.fs.azure.TestWasbUriAndConfiguration
Running org.apache.hadoop.fs.azure.TestBlobDataValidation
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.181 sec - in 
org.apache.hadoop.fs.azure.TestBlobDataValidation
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.868 sec - in 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency
Running org.apache.hadoop.fs.azure.TestBlobMetadata
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.742 sec - in 
org.apache.hadoop.fs.azure.TestBlobMetadata
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemClientLogging
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.375 sec - in 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemClientLogging
Running 
org.apache.hadoop.fs.azure.TestFileSystemOperationsExceptionHandlingMultiThreaded
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.124 sec - 
in 
org.apache.hadoop.fs.azure.TestFileSystemOperationsExceptionHandlingMultiThreaded
Running org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper
Tests run: 10, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 1.514 sec - 
in org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractLive
Tests run: 43, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 31.019 sec - 
in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractLive
Running org.apache.hadoop.fs.azure.contract.TestAzureNativeContractGetFileStatus
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.53 sec - in 
org.apache.hadoop.fs.azure.contract.TestAzureNativeContractGetFileStatus
Running org.apache.hadoop.fs.azure.contract.TestAzureNativeContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, 

[jira] [Updated] (HADOOP-14583) wasb throws an exception if you try to create a file and there's no parent directory

2017-08-15 Thread Esfandiar Manii (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii updated HADOOP-14583:
-
Attachment: HADOOP-14583-001.patch

> wasb throws an exception if you try to create a file and there's no parent 
> directory
> 
>
> Key: HADOOP-14583
> URL: https://issues.apache.org/jira/browse/HADOOP-14583
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14583-001.patch
>
>
> It's a known defect of the Hadoop FS API (and one we don't explicitly test 
> for enough), but you can create a file on a path whose parent doesn't exist. 
> In that situation, the create() logic is expected to create the entries.
> Wasb appears to raise an exception if you try to call {{create(filepath)}} 
> without calling {{mkdirs(filepath.getParent())}} first. That's the semantics 
> expected of {{createNonRecursive()}}.






[jira] [Commented] (HADOOP-14560) Make HttpServer2 backlog size configurable

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128020#comment-16128020
 ] 

Hadoop QA commented on HADOOP-14560:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 4 new + 94 unchanged - 0 fixed = 98 total (was 94) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 54s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14560 |
| GITHUB PR | https://github.com/apache/hadoop/pull/242 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 493fbcebdcc5 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d265459 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13040/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13040/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13040/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13040/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make HttpServer2 backlog size configurable
> --
>
> Key: HADOOP-14560
> URL: https://issues.apache.org/jira/browse/HADOOP-14560
> Project: Hadoop Common
>  Issue Type: 

[jira] [Comment Edited] (HADOOP-14773) Extend ZKCuratorManager API

2017-08-15 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127980#comment-16127980
 ] 

Íñigo Goiri edited comment on HADOOP-14773 at 8/15/17 11:02 PM:


{{TestRPC}} works fine on my machine, and {{TestContainerAllocation}} failed in 
other builds.
I don't think either of them is related to this patch.


was (Author: elgoiri):
{{TestRPC}} works fine in my machine.
{{TestContainerAllocation}} has compilation issues and it fails for trunk in my 
machine.
Bottom line, I don't think any of them are related to this patch.

> Extend ZKCuratorManager API
> ---
>
> Key: HADOOP-14773
> URL: https://issues.apache.org/jira/browse/HADOOP-14773
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14773-000.patch, HADOOP-14773-001.patch
>
>
> HDFS-10631 needs some minor changes in {{ZKCuratorManager}}:
> * {{boolean delete()}}
> * {{getData(path, stat)}}
> * {{createRootDirRecursively(path)}}






[jira] [Commented] (HADOOP-14769) WASB: delete recursive should not fail if a file is deleted

2017-08-15 Thread Shane Mainali (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128006#comment-16128006
 ] 

Shane Mainali commented on HADOOP-14769:


+1. Thanks [~tmarquardt]!

> WASB: delete recursive should not fail if a file is deleted
> ---
>
> Key: HADOOP-14769
> URL: https://issues.apache.org/jira/browse/HADOOP-14769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14769-001.patch
>
>
> FileSystem.delete(Path path) and delete(Path path, boolean recursive) return 
> false if the path does not exist.  The WASB implementation of recursive 
> delete currently fails if one of the entries is deleted by an external agent 
> while a recursive delete is in progress.  For example, if you try to delete 
> all of the files in a directory, which can be a very long process, and one of 
> the files contained within is deleted by an external agent, the recursive 
> directory delete operation will fail if it tries to delete that file and 
> discovers that it does not exist.  This is not desirable.  A recursive 
> directory delete operation should succeed if the directory initially exists 
> and when the operation completes, the directory and all of its entries do not 
> exist.
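
A minimal sketch of the desired semantics, using generic FileSystem calls 
rather than the actual WASB internals:

{code:java}
import java.io.FileNotFoundException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TolerantDeleteSketch {
  // During a recursive delete, an entry removed out from under us by an
  // external agent is treated as already gone, not as a failure.
  static void deleteRecursiveTolerantly(FileSystem fs, Path dir)
      throws Exception {
    for (FileStatus entry : fs.listStatus(dir)) {
      try {
        fs.delete(entry.getPath(), true);
      } catch (FileNotFoundException e) {
        // Someone else deleted it first; the end state is the same.
      }
    }
    fs.delete(dir, false);  // remove the now-empty directory
  }
}
{code}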






[jira] [Commented] (HADOOP-14773) Extend ZKCuratorManager API

2017-08-15 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127980#comment-16127980
 ] 

Íñigo Goiri commented on HADOOP-14773:
--

{{TestRPC}} works fine on my machine.
{{TestContainerAllocation}} has compilation issues and fails for trunk on my 
machine.
Bottom line: I don't think either of them is related to this patch.

> Extend ZKCuratorManager API
> ---
>
> Key: HADOOP-14773
> URL: https://issues.apache.org/jira/browse/HADOOP-14773
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14773-000.patch, HADOOP-14773-001.patch
>
>
> HDFS-10631 needs some minor changes in {{ZKCuratorManager}}:
> * {{boolean delete()}}
> * {{getData(path, stat)}}
> * {{createRootDirRecursively(path)}}






[jira] [Commented] (HADOOP-13998) Merge initial S3guard release into trunk

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127956#comment-16127956
 ] 

Hadoop QA commented on HADOOP-13998:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 59 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 57s{color} 
| {color:red} root generated 2 new + 1316 unchanged - 1 fixed = 1318 total (was 
1317) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  8s{color} | {color:orange} root: The patch generated 4 new + 205 unchanged 
- 4 fixed = 209 total (was 209) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
22s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
11s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-assemblies in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 32s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| 

[jira] [Commented] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-15 Thread Jordan Zimmerman (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127925#comment-16127925
 ] 

Jordan Zimmerman commented on HADOOP-14741:
---

As the main author of Curator, I'm happy to help if needed.

> Refactor curator based ZooKeeper communication into common library
> --
>
> Key: HADOOP-14741
> URL: https://issues.apache.org/jira/browse/HADOOP-14741
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Subru Krishnan
>Assignee: Íñigo Goiri
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14741-000.patch, HADOOP-14741-001.patch, 
> HADOOP-14741-002.patch, HADOOP-14741-003.patch, HADOOP-14741-004.patch, 
> HADOOP-14741-005.patch, HADOOP-14741-branch-2-001.patch, 
> HADOOP-14741-branch-2-002.patch, HADOOP-14741-branch-2.patch
>
>
> Currently we have ZooKeeper based store implementations for multiple state 
> stores like RM, YARN Federation, HDFS router-based federation, RM queue 
> configs etc. This jira proposes to unify the curator based ZK communication 
> to eliminate redundancies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14773) Extend ZKCuratorManager API

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127921#comment-16127921
 ] 

Hadoop QA commented on HADOOP-14773:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} root: The patch generated 0 new + 1 unchanged - 1 
fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 15s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 18s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestRPC |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14773 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881994/HADOOP-14773-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 10549a3eefed 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dadb0c2 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13037/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13037/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 

[jira] [Updated] (HADOOP-14560) Make HttpServer2 backlog size configurable

2017-08-15 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14560:

Attachment: HADOOP-14560.002.patch

Patch 002
* Rebase Alex's PR and attach it here in order to run Yetus
* Fix a few potential checkstyle issues, e.g., lines exceeding 80 chars
* Rename the property to {{hadoop.http.socket.backlog.size}} to be consistent 
with the variable name
* Move the code around a little bit

+1 LGTM
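
For illustration, a minimal sketch of how such a property might be wired up, 
assuming Jetty's {{ServerConnector}} and the property name mentioned above; the 
default mirrors the current hardcoded value, and this is not necessarily how 
HttpServer2 itself plumbs it:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class BacklogSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Read the proposed property, keeping the old hardcoded 128 as default.
    int backlog = conf.getInt("hadoop.http.socket.backlog.size", 128);

    Server server = new Server();
    ServerConnector connector = new ServerConnector(server);
    connector.setAcceptQueueSize(backlog);  // previously hardcoded to 128
    connector.setPort(8080);
    server.addConnector(connector);
    server.start();
  }
}
{code}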

> Make HttpServer2 backlog size configurable
> --
>
> Key: HADOOP-14560
> URL: https://issues.apache.org/jira/browse/HADOOP-14560
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Alexander Krasheninnikov
>Assignee: Alexander Krasheninnikov
>  Labels: webhdfs
> Attachments: HADOOP-14560.002.patch
>
>
> While operating WebHDFS at Badoo, we've faced an issue: the hardcoded socket 
> backlog size (128) is not enough for our purposes.
> When performing ~600 concurrent requests, clients receive "Connection 
> refused" errors.
> We are proposing a patch to make this backlog size configurable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14776) clean up ITestS3AFileSystemContract

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127902#comment-16127902
 ] 

Hadoop QA commented on HADOOP-14776:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14776 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882007/HADOOP-14776.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8217c212126c 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d265459 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13039/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13039/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> clean up ITestS3AFileSystemContract
> ---
>
> Key: HADOOP-14776
> URL: https://issues.apache.org/jira/browse/HADOOP-14776
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14776.01.patch
>
>
> With the move of {{FileSystemContractTest}} test to JUnit4, the bits of 
> {{ITestS3AFileSystemContract}} which override existing methods just to skip 
> them can be cleaned up: The subclasses could throw assume() so their skippage 
> gets noted.

[jira] [Updated] (HADOOP-13998) Merge initial S3guard release into trunk

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13998:

Status: Patch Available  (was: Open)

> Merge initial S3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch, 
> HADOOP-13998-003.patch, HADOOP-13998-004.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13998) Merge initial S3guard release into trunk

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13998:

Status: Open  (was: Patch Available)

> Merge initial S3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch, 
> HADOOP-13998-003.patch, HADOOP-13998-004.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14560) Make HttpServer2 backlog size configurable

2017-08-15 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14560:

Summary: Make HttpServer2 backlog size configurable  (was: Make HttpServer2 
accept queue size configurable)

> Make HttpServer2 backlog size configurable
> --
>
> Key: HADOOP-14560
> URL: https://issues.apache.org/jira/browse/HADOOP-14560
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Alexander Krasheninnikov
>Assignee: Alexander Krasheninnikov
>  Labels: webhdfs
>
> While operating WebHDFS at Badoo, we've faced an issue: the hardcoded socket 
> backlog size (128) is not enough for our purposes.
> When performing ~600 concurrent requests, clients receive "Connection 
> refused" errors.
> We are proposing a patch to make this backlog size configurable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14776) clean up ITestS3AFileSystemContract

2017-08-15 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14776:

Status: Patch Available  (was: Open)

> clean up ITestS3AFileSystemContract
> ---
>
> Key: HADOOP-14776
> URL: https://issues.apache.org/jira/browse/HADOOP-14776
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14776.01.patch
>
>
> With the move of {{FileSystemContractTest}} test to JUnit4, the bits of 
> {{ITestS3AFileSystemContract}} which override existing methods just to skip 
> them can be cleaned up: The subclasses could throw assume() so their skippage 
> gets noted.
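
For illustration, a minimal sketch of the assume()-based skip the description 
suggests, using JUnit 4's {{Assume}}; the test name is made up for the example 
and is not one of the real contract methods:

{code:java}
import org.junit.Assume;
import org.junit.Test;

public class SkipExampleTest {
  // Instead of overriding a contract test with an empty body, fail the
  // assumption so the runner reports the case as skipped, not passed.
  @Test
  public void testUnsupportedRenameCase() {
    Assume.assumeTrue("S3A handles this rename case differently", false);
  }
}
{code}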



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14776) clean up ITestS3AFileSystemContract

2017-08-15 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14776:

Attachment: HADOOP-14776.01.patch

> clean up ITestS3AFileSystemContract
> ---
>
> Key: HADOOP-14776
> URL: https://issues.apache.org/jira/browse/HADOOP-14776
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14776.01.patch
>
>
> With the move of {{FileSystemContractTest}} test to JUnit4, the bits of 
> {{ITestS3AFileSystemContract}} which override existing methods just to skip 
> them can be cleaned up: The subclasses could throw assume() so their skippage 
> gets noted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14769) WASB: delete recursive should not fail if a file is deleted

2017-08-15 Thread Esfandiar Manii (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127831#comment-16127831
 ] 

Esfandiar Manii commented on HADOOP-14769:
--

+1 with a few comments:

AzureNativeFileSystemStore.java L2503-2505: Not sure how much we want to invest 
in this, but this code is repeated in many places; I wish there were a single 
method doing it.

NativeAzureFileSystem.java L2099-2108: instead of nested ifs, please rewrite it 
along these lines (for better code clarity):
{code:java}
if (!store.delete(path)) {
  return false;
}

if (isDir) {
  // directory case
} else {
  // file case
}

return true;
{code}

TestFileSystemOperationsWithThreads.java L592-594: nit: please fix indentation.

> WASB: delete recursive should not fail if a file is deleted
> ---
>
> Key: HADOOP-14769
> URL: https://issues.apache.org/jira/browse/HADOOP-14769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14769-001.patch
>
>
> FileSystem.delete(Path path) and delete(Path path, boolean recursive) return 
> false if the path does not exist.  The WASB implementation of recursive 
> delete currently fails if one of the entries is deleted by an external agent 
> while a recursive delete is in progress.  For example, if you try to delete 
> all of the files in a directory, which can be a very long process, and one of 
> the files contained within is deleted by an external agent, the recursive 
> directory delete operation will fail if it tries to delete that file and 
> discovers that it does not exist.  This is not desirable.  A recursive 
> directory delete operation should succeed if the directory initially exists 
> and when the operation completes, the directory and all of its entries do not 
> exist.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13998) Merge initial S3guard release into trunk

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13998:

Summary: Merge initial S3guard release into trunk  (was: Merge initial 
s3guard release into trunk)

> Merge initial S3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch, 
> HADOOP-13998-003.patch, HADOOP-13998-004.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14749) Review S3guard docs & code prior to merge

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14749:

Summary: Review S3guard docs & code prior to merge  (was: review s3guard 
docs & code prior to merge)

> Review S3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-14749-HADOOP-13345-001.patch, 
> HADOOP-14749-HADOOP-13345-002.patch, HADOOP-14749-HADOOP-13345-003.patch, 
> HADOOP-14749-HADOOP-13345-004.patch, HADOOP-14749-HADOOP-13345-005.patch, 
> HADOOP-14749-HADOOP-13345-006.patch, HADOOP-14749-HADOOP-13345-007.patch, 
> HADOOP-14749-HADOOP-13345-008.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14773) Extend ZKCuratorManager API

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127802#comment-16127802
 ] 

Hadoop QA commented on HADOOP-14773:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 18s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
55s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestRPC |
|   | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14773 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881843/HADOOP-14773-000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d628ace0b2d3 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dadb0c2 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13036/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13036/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13036/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Extend ZKCuratorManager API
> ---
>
> Key: HADOOP-14773
> URL: https://issues.apache.org/jira/browse/HADOOP-14773
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14773-000.patch, HADOOP-14773-001.patch
>
>
> HDFS-10631 needs some minor changes in {{ZKCuratorManager}}:
> * {{boolean delete()}}
> * {{getData(path, stat)}}
> * {{createRootDirRecursively(path)}}

[jira] [Commented] (HADOOP-14732) ProtobufRpcEngine should use Time.monotonicNow to measure durations

2017-08-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127797#comment-16127797
 ] 

Steve Loughran commented on HADOOP-14732:
-

TestRPC appears to be failing consistently on Jenkins now. Either this change 
has broken something, or the test is brittle to time measurements and so fails 
on VMs whose clocks are always a bit jittery.

Can you have a look? Thanks
{code}
org.mockito.exceptions.verification.junit.ArgumentsAreDifferent: 
Argument(s) are different! Wanted:
metricsRecordBuilder.addGauge(
Info with name=RpcQueueTime1s50thPercentileLatency,
geq(0)
);
-> at 
org.apache.hadoop.test.MetricsAsserts.assertQuantileGauges(MetricsAsserts.java:382)
Actual invocation has different arguments:
metricsRecordBuilder.addGauge(
MetricsInfoImpl{name=RpcQueueTime1s50thPercentileLatency, description=50 
percentile latency with 1 second interval for rpc queue time in milli second},
-324869768
);
-> at 
org.apache.hadoop.metrics2.lib.MutableQuantiles.snapshot(MutableQuantiles.java:124)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.test.MetricsAsserts.assertQuantileGauges(MetricsAsserts.java:382)
at org.apache.hadoop.ipc.TestRPC.testRpcMetrics(TestRPC.java:106)
{code}
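
For reference, the negative gauge value above is the signature of timing with a 
wall clock that stepped backwards. A minimal sketch of the monotonic pattern 
the JIRA title asks for, using Hadoop's {{Time}} utility; the sleep is a 
stand-in for the timed RPC work:

{code:java}
import org.apache.hadoop.util.Time;

public class MonotonicTimingSketch {
  public static void main(String[] args) throws InterruptedException {
    // Wall-clock time (System.currentTimeMillis / Time.now) can step
    // backwards under NTP adjustment, yielding negative "durations" like
    // the gauge value above; the monotonic clock cannot.
    long start = Time.monotonicNow();
    Thread.sleep(5);  // stand-in for the timed work
    long elapsedMillis = Time.monotonicNow() - start;  // always >= 0
    System.out.println("elapsed ms: " + elapsedMillis);
  }
}
{code}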

> ProtobufRpcEngine should use Time.monotonicNow to measure durations
> ---
>
> Key: HADOOP-14732
> URL: https://issues.apache.org/jira/browse/HADOOP-14732
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14732.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-14732) ProtobufRpcEngine should use Time.monotonicNow to measure durations

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-14732:
-

> ProtobufRpcEngine should use Time.monotonicNow to measure durations
> ---
>
> Key: HADOOP-14732
> URL: https://issues.apache.org/jira/browse/HADOOP-14732
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14732.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13998) Merge initial s3guard release into trunk

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13998:

Status: Patch Available  (was: Open)

> Merge initial s3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch, 
> HADOOP-13998-003.patch, HADOOP-13998-004.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13998) Merge initial s3guard release into trunk

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13998:

Attachment: HADOOP-13998-004.patch

patch 004

As 003, except:
* compiles against Java 7
* provides better diagnostics in tests when the local DDB server doesn't come 
up, by not losing exception text

This makes it a lot closer to a branch-2 patch, which is essentially this plus 
the classpath fixup (existing work by [~liuml07]).

> Merge initial s3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch, 
> HADOOP-13998-003.patch, HADOOP-13998-004.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13998) Merge initial s3guard release into trunk

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13998:

Status: Open  (was: Patch Available)

> Merge initial s3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch, 
> HADOOP-13998-003.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14773) Extend ZKCuratorManager API

2017-08-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-14773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-14773:
-
Attachment: HADOOP-14773-001.patch

Tweaked {{ZKRMStateStore}}. I'll leave {{CuratorService}} for another JIRA as 
it has some complexity.
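
For context, a sketch of how the methods listed in the description below might 
look; the signatures are assumptions based on the bullet list, not the 
committed API:

{code:java}
import org.apache.curator.framework.CuratorFramework;
import org.apache.zookeeper.data.Stat;

class ZKCuratorManagerSketch {
  private final CuratorFramework curator;

  ZKCuratorManagerSketch(CuratorFramework curator) {
    this.curator = curator;
  }

  /** @return true iff the znode existed and was deleted. */
  boolean delete(String path) throws Exception {
    if (curator.checkExists().forPath(path) == null) {
      return false;  // nothing to delete
    }
    curator.delete().deletingChildrenIfNeeded().forPath(path);
    return true;
  }

  /** Reads data, filling the caller's Stat with the znode's metadata. */
  byte[] getData(String path, Stat stat) throws Exception {
    return curator.getData().storingStatIn(stat).forPath(path);
  }
}
{code}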

> Extend ZKCuratorManager API
> ---
>
> Key: HADOOP-14773
> URL: https://issues.apache.org/jira/browse/HADOOP-14773
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14773-000.patch, HADOOP-14773-001.patch
>
>
> HDFS-10631 needs some minor changes in {{ZKCuratorManager}}:
> * {{boolean delete()}}
> * {{getData(path, stat)}}
> * {{createRootDirRecursively(path)}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14775) Change junit dependency in parent pom file to junit 5 while maintaining backward compatibility to junit4.

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14775:

Component/s: build
 Issue Type: Improvement  (was: Task)

> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 
> --
>
> Key: HADOOP-14775
> URL: https://issues.apache.org/jira/browse/HADOOP-14775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>  Labels: junit5
>
> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14773) Extend ZKCuratorManager API

2017-08-15 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126640#comment-16126640
 ] 

Subru Krishnan edited comment on HADOOP-14773 at 8/15/17 6:30 PM:
--

Thanks [~elgoiri] for the patch. It looks fairly straightforward; one minor 
comment - can you update {{ZKRMStateStore/CuratorService}} etc. to use the 
{{ZKCuratorManager}}?

Otherwise +1 (pending Yetus).




was (Author: subru):
Thanks [~elgoiri] for the patch. It looks fairly straightforward, +1 (pending 
Yetus).

> Extend ZKCuratorManager API
> ---
>
> Key: HADOOP-14773
> URL: https://issues.apache.org/jira/browse/HADOOP-14773
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14773-000.patch
>
>
> HDFS-10631 needs some minor changes in {{ZKCuratorManager}}:
> * {{boolean delete()}}
> * {{getData(path, stat)}}
> * {{createRootDirRecursively(path)}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14773) Extend ZKCuratorManager API

2017-08-15 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated HADOOP-14773:

Status: Patch Available  (was: Open)

> Extend ZKCuratorManager API
> ---
>
> Key: HADOOP-14773
> URL: https://issues.apache.org/jira/browse/HADOOP-14773
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14773-000.patch
>
>
> HDFS-10631 needs some minor changes in {{ZKCuratorManager}}:
> * {{boolean delete()}}
> * {{getData(path, stat)}}
> * {{createRootDirRecursively(path)}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14769) WASB: delete recursive should not fail if a file is deleted

2017-08-15 Thread Thomas Marquardt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127651#comment-16127651
 ] 

Thomas Marquardt commented on HADOOP-14769:
---

This patch fixes issues encountered with HBase, where it calls WASB and WASB 
fails to delete a directory recursively.

I think we agree that recursive delete should return true if 1) the directory 
exists and 2) the directory and its contents are successfully deleted. It should 
return true when those conditions are met, even if one of the child entries did 
not exist when an attempt was made to delete it.
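
To make those semantics concrete, a minimal sketch (illustrative only, not the 
attached patch) of a recursive delete that tolerates concurrently removed 
entries:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class TolerantDeleteSketch {
  // Delete dir recursively, tolerating children removed concurrently by an
  // external agent; a real implementation would also tolerate
  // FileNotFoundException from listStatus for the same reason.
  static boolean deleteRecursive(FileSystem fs, Path dir)
      throws IOException {
    for (FileStatus entry : fs.listStatus(dir)) {
      if (entry.isDirectory()) {
        deleteRecursive(fs, entry.getPath());
      } else {
        // Ignore a false return: the file may already be gone, and all
        // that matters is that it no longer exists afterwards.
        fs.delete(entry.getPath(), false);
      }
    }
    return fs.delete(dir, false);  // remove the now-empty directory
  }
}
{code}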

> WASB: delete recursive should not fail if a file is deleted
> ---
>
> Key: HADOOP-14769
> URL: https://issues.apache.org/jira/browse/HADOOP-14769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14769-001.patch
>
>
> FileSystem.delete(Path path) and delete(Path path, boolean recursive) return 
> false if the path does not exist.  The WASB implementation of recursive 
> delete currently fails if one of the entries is deleted by an external agent 
> while a recursive delete is in progress.  For example, if you try to delete 
> all of the files in a directory, which can be a very long process, and one of 
> the files contained within is deleted by an external agent, the recursive 
> directory delete operation will fail if it tries to delete that file and 
> discovers that it does not exist.  This is not desirable.  A recursive 
> directory delete operation should succeed if the directory initially exists 
> and when the operation completes, the directory and all of its entries do not 
> exist.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14660) wasb: improve throughput by 34% when account limit exceeded

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127644#comment-16127644
 ] 

Hadoop QA commented on HADOOP-14660:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
0s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
50s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
6s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
31s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
32s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 32s{color} | {color:orange} root: The patch generated 2 new + 3 unchanged - 
22 fixed = 5 total (was 25) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
44s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
27s{color} | {color:green} hadoop-azure in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | HADOOP-14660 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881958/HADOOP-14660-branch-2-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |

[jira] [Commented] (HADOOP-13998) Merge initial s3guard release into trunk

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127597#comment-16127597
 ] 

Hadoop QA commented on HADOOP-13998:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 59 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
59s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m  2s{color} 
| {color:red} root generated 2 new + 1316 unchanged - 1 fixed = 1318 total (was 
1317) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 30s{color} | {color:orange} root: The patch generated 4 new + 205 unchanged 
- 4 fixed = 209 total (was 209) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
24s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
10s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-assemblies in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 30s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| 

[jira] [Commented] (HADOOP-14738) Deprecate S3N in Hadoop 3.0/2.9, target removal in Hadoop 3.1

2017-08-15 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127599#comment-16127599
 ] 

Aaron Fabbri commented on HADOOP-14738:
---

I'm happy to put up a patch for this once we have consensus.

{quote}
I could see an argument for removal without a deprecation cycle if S3N is a 
high maintenance burden, we have high confidence that no one uses it, and the 
same data remains accessible via S3A.
{quote}
I think #1 and #3 are true, but I don't have confidence that no one uses it.
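
For reference, the user-visible migration under discussion is mostly a URL 
scheme and credential-key switch; a minimal sketch with values elided:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

class S3nToS3aSketch {
  static void migrate(Configuration conf) {
    // The same object becomes reachable by switching the URL scheme:
    Path before = new Path("s3n://bucket/dataset");  // S3N (deprecated)
    Path after = new Path("s3a://bucket/dataset");   // S3A replacement

    // S3N read fs.s3n.awsAccessKeyId / fs.s3n.awsSecretAccessKey;
    // S3A reads these keys instead (values elided):
    conf.set("fs.s3a.access.key", "...");
    conf.set("fs.s3a.secret.key", "...");
  }
}
{code}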

> Deprecate S3N in Hadoop 3.0/2.9, target removal in Hadoop 3.1
> -
>
> Key: HADOOP-14738
> URL: https://issues.apache.org/jira/browse/HADOOP-14738
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Priority: Blocker
>
> We are all happy with S3A; it's been stable since Hadoop 2.7 and high-perf 
> since Hadoop 2.8.
> It's now time to kill S3N off: remove the source, the tests, the transitive 
> dependencies.
> I propose that in Hadoop 3.0 beta we warn people off using it, and link 
> to a doc page (wiki?) about how to migrate (change URLs, update config options).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14652) Update metrics-core version

2017-08-15 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127587#comment-16127587
 ] 

Ray Chiang commented on HADOOP-14652:
-

Note that version 3.2.4 of the library has been released.

> Update metrics-core version
> ---
>
> Key: HADOOP-14652
> URL: https://issues.apache.org/jira/browse/HADOOP-14652
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14652.001.patch
>
>
> The current artifact is:
> com.codahale.metrics:metrics-core:3.0.1
> That version could either be bumped to 3.0.2 (the latest of that line), or 
> updated to the latest artifact:
> io.dropwizard.metrics:metrics-core:3.2.3



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14776) clean up ITestS3AFileSystemContract

2017-08-15 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HADOOP-14776:
---

Assignee: Ajay Kumar

> clean up ITestS3AFileSystemContract
> ---
>
> Key: HADOOP-14776
> URL: https://issues.apache.org/jira/browse/HADOOP-14776
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
>
> With the move of {{FileSystemContractTest}} test to JUnit4, the bits of 
> {{ITestS3AFileSystemContract}} which override existing methods just to skip 
> them can be cleaned up: The subclasses could throw assume() so their skippage 
> gets noted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14776) clean up ITestS3AFileSystemContract

2017-08-15 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14776:
---

 Summary: clean up ITestS3AFileSystemContract
 Key: HADOOP-14776
 URL: https://issues.apache.org/jira/browse/HADOOP-14776
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 2.9.0
Reporter: Steve Loughran
Priority: Minor


With the move of {{FileSystemContractTest}} test to JUnit4, the bits of 
{{ITestS3AFileSystemContract}} which override existing methods just to skip 
them can be cleaned up: The subclasses could throw assume() so their skippage 
gets noted.





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14738) Deprecate S3N in Hadoop 3.0/2.9, target removal in Hadoop 3.1

2017-08-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127472#comment-16127472
 ] 

Andrew Wang commented on HADOOP-14738:
--

Thanks Steve. A strict interpretation of the compat guidelines would say 
deprecate in 3.0 and remove in 4.0. Is this what we're planning?

I could see an argument for removal without a deprecation cycle if S3N is a 
high maintenance burden, we have high confidence that no one uses it, and the 
same data remains accessible via S3A.

> Deprecate S3N in Hadoop 3.0/2.9, target removal in Hadoop 3.1
> -
>
> Key: HADOOP-14738
> URL: https://issues.apache.org/jira/browse/HADOOP-14738
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Priority: Blocker
>
> We are all happy with S3A; it's been stable since Hadoop 2.7 and high-perf 
> since Hadoop 2.8.
> It's now time to kill S3N off: remove the source, the tests, the transitive 
> dependencies.
> I propose that in Hadoop 3.0 beta we warn people off using it, and link 
> to a doc page (wiki?) about how to migrate (change URLs, update config options).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14775) Change junit dependency in parent pom file to junit 5 while maintaining backward compatibility to junit4.

2017-08-15 Thread Ajay Kumar (JIRA)
Ajay Kumar created HADOOP-14775:
---

 Summary: Change junit dependency in parent pom file to junit 5 
while maintaining backward compatibility to junit4. 
 Key: HADOOP-14775
 URL: https://issues.apache.org/jira/browse/HADOOP-14775
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0-alpha4
Reporter: Ajay Kumar
Assignee: Ajay Kumar


Change junit dependency in parent pom file to junit 5 while maintaining 
backward compatibility to junit4. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14774) S3A case "testRandomReadOverBuffer" failed due to improper range parameter

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14774:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-13204

> S3A case "testRandomReadOverBuffer" failed due to improper range parameter
> --
>
> Key: HADOOP-14774
> URL: https://issues.apache.org/jira/browse/HADOOP-14774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Hadoop 2.8.0  
> s3-compatible storage 
>Reporter: Yonger
>Assignee: Yonger
>Priority: Minor
>
> {code:java}
> Running org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.821 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> testRandomReadOverBuffer(org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance)
>   Time elapsed: 2.605 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<8192> but was:<8193>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.testRandomReadOverBuffer(ITestS3AInputStreamPerformance.java:533)
> {code}
> From the log, the content length exceeds what we expect:
> {code:java}
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(280)) - >> GET 
> /test-aws-s3a/test/testReadOverBuffer.bin HTTP/1.1
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Host: 10.0.2.254
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
> x-amz-content-sha256: 
> e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Authorization: 
> AWS4-HMAC-SHA256 
> Credential=JFDAM9KF9IY8S5P0JIV6/20170815/us-east-1/s3/aws4_request, 
> SignedHeaders=content-type;host;range;user-agent;x-amz-content-sha256;x-amz-date,
>  Signature=42bce4a43d2b1bf6e6d599613c60812e6716514da4ef5b3839ef0566c31279ee
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> X-Amz-Date: 
> 20170815T085316Z
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> User-Agent: Hadoop 
> 2.8.0, aws-sdk-java/1.10.6 Linux/3.10.0-514.21.2.el7.x86_64 
> Java_HotSpot(TM)_64-Bit_Server_VM/25.131-b11/1.8.0_131
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Range: bytes=0-8192
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Content-Type: 
> application/x-www-form-urlencoded; charset=utf-8
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Connection: 
> Keep-Alive
> 2017-08-15 16:53:16,473 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "HTTP/1.1 206 Partial Content[\r][\n]"
> 2017-08-15 16:53:16,475 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Range: bytes 0-8192/32768[\r][\n]"
> 2017-08-15 16:53:16,476 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Length: 8193[\r][\n]"
> 2017-08-15 16:53:16,477 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Accept-Ranges: bytes[\r][\n]"
> 2017-08-15 16:53:16,478 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Last-Modified: Tue, 15 Aug 2017 08:51:39 
> GMT[\r][\n]"
> 2017-08-15 16:53:16,479 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "ETag: "

[jira] [Updated] (HADOOP-14660) wasb: improve throughput by 34% when account limit exceeded

2017-08-15 Thread Thomas Marquardt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-14660:
--
Attachment: HADOOP-14660-branch-2-001.patch

Re-attaching HADOOP-14660-branch-2-001.patch for another QA pass now that 
dependency HADOOP-14662 is committed.

All hadoop-azure tests passed against my tmarql3 endpoint.

Tests run: 736, Failures: 0, Errors: 0, Skipped: 95

> wasb: improve throughput by 34% when account limit exceeded
> ---
>
> Key: HADOOP-14660
> URL: https://issues.apache.org/jira/browse/HADOOP-14660
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14660-001.patch, HADOOP-14660-002.patch, 
> HADOOP-14660-003.patch, HADOOP-14660-004.patch, HADOOP-14660-005.patch, 
> HADOOP-14660-006.patch, HADOOP-14660-007.patch, HADOOP-14660-008.patch, 
> HADOOP-14660-010.patch, HADOOP-14660-branch-2-001.patch
>
>
> Big data workloads frequently exceed the Azure Storage max ingress and egress 
> limits 
> (https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits).  
> For example, the max ingress limit for a GRS account in the United States is 
> currently 10 Gbps.  When the limit is exceeded, the Azure Storage service 
> fails a percentage of incoming requests, and this causes the client to 
> initiate the retry policy.  The retry policy delays requests by sleeping, but 
> the sleep duration is independent of the client throughput and account limit. 
>  This results in low throughput, due to the high number of failed requests 
> and thrashing caused by the retry policy.
> To fix this, we introduce a client-side throttle which minimizes failed 
> requests and maximizes throughput.  Tests have shown that this improves 
> throughput by ~34% when the storage account max ingress and/or egress limits 
> are exceeded. 
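
A hedged sketch of the throttling idea described above, not the attached patch; 
every name and constant below is illustrative:

{code:java}
// Adaptive client-side throttle: track the recent rate of throttled
// (failed) responses and delay new requests in proportion, so the send
// rate converges on the account limit instead of oscillating through
// the blind retry/sleep policy.
public class ClientSideThrottle {
  private volatile double failureRate; // exponentially weighted, in [0, 1]

  /** Record the outcome of a completed request. */
  public void onResponse(boolean wasThrottled) {
    failureRate = 0.9 * failureRate + (wasThrottled ? 0.1 : 0.0);
  }

  /** Call before sending; sleeps in proportion to the failure rate. */
  public void beforeRequest() throws InterruptedException {
    long delayMs = (long) (failureRate * 1000); // scale is illustrative
    if (delayMs > 0) {
      Thread.sleep(delayMs);
    }
  }
}
{code}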



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14660) wasb: improve throughput by 34% when account limit exceeded

2017-08-15 Thread Thomas Marquardt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-14660:
--
Attachment: (was: HADOOP-14660-branch-2.patch)

> wasb: improve throughput by 34% when account limit exceeded
> ---
>
> Key: HADOOP-14660
> URL: https://issues.apache.org/jira/browse/HADOOP-14660
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14660-001.patch, HADOOP-14660-002.patch, 
> HADOOP-14660-003.patch, HADOOP-14660-004.patch, HADOOP-14660-005.patch, 
> HADOOP-14660-006.patch, HADOOP-14660-007.patch, HADOOP-14660-008.patch, 
> HADOOP-14660-010.patch
>
>
> Big data workloads frequently exceed the Azure Storage max ingress and egress 
> limits 
> (https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits).  
> For example, the max ingress limit for a GRS account in the United States is 
> currently 10 Gbps.  When the limit is exceeded, the Azure Storage service 
> fails a percentage of incoming requests, and this causes the client to 
> initiate the retry policy.  The retry policy delays requests by sleeping, but 
> the sleep duration is independent of the client throughput and account limit. 
>  This results in low throughput, due to the high number of failed requests 
> and thrashing caused by the retry policy.
> To fix this, we introduce a client-side throttle which minimizes failed 
> requests and maximizes throughput.  Tests have shown that this improves 
> throughput by ~34% when the storage account max ingress and/or egress limits 
> are exceeded. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13998) Merge initial s3guard release into trunk

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13998:

Attachment: HADOOP-13998-003.patch

HADOOP-13998 patch 003
* checkstyle issues
* a bit more IDE cleanup (inc /** -> /* in top comment)
* TestPathMetadataDynamoDBTranslation -> callable from lambda
* TestS3GuardConcurrentOps made java 7 friendly

My IDE was confused and thought it was Java 7, which helped find a couple of 
Java 8 bits in the tests. Fixed them for ease of backporting this to 2.9.

Identified a couple of issues we should look at/clarify

# {{LocalMetadataStore.prune()}} is modifying the dirhash with put() during 
the iteration over it. Is it safe to do that? It may be better to build the 
list of entries to add and apply them after that initial iteration; see the 
sketch after this list.

# DynamoDbClientFactory should be able to pick up StringUtils.join, either the 
hadoop one or one of the commons-lang ones.
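
For point 1, a minimal sketch of the collect-then-apply pattern (the field and 
helper names are assumptions, not the actual {{LocalMetadataStore}} code):

{code:java}
// Mutating the map with put() while iterating its entry set risks a
// ConcurrentModificationException; collect the updates first and apply
// them once the iteration is done.
Map<Path, DirListingMetadata> updates = new HashMap<>();
for (Map.Entry<Path, DirListingMetadata> entry : dirHash.entrySet()) {
  DirListingMetadata pruned = pruneExpired(entry.getValue()); // hypothetical
  if (pruned != entry.getValue()) {
    updates.put(entry.getKey(), pruned);
  }
}
dirHash.putAll(updates);
{code}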


> Merge initial s3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch, 
> HADOOP-13998-003.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13998) Merge initial s3guard release into trunk

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13998:

Status: Patch Available  (was: Open)

> Merge initial s3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch, 
> HADOOP-13998-003.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13998) Merge initial s3guard release into trunk

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13998:

Status: Open  (was: Patch Available)

> Merge initial s3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14771) hadoop-client does not include hadoop-yarn-client

2017-08-15 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated HADOOP-14771:

Summary: hadoop-client does not include hadoop-yarn-client  (was: 
hadoop-common does not include hadoop-yarn-client)

> hadoop-client does not include hadoop-yarn-client
> -
>
> Key: HADOOP-14771
> URL: https://issues.apache.org/jira/browse/HADOOP-14771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Haibo Chen
>Priority: Critical
>
> The hadoop-client does not include hadoop-yarn-client, thus, the shared 
> hadoop-client is incomplete. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14662) Update azure-storage sdk to version 5.4.0

2017-08-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127386#comment-16127386
 ] 

Steve Loughran commented on HADOOP-14662:
-

+1
committed to branch-2. thanks

> Update azure-storage sdk to version 5.4.0
> -
>
> Key: HADOOP-14662
> URL: https://issues.apache.org/jira/browse/HADOOP-14662
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14662-001.patch, HADOOP-14662-branch-2-001.patch, 
> HADOOP-14662-branch-2.patch
>
>
> Azure Storage SDK implements a new event (ErrorReceivingResponseEvent) which 
> HADOOP-14660 has a dependency on.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14662) Update azure-storage sdk to version 5.4.0

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14662:

   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

> Update azure-storage sdk to version 5.4.0
> -
>
> Key: HADOOP-14662
> URL: https://issues.apache.org/jira/browse/HADOOP-14662
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14662-001.patch, HADOOP-14662-branch-2-001.patch, 
> HADOOP-14662-branch-2.patch
>
>
> Azure Storage SDK implements a new event (ErrorReceivingResponseEvent) which 
> HADOOP-14660 has a dependency on.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14770) S3A http connection in s3a driver not reused in Spark application

2017-08-15 Thread Yonger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127218#comment-16127218
 ] 

Yonger commented on HADOOP-14770:
-

Sorry, not yet. I am working with multiple partners on our big data cluster, so 
it's not easy to move to 2.8.  But I will complete it ASAP.

> S3A http connection in s3a driver not reused in Spark application
> 
>
> Key: HADOOP-14770
> URL: https://issues.apache.org/jira/browse/HADOOP-14770
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Yonger
>Assignee: Yonger
>Priority: Minor
>
> I print out connection stats every 2 s when running a Spark application 
> against s3-compatible storage:
> {code}
> ESTAB  0  0 :::10.0.2.36:6
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44454
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  159724 0 :::10.0.2.36:44436
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:8
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44338
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44438
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44414
> :::10.0.2.254:80 
> ESTAB  0  480   :::10.0.2.36:44450
> :::10.0.2.254:80  timer:(on,170ms,0)
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44390
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44326
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44452
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44394
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:4
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44456
> :::10.0.2.254:80 
> ==
> ESTAB  0  0 :::10.0.2.36:44508
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44476
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44524
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44500
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44504
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44512
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44506
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44464
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44518
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44510
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44526
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44472
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44466
> :::10.0.2.254:80 
> {code}
> the connections above and below the "=" separator changed all the time, but 
> this hasn't been seen in the MR application. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-15 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127075#comment-16127075
 ] 

Wei-Chiu Chuang edited comment on HADOOP-14705 at 8/15/17 10:54 AM:


Hi [~xiaochen] thanks a lot for taking up this work. I reviewed the patch and 
have a few comments listed below:
{code:title=KMSClientProvider#reencryptEncryptedKeys}
final List jsonPayload = new ArrayList();
{code}
should be final List jsonPayload = new ArrayList();

{code}
if (keyName == null) {
  checkNotNull(ekv.getEncryptionKeyName(), 
"ekv.getEncryptionKeyName");
{code}
The check is redundant
{code}
jsonPayload.add(KMSUtil.toJSON(ekv, ekv.getEncryptionKeyName()));
{code}
if all ekv are the same, wouldn’t it be more efficient to optimize it somehow?
{code}
final List response =
call(conn, jsonPayload, 
HttpURLConnection.HTTP_OK, List.class);
{code}
should the last parameter be Map.class?

Question: is there a practical size limit for a KMS request?


{code:title=TestKMS}
fail("Should not be able to reencryptEncryptedKeys");
{code}
—> grammatical error: Should not have been

{code:title=KMS}
kmsAudit.ok(user, KMSOp.REENCRYPT_EEK_BATCH, name, "");
{code}
I wonder if it makes sense to log the size of the batch in extraMsg.

{code:title=KMS}
if (LOG.isDebugEnabled()) {
  LOG.debug("reencryptEncryptedKeys {} keys for key 
{} took {}",
  jsonPayload.size(), name, sw.stop());
}
{code}
It looks like bad practice to me (it risks resource leakage) that the 
StopWatch is only stopped when debug logging is enabled. 
Also, does it return time in milliseconds? Can you add the time unit to the log 
message as well? A sketch follows.
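
A minimal sketch of the unconditional-stop alternative, assuming {{sw}} is 
{{org.apache.hadoop.util.StopWatch}}:

{code:java}
// Stop the watch regardless of log level, then log with an explicit unit.
final long elapsedMs = sw.stop().now(TimeUnit.MILLISECONDS);
LOG.debug("reencryptEncryptedKeys {} keys for key {} took {} ms",
    jsonPayload.size(), name, elapsedMs);
{code}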

This is irrelevant to this patch, but there are a number of places in KMS where 
Object references are used unnecessarily:
{code}
Object retJSON;
…
retJSON = new ArrayList();
for (EncryptedKeyVersion edek : retEdeks) {
  
((ArrayList) retJSON).add(KMSUtil.toJSON(edek));
}

{code}
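
A hedged sketch of the typed alternative (the generic parameters are assumed, 
since the archive stripped them from the quoted code):

{code:java}
// Declare the concrete list type up front and drop the per-add cast.
List<Map> retJSON = new ArrayList<>();
for (EncryptedKeyVersion edek : retEdeks) {
  retJSON.add(KMSUtil.toJSON(edek));
}
{code}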



was (Author: jojochuang):
Hi [~xiaochen] thanks a lot for taking up this work. I reviewed the patch and 
have a few comments listed below:
{code:title=KMSClientProvider#reencryptEncryptedKeys}
final List jsonPayload = new ArrayList();
{code}
should be final List jsonPayload = new ArrayList();

{code}
if (keyName == null) {
  checkNotNull(ekv.getEncryptionKeyName(), 
"ekv.getEncryptionKeyName");
{code}
The check is redundant
{code}
jsonPayload.add(KMSUtil.toJSON(ekv, ekv.getEncryptionKeyName()));
{code}
if all env are the same, wouldn’t it be more efficient to optimize it somehow?
{code}
final List response =
call(conn, jsonPayload, 
HttpURLConnection.HTTP_OK, List.class);
{code}
the type List should be Map.class

Question: is there a practical size limit for a KMS request?


{code:title=TestKMS}
fail("Should not be able to reencryptEncryptedKeys");
{code}
—> grammatical error: Should not have been

{code:title=KMS}
kmsAudit.ok(user, KMSOp.REENCRYPT_EEK_BATCH, name, "");
{code}
I wonder if it makes sense to log the size of the batch in extraMsg.

{code:title=KMS}
if (LOG.isDebugEnabled()) {
  LOG.debug("reencryptEncryptedKeys {} keys for key 
{} took {}",
  jsonPayload.size(), name, sw.stop());
}
{code}
It looks like a bad practice (for fear of resource leakage) to me that the 
StopWatch is only stopped if debug log is enabled. 
Also, does it return time in milliseconds? Can you add the time unit into log 
message as well?

This is irrelevant to this patch, but there are a number of places in KMS where 
Object references are used unnecessarily:
{code}
Object retJSON;
…
retJSON = new ArrayList();
for (EncryptedKeyVersion edek : retEdeks) {
  
((ArrayList) retJSON).add(KMSUtil.toJSON(edek));
}

{code}


> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 turns out, communication overhead 
> with the KMS occupies the majority of the time. So this jira proposes to add 
> a batched interface to re-encrypt multiple EDEKs in 1 call.
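
A hypothetical sketch of what the batched call could look like; the signature 
below is invented for illustration, see the attached patches for the real 
interface:

{code:java}
// One KMS round trip re-encrypts the whole batch instead of N calls.
public void reencryptEncryptedKeys(List<EncryptedKeyVersion> ekvs)
    throws IOException, GeneralSecurityException;
{code}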



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-15 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127075#comment-16127075
 ] 

Wei-Chiu Chuang commented on HADOOP-14705:
--

Hi [~xiaochen] thanks a lot for taking up this work. I reviewed the patch and 
have a few comments listed below:
{code:title=KMSClientProvider#reencryptEncryptedKeys}
final List jsonPayload = new ArrayList();
{code}
should be final List jsonPayload = new ArrayList();

{code}
if (keyName == null) {
  checkNotNull(ekv.getEncryptionKeyName(), 
"ekv.getEncryptionKeyName");
{code}
The check is redundant
{code}
jsonPayload.add(KMSUtil.toJSON(ekv, ekv.getEncryptionKeyName()));
{code}
if all env are the same, wouldn’t it be more efficient to optimize it somehow?
{code}
final List response =
call(conn, jsonPayload, 
HttpURLConnection.HTTP_OK, List.class);
{code}
the type List should be Map.class

Question: is there a practical size limit for a KMS request?


{code:title=TestKMS}
fail("Should not be able to reencryptEncryptedKeys");
{code}
—> grammatical error: Should not have been

{code:title=KMS}
kmsAudit.ok(user, KMSOp.REENCRYPT_EEK_BATCH, name, "");
{code}
I wonder if it makes sense to log the size of the batch in extraMsg.

{code:title=KMS}
if (LOG.isDebugEnabled()) {
  LOG.debug("reencryptEncryptedKeys {} keys for key 
{} took {}",
  jsonPayload.size(), name, sw.stop());
}
{code}
It looks like bad practice to me (it risks resource leakage) that the 
StopWatch is only stopped when debug logging is enabled. 
Also, does it return time in milliseconds? Can you add the time unit to the log 
message as well?

This is irrelevant to this patch, but there are a number of places in KMS where 
Object references are used unnecessarily:
{code}
Object retJSON;
…
retJSON = new ArrayList();
for (EncryptedKeyVersion edek : retEdeks) {
  
((ArrayList) retJSON).add(KMSUtil.toJSON(edek));
}

{code}


> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 turns out, communication overhead 
> with the KMS occupies the majority of the time. So this jira proposes to add 
> a batched interface to re-encrypt multiple EDEKs in 1 call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14662) Update azure-storage sdk to version 5.4.0

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127069#comment-16127069
 ] 

Hadoop QA commented on HADOOP-14662:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
9s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
10s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
7s{color} | {color:green} hadoop-project in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | HADOOP-14662 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881909/HADOOP-14662-branch-2-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 4681657da38e 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 7b22df3 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_144 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13033/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13033/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update azure-storage sdk 

[jira] [Commented] (HADOOP-14770) S3A http connection in s3a driver not reused in Spark application

2017-08-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127058#comment-16127058
 ] 

Steve Loughran commented on HADOOP-14770:
-

Does moving to 2.8 fix this? If so, close as a duplicate of HADOOP-13202, thanks

> S3A http connection in s3a driver not reused in Spark application
> 
>
> Key: HADOOP-14770
> URL: https://issues.apache.org/jira/browse/HADOOP-14770
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Yonger
>Assignee: Yonger
>Priority: Minor
>
> I print out connection stats every 2 s when running a Spark application 
> against s3-compatible storage:
> {code}
> ESTAB  0  0 :::10.0.2.36:6
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44454
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  159724 0 :::10.0.2.36:44436
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:8
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44338
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44438
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44414
> :::10.0.2.254:80 
> ESTAB  0  480   :::10.0.2.36:44450
> :::10.0.2.254:80  timer:(on,170ms,0)
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44390
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44326
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44452
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44394
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:4
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44456
> :::10.0.2.254:80 
> ==
> ESTAB  0  0 :::10.0.2.36:44508
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44476
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44524
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44374
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44500
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44504
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44512
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44506
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44464
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44518
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44510
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:2
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44526
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44472
> :::10.0.2.254:80 
> ESTAB  0  0 :::10.0.2.36:44466
> :::10.0.2.254:80 
> {code}
> the connections above and below the "=" separator changed all the time, but 
> this hasn't been seen in the MR application. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14660) wasb: improve throughput by 34% when account limit exceeded

2017-08-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127055#comment-16127055
 ] 

Steve Loughran commented on HADOOP-14660:
-

patch failed before SDK update; once that's in we can resubmit this

> wasb: improve throughput by 34% when account limit exceeded
> ---
>
> Key: HADOOP-14660
> URL: https://issues.apache.org/jira/browse/HADOOP-14660
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14660-001.patch, HADOOP-14660-002.patch, 
> HADOOP-14660-003.patch, HADOOP-14660-004.patch, HADOOP-14660-005.patch, 
> HADOOP-14660-006.patch, HADOOP-14660-007.patch, HADOOP-14660-008.patch, 
> HADOOP-14660-010.patch, HADOOP-14660-branch-2.patch
>
>
> Big data workloads frequently exceed the Azure Storage max ingress and egress 
> limits 
> (https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits).  
> For example, the max ingress limit for a GRS account in the United States is 
> currently 10 Gbps.  When the limit is exceeded, the Azure Storage service 
> fails a percentage of incoming requests, and this causes the client to 
> initiate the retry policy.  The retry policy delays requests by sleeping, but 
> the sleep duration is independent of the client throughput and account limit. 
>  This results in low throughput, due to the high number of failed requests 
> and thrashing caused by the retry policy.
> To fix this, we introduce a client-side throttle which minimizes failed 
> requests and maximizes throughput.  Tests have shown that this improves 
> throughput by ~34% when the storage account max ingress and/or egress limits 
> are exceeded. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14662) Update azure-storage sdk to version 5.4.0

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14662:

Status: Patch Available  (was: Reopened)

> Update azure-storage sdk to version 5.4.0
> -
>
> Key: HADOOP-14662
> URL: https://issues.apache.org/jira/browse/HADOOP-14662
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14662-001.patch, HADOOP-14662-branch-2-001.patch, 
> HADOOP-14662-branch-2.patch
>
>
> Azure Storage SDK implements a new event (ErrorReceivingResponseEvent) which 
> HADOOP-14660 has a dependency on.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14662) Update azure-storage sdk to version 5.4.0

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14662:

Attachment: HADOOP-14662-branch-2-001.patch

Reattaching with a versioned name to keep Yetus happy, and submitting.

> Update azure-storage sdk to version 5.4.0
> -
>
> Key: HADOOP-14662
> URL: https://issues.apache.org/jira/browse/HADOOP-14662
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14662-001.patch, HADOOP-14662-branch-2-001.patch, 
> HADOOP-14662-branch-2.patch
>
>
> Azure Storage SDK implements a new event (ErrorReceivingResponseEvent) which 
> HADOOP-14660 has a dependency on.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-14662) Update azure-storage sdk to version 5.4.0

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-14662:
-

> Update azure-storage sdk to version 5.4.0
> -
>
> Key: HADOOP-14662
> URL: https://issues.apache.org/jira/browse/HADOOP-14662
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14662-001.patch, HADOOP-14662-branch-2.patch
>
>
> Azure Storage SDK implements a new event (ErrorReceivingResponseEvent) which 
> HADOOP-14660 has a dependency on.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14738) Deprecate S3N in Hadoop 3.0/2.9, target removal in Hadoop 3.1

2017-08-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127049#comment-16127049
 ] 

Steve Loughran commented on HADOOP-14738:
-

FWIW, I don't think removal is a blocker. Marking as deprecated, yes, and 
that's straightforward.

> Deprecate S3N in Hadoop 3.0/2.9, target removal in Hadoop 3.1
> -
>
> Key: HADOOP-14738
> URL: https://issues.apache.org/jira/browse/HADOOP-14738
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Priority: Blocker
>
> We are all happy with S3A; it's been stable since Hadoop 2.7 and high-perf 
> since Hadoop 2.8.
> It's now time to kill S3N off: remove the source, the tests, and the transitive 
> dependencies.
> I propose that in Hadoop 3.0 beta we warn people off using it, and link 
> to a doc page (wiki?) about how to migrate (change URLs, update config options).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14774) S3A case "testRandomReadOverBuffer" failed due to improper range parameter

2017-08-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127043#comment-16127043
 ] 

Steve Loughran commented on HADOOP-14774:
-

ooh, interesting.

It's actually intermittent against all other object stores, implying they 
handle the semantics of read-beyond-range differently.

> S3A case "testRandomReadOverBuffer" failed due to improper range parameter
> --
>
> Key: HADOOP-14774
> URL: https://issues.apache.org/jira/browse/HADOOP-14774
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Hadoop 2.8.0  
> s3-compatible storage 
>Reporter: Yonger
>Assignee: Yonger
>Priority: Minor
>
> {code:java}
> Running org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.821 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> testRandomReadOverBuffer(org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance)
>   Time elapsed: 2.605 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<8192> but was:<8193>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.testRandomReadOverBuffer(ITestS3AInputStreamPerformance.java:533)
> {code}
> From the log, the content length exceeds what we expect:
> {code:java}
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(280)) - >> GET 
> /test-aws-s3a/test/testReadOverBuffer.bin HTTP/1.1
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Host: 10.0.2.254
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
> x-amz-content-sha256: 
> e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Authorization: 
> AWS4-HMAC-SHA256 
> Credential=JFDAM9KF9IY8S5P0JIV6/20170815/us-east-1/s3/aws4_request, 
> SignedHeaders=content-type;host;range;user-agent;x-amz-content-sha256;x-amz-date,
>  Signature=42bce4a43d2b1bf6e6d599613c60812e6716514da4ef5b3839ef0566c31279ee
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> X-Amz-Date: 
> 20170815T085316Z
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> User-Agent: Hadoop 
> 2.8.0, aws-sdk-java/1.10.6 Linux/3.10.0-514.21.2.el7.x86_64 
> Java_HotSpot(TM)_64-Bit_Server_VM/25.131-b11/1.8.0_131
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Range: bytes=0-8192
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Content-Type: 
> application/x-www-form-urlencoded; charset=utf-8
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Connection: 
> Keep-Alive
> 2017-08-15 16:53:16,473 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "HTTP/1.1 206 Partial Content[\r][\n]"
> 2017-08-15 16:53:16,475 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Range: bytes 0-8192/32768[\r][\n]"
> 2017-08-15 16:53:16,476 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Length: 8193[\r][\n]"
> 2017-08-15 16:53:16,477 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Accept-Ranges: bytes[\r][\n]"
> 2017-08-15 16:53:16,478 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Last-Modified: Tue, 15 Aug 2017 08:51:39 
> GMT[\r][\n]"
> 2017-08

[jira] [Updated] (HADOOP-14774) S3A case "testRandomReadOverBuffer" failed due to improper range parameter

2017-08-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14774:

   Priority: Minor  (was: Major)
Description: 
{code:java}
Running org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.821 sec <<< 
FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
testRandomReadOverBuffer(org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance)
  Time elapsed: 2.605 sec  <<< FAILURE!
java.lang.AssertionError: expected:<8192> but was:<8193>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.testRandomReadOverBuffer(ITestS3AInputStreamPerformance.java:533)
{code}


From the log, the content length exceeds what we expect:

{code:java}
2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(280)) - >> GET 
/test-aws-s3a/test/testReadOverBuffer.bin HTTP/1.1
2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> Host: 10.0.2.254
2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> 
x-amz-content-sha256: 
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> Authorization: 
AWS4-HMAC-SHA256 
Credential=JFDAM9KF9IY8S5P0JIV6/20170815/us-east-1/s3/aws4_request, 
SignedHeaders=content-type;host;range;user-agent;x-amz-content-sha256;x-amz-date,
 Signature=42bce4a43d2b1bf6e6d599613c60812e6716514da4ef5b3839ef0566c31279ee
2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> X-Amz-Date: 
20170815T085316Z
2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> User-Agent: Hadoop 
2.8.0, aws-sdk-java/1.10.6 Linux/3.10.0-514.21.2.el7.x86_64 
Java_HotSpot(TM)_64-Bit_Server_VM/25.131-b11/1.8.0_131
2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> Range: bytes=0-8192
2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> Content-Type: 
application/x-www-form-urlencoded; charset=utf-8
2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> Connection: 
Keep-Alive
2017-08-15 16:53:16,473 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "HTTP/1.1 206 Partial Content[\r][\n]"
2017-08-15 16:53:16,475 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Content-Range: bytes 0-8192/32768[\r][\n]"
2017-08-15 16:53:16,476 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Content-Length: 8193[\r][\n]"
2017-08-15 16:53:16,477 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Accept-Ranges: bytes[\r][\n]"
2017-08-15 16:53:16,478 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Last-Modified: Tue, 15 Aug 2017 08:51:39 
GMT[\r][\n]"
2017-08-15 16:53:16,479 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "ETag: "e7191764798ba504d6671d4c434d2f4d"[\r][\n]"
2017-08-15 16:53:16,480 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "x-amz-request-id: 
tx0001e-005992b67e-27a45-default[\r][\n]"
2017-08-15 16:53:16,481 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Content-Type: application/octet-stream[\r][\n]"
2017-08-15 16:53:16,482 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Date: Tue, 15 Aug 2017 08:53:18 GMT[\r][\n]"
2017-08-15 16:53:16,483 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "[\r][\n]"
{code}

 

  was:

{code:java}
Running org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.821 sec <<

[jira] [Commented] (HADOOP-13998) Merge initial s3guard release into trunk

2017-08-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127041#comment-16127041
 ] 

Steve Loughran commented on HADOOP-13998:
-

# I want to do a checkstyle fix
# we do need to follow the full vote for a branch merge; I believe I can be one 
of the voters.

> Merge initial s3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14738) Deprecate S3N in Hadoop 3.0/2.9, target removal in Hadoop 3.1

2017-08-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127039#comment-16127039
 ] 

Steve Loughran commented on HADOOP-14738:
-

Depends what we think is needed. I'm thinking of: rm the docs, the tests, and 
the existing code. Add a new section "migrating to s3a" (wiki?) and have the 
s3n FS impl print this to tell people what to do. 

We can't do an automated migration as the key settings are all different

> Deprecate S3N in Hadoop 3.0/2.9, target removal in Hadoop 3.1
> -
>
> Key: HADOOP-14738
> URL: https://issues.apache.org/jira/browse/HADOOP-14738
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Priority: Blocker
>
> We are all happy with S3A; it's been stable since Hadoop 2.7 and high-perf 
> since Hadoop 2.8.
> It's now time to kill S3N off: remove the source, the tests, and the transitive 
> dependencies.
> I propose that in Hadoop 3.0 beta we warn people off using it, and link 
> to a doc page (wiki?) about how to migrate (change URLs, update config options).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14774) S3A case "testRandomReadOverBuffer" failed due to improper range parameter

2017-08-15 Thread Yonger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126990#comment-16126990
 ] 

Yonger commented on HADOOP-14774:
-


{code:java}
  GetObjectRequest request = new GetObjectRequest(bucket, key)
  .withRange(targetPos, contentRangeFinish);
{code}
We should pass contentRangeFinish - 1 instead of contentRangeFinish into the 
withRange method, since the Range header's end offset is inclusive.
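
A minimal sketch of the proposed fix (the surrounding {{S3AInputStream}} 
context is assumed):

{code:java}
// The HTTP Range header's end offset is inclusive: bytes=0-8192 returns
// 8193 bytes. With contentRangeFinish computed as an exclusive end,
// subtract 1 to request exactly the intended range.
GetObjectRequest request = new GetObjectRequest(bucket, key)
    .withRange(targetPos, contentRangeFinish - 1);
{code}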

> S3A case "testRandomReadOverBuffer" failed due to improper range parameter
> --
>
> Key: HADOOP-14774
> URL: https://issues.apache.org/jira/browse/HADOOP-14774
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Hadoop 2.8.0  
> s3-compatible storage 
>Reporter: Yonger
>Assignee: Yonger
>
> {code:java}
> Running org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.821 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> testRandomReadOverBuffer(org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance)
>   Time elapsed: 2.605 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<8192> but was:<8193>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.testRandomReadOverBuffer(ITestS3AInputStreamPerformance.java:533)
> {code}
> From the log, the content length exceeds what we expect:
> {code:java}
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(280)) - >> GET 
> /test-aws-s3a/test/testReadOverBuffer.bin HTTP/1.1
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Host: 10.0.2.254
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
> x-amz-content-sha256: 
> e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Authorization: 
> AWS4-HMAC-SHA256 
> Credential=JFDAM9KF9IY8S5P0JIV6/20170815/us-east-1/s3/aws4_request, 
> SignedHeaders=content-type;host;range;user-agent;x-amz-content-sha256;x-amz-date,
>  Signature=42bce4a43d2b1bf6e6d599613c60812e6716514da4ef5b3839ef0566c31279ee
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> X-Amz-Date: 
> 20170815T085316Z
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> User-Agent: Hadoop 
> 2.8.0, aws-sdk-java/1.10.6 Linux/3.10.0-514.21.2.el7.x86_64 
> Java_HotSpot(TM)_64-Bit_Server_VM/25.131-b11/1.8.0_131
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Range: bytes=0-8192
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Content-Type: 
> application/x-www-form-urlencoded; charset=utf-8
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Connection: 
> Keep-Alive
> 2017-08-15 16:53:16,473 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "HTTP/1.1 206 Partial Content[\r][\n]"
> 2017-08-15 16:53:16,475 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Range: bytes 0-8192/32768[\r][\n]"
> 2017-08-15 16:53:16,476 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Length: 8193[\r][\n]"
> 2017-08-15 16:53:16,477 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Accept-Ranges: bytes[\r][\n]"
> 2017-08-15 16:53:16,478 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Last-Modified: Tue, 15 Aug 2017 08:51:39 
> G

[jira] [Updated] (HADOOP-14774) S3A case "testRandomReadOverBuffer" failed due to improper range parameter

2017-08-15 Thread Yonger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonger updated HADOOP-14774:

Description: 

{code:java}
Running org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.821 sec <<< 
FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
testRandomReadOverBuffer(org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance)
  Time elapsed: 2.605 sec  <<< FAILURE!
java.lang.AssertionError: expected:<8192> but was:<8193>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.testRandomReadOverBuffer(ITestS3AInputStreamPerformance.java:533)
{code}
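
The delta of exactly one byte is the signature of an inclusive HTTP byte range (RFC 7233): "Range: bytes=0-8192" names 8193 bytes, not 8192. A minimal sketch of the arithmetic follows; the helper below is illustrative only and is not part of S3A:

{code:java}
// Illustrative only (not S3A code): why "bytes 0-8192/32768" means 8193 bytes.
// HTTP byte ranges (RFC 7233) are inclusive at both ends, so the length of
// a first-last span is last - first + 1.
public final class ContentRangeMath {
  /** Parses a Content-Range value such as "bytes 0-8192/32768". */
  static long rangeLength(String contentRange) {
    String span = contentRange.substring("bytes ".length(),
                                         contentRange.indexOf('/'));
    String[] bounds = span.split("-");
    long first = Long.parseLong(bounds[0]);
    long last = Long.parseLong(bounds[1]);
    return last - first + 1;  // inclusive range, hence the extra byte
  }

  public static void main(String[] args) {
    // Prints 8193: one more byte than the 8192 the test expected.
    System.out.println(rangeLength("bytes 0-8192/32768"));
  }
}
{code}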


From the log, the content length exceeds what we expect:

{code:java}
2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(280)) - >> GET 
/test-aws-s3a/test/testReadOverBuffer.bin HTTP/1.1
2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> Host: 10.0.2.254
2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> 
x-amz-content-sha256: 
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> Authorization: 
AWS4-HMAC-SHA256 
Credential=JFDAM9KF9IY8S5P0JIV6/20170815/us-east-1/s3/aws4_request, 
SignedHeaders=content-type;host;range;user-agent;x-amz-content-sha256;x-amz-date,
 Signature=42bce4a43d2b1bf6e6d599613c60812e6716514da4ef5b3839ef0566c31279ee
2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> X-Amz-Date: 
20170815T085316Z
2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> User-Agent: Hadoop 
2.8.0, aws-sdk-java/1.10.6 Linux/3.10.0-514.21.2.el7.x86_64 
Java_HotSpot(TM)_64-Bit_Server_VM/25.131-b11/1.8.0_131
2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> Range: bytes=0-8192
2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> Content-Type: 
application/x-www-form-urlencoded; charset=utf-8
2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> Connection: 
Keep-Alive
2017-08-15 16:53:16,473 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "HTTP/1.1 206 Partial Content[\r][\n]"
2017-08-15 16:53:16,475 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Content-Range: bytes 0-8192/32768[\r][\n]"
2017-08-15 16:53:16,476 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Content-Length: 8193[\r][\n]"
2017-08-15 16:53:16,477 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Accept-Ranges: bytes[\r][\n]"
2017-08-15 16:53:16,478 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Last-Modified: Tue, 15 Aug 2017 08:51:39 
GMT[\r][\n]"
2017-08-15 16:53:16,479 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "ETag: "e7191764798ba504d6671d4c434d2f4d"[\r][\n]"
2017-08-15 16:53:16,480 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "x-amz-request-id: 
tx0001e-005992b67e-27a45-default[\r][\n]"
2017-08-15 16:53:16,481 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Content-Type: application/octet-stream[\r][\n]"
2017-08-15 16:53:16,482 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Date: Tue, 15 Aug 2017 08:53:18 GMT[\r][\n]"
2017-08-15 16:53:16,483 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "[\r][\n]"
{code}
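
Given those inclusive semantics, the server's reply is internally consistent with the request it received; per this issue's summary, the off-by-one is in the range parameter itself. To read exactly 8192 bytes from offset 0, the range must end at byte index 8191. A hedged sketch of how a caller could build such a request with the aws-sdk-java 1.10.x API used here (GetObjectRequest.withRange is real; the surrounding names are illustrative, and this is not the Hadoop patch):

{code:java}
// Sketch, not the Hadoop patch: request exactly `length` bytes starting at
// `offset` by converting the desired length into an inclusive last-byte index.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

public class RangedGet {
  static S3Object openRange(AmazonS3 s3, String bucket, String key,
                            long offset, long length) {
    long lastByte = offset + length - 1;      // 0 + 8192 - 1 = 8191
    GetObjectRequest request = new GetObjectRequest(bucket, key)
        .withRange(offset, lastByte);         // emits "Range: bytes=0-8191"
    return s3.getObject(request);
  }
}
{code}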

 

  was:
Running org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.821 sec <<< 
FAILURE! - in org.apache.hadoop.fs.s3a.sca

[jira] [Created] (HADOOP-14774) S3A case "testRandomReadOverBuffer" failed due to improper range parameter

2017-08-15 Thread Yonger (JIRA)
Yonger created HADOOP-14774:
---

 Summary: S3A case "testRandomReadOverBuffer" failed due to 
improper range parameter
 Key: HADOOP-14774
 URL: https://issues.apache.org/jira/browse/HADOOP-14774
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.8.0
 Environment: Hadoop 2.8.0  
s3-compatible storage 
Reporter: Yonger


Running org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.821 sec <<< 
FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
testRandomReadOverBuffer(org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance)
  Time elapsed: 2.605 sec  <<< FAILURE!
java.lang.AssertionError: expected:<8192> but was:<8193>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.testRandomReadOverBuffer(ITestS3AInputStreamPerformance.java:533)

From the log, the content length exceeds what we expect:
2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(280)) - >> GET 
/test-aws-s3a/test/testReadOverBuffer.bin HTTP/1.1
2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> Host: 10.0.2.254
2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> 
x-amz-content-sha256: 
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> Authorization: 
AWS4-HMAC-SHA256 
Credential=JFDAM9KF9IY8S5P0JIV6/20170815/us-east-1/s3/aws4_request, 
SignedHeaders=content-type;host;range;user-agent;x-amz-content-sha256;x-amz-date,
 Signature=42bce4a43d2b1bf6e6d599613c60812e6716514da4ef5b3839ef0566c31279ee
2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> X-Amz-Date: 
20170815T085316Z
2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> User-Agent: Hadoop 
2.8.0, aws-sdk-java/1.10.6 Linux/3.10.0-514.21.2.el7.x86_64 
Java_HotSpot(TM)_64-Bit_Server_VM/25.131-b11/1.8.0_131
2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> Range: bytes=0-8192
2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> Content-Type: 
application/x-www-form-urlencoded; charset=utf-8
2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
(DefaultClientConnection.java:sendRequestHeader(283)) - >> Connection: 
Keep-Alive
2017-08-15 16:53:16,473 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "HTTP/1.1 206 Partial Content[\r][\n]"
2017-08-15 16:53:16,475 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Content-Range: bytes 0-8192/32768[\r][\n]"
2017-08-15 16:53:16,476 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Content-Length: 8193[\r][\n]"
2017-08-15 16:53:16,477 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Accept-Ranges: bytes[\r][\n]"
2017-08-15 16:53:16,478 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Last-Modified: Tue, 15 Aug 2017 08:51:39 
GMT[\r][\n]"
2017-08-15 16:53:16,479 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "ETag: "e7191764798ba504d6671d4c434d2f4d"[\r][\n]"
2017-08-15 16:53:16,480 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "x-amz-request-id: 
tx0001e-005992b67e-27a45-default[\r][\n]"
2017-08-15 16:53:16,481 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Content-Type: application/octet-stream[\r][\n]"
2017-08-15 16:53:16,482 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "Date: Tue, 15 Aug 2017 08:53:18 GMT[\r][\n]"
2017-08-15 16:53:16,483 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
(Wire.java:wire(72)) -  << "

[jira] [Assigned] (HADOOP-14774) S3A case "testRandomReadOverBuffer" failed due to improper range parameter

2017-08-15 Thread Yonger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonger reassigned HADOOP-14774:
---

Assignee: Yonger

> S3A case "testRandomReadOverBuffer" failed due to improper range parameter
> --
>
> Key: HADOOP-14774
> URL: https://issues.apache.org/jira/browse/HADOOP-14774
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Hadoop 2.8.0  
> s3-compatible storage 
>Reporter: Yonger
>Assignee: Yonger
>
> Running org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.821 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> testRandomReadOverBuffer(org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance)
>   Time elapsed: 2.605 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<8192> but was:<8193>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.testRandomReadOverBuffer(ITestS3AInputStreamPerformance.java:533)
> From the log, the content length exceeds what we expect:
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(280)) - >> GET 
> /test-aws-s3a/test/testReadOverBuffer.bin HTTP/1.1
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Host: 10.0.2.254
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
> x-amz-content-sha256: 
> e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Authorization: 
> AWS4-HMAC-SHA256 
> Credential=JFDAM9KF9IY8S5P0JIV6/20170815/us-east-1/s3/aws4_request, 
> SignedHeaders=content-type;host;range;user-agent;x-amz-content-sha256;x-amz-date,
>  Signature=42bce4a43d2b1bf6e6d599613c60812e6716514da4ef5b3839ef0566c31279ee
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> X-Amz-Date: 
> 20170815T085316Z
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> User-Agent: Hadoop 
> 2.8.0, aws-sdk-java/1.10.6 Linux/3.10.0-514.21.2.el7.x86_64 
> Java_HotSpot(TM)_64-Bit_Server_VM/25.131-b11/1.8.0_131
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Range: bytes=0-8192
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Content-Type: 
> application/x-www-form-urlencoded; charset=utf-8
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Connection: 
> Keep-Alive
> 2017-08-15 16:53:16,473 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "HTTP/1.1 206 Partial Content[\r][\n]"
> 2017-08-15 16:53:16,475 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Range: bytes 0-8192/32768[\r][\n]"
> 2017-08-15 16:53:16,476 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Length: 8193[\r][\n]"
> 2017-08-15 16:53:16,477 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Accept-Ranges: bytes[\r][\n]"
> 2017-08-15 16:53:16,478 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Last-Modified: Tue, 15 Aug 2017 08:51:39 
> GMT[\r][\n]"
> 2017-08-15 16:53:16,479 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "ETag: "e7191764798ba504d6671d4c434d2f4d"[\r][\n]"
> 2017-08-15 16:53:16,480 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
>