[jira] [Commented] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates incorrect values

2016-08-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426516#comment-15426516
 ] 

Hudson commented on HADOOP-13405:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10299 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10299/])
HADOOP-13405 doc for fs.s3a.acl.default indicates incorrect values. (stevel: 
rev 040c185d624a18627d23cedb12bf91a950ada2fc)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md


> doc for “fs.s3a.acl.default” indicates incorrect values
> ---
>
> Key: HADOOP-13405
> URL: https://issues.apache.org/jira/browse/HADOOP-13405
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Shen Yinjie
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13405.patch, HADOOP-13405.patch
>
>
> The description for "fs.s3a.acl.default" indicates its values are 
> "private,public-read";
> when the value is set to public-read and 'hdfs dfs -ls s3a://hdfs/' is executed:
> {{-ls: No enum constant 
> com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
> while in the Amazon SDK,
> {code}
> public enum CannedAccessControlList {
>   Private("private"),
>   PublicRead("public-read"),
>   PublicReadWrite("public-read-write"),
>   AuthenticatedRead("authenticated-read"),
>   LogDeliveryWrite("log-delivery-write"),
>   BucketOwnerRead("bucket-owner-read"),
>   BucketOwnerFullControl("bucket-owner-full-control");
>   // ... (constructor and accessors elided)
> }
> {code}
> so the values should be the enum constant names: "Private", "PublicRead", ...
> Attached a simple patch.
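The failure above comes from how {{Enum.valueOf}} works: it matches the Java constant name ("PublicRead"), not the S3 wire value ("public-read") that the old documentation suggested. A minimal, self-contained sketch using a stand-in enum (not the real AWS SDK class, which would need the SDK on the classpath):

```java
public class CannedAclDemo {
    // Stand-in mirroring the shape of
    // com.amazonaws.services.s3.model.CannedAccessControlList.
    enum CannedAcl {
        Private("private"),
        PublicRead("public-read");

        private final String headerValue;
        CannedAcl(String headerValue) { this.headerValue = headerValue; }
        String toHeader() { return headerValue; }
    }

    public static void main(String[] args) {
        // Enum.valueOf matches the Java constant name...
        System.out.println(CannedAcl.valueOf("PublicRead").toHeader()); // public-read
        // ...not the S3 wire value, which is what the old docs suggested:
        try {
            CannedAcl.valueOf("public-read");
        } catch (IllegalArgumentException e) {
            System.out.println("No enum constant for \"public-read\"");
        }
    }
}
```

This is why the fixed documentation lists the constant names rather than the header strings.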



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13252) Tune S3A provider plugin mechanism

2016-08-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426948#comment-15426948
 ] 

Steve Loughran commented on HADOOP-13252:
-

Applied to branch-2 (3-way merge) and tested there. Everything passed.

> Tune S3A provider plugin mechanism
> --
>
> Key: HADOOP-13252
> URL: https://issues.apache.org/jira/browse/HADOOP-13252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-006.patch, HADOOP-13252-branch-2-001.patch, 
> HADOOP-13252-branch-2-003.patch, HADOOP-13252-branch-2-004.patch, 
> HADOOP-13252-branch-2-005.patch
>
>
> We've now got some fairly complex auth mechanisms going on: Hadoop config, 
> KMS, env vars, "none". If something isn't working, it's going to be a lot 
> harder to debug.
> Review and tune the S3A provider point:
> * add logging of what's going on in S3 auth to help debug problems
> * make a whole chain of logins expressible
> * allow the anonymous credentials to be included in the list
> * review and update the documents.
> I propose *carefully* adding some debug messages to identify which auth 
> provider is doing the auth, so we can see if the env vars were kicking in, 
> sysprops, etc.
> What we mustn't do is leak any secrets: this should be identifying whether 
> properties and env vars are set, not what their values are. I don't believe 
> that this will generate a security risk.
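The kind of debug logging proposed above can be sketched as follows. This is illustrative only: the real S3A code uses the AWS SDK's credential provider interface, and all names here ({{Provider}}, {{firstUsable}}) are hypothetical. The key point is that it logs which provider is in effect and whether a variable is *set*, never its value:

```java
import java.util.Arrays;
import java.util.List;

public class CredentialChainDemo {
    // Hypothetical provider interface; illustrative stand-in for the
    // AWS SDK's AWSCredentialsProvider used by the real S3A code.
    interface Provider {
        String name();
        boolean canAuthenticate();
    }

    static Provider envVarProvider() {
        return new Provider() {
            public String name() { return "EnvironmentVariableProvider"; }
            public boolean canAuthenticate() {
                // Log presence, not value, so no secret can leak.
                boolean set = System.getenv("AWS_ACCESS_KEY_ID") != null;
                System.out.println("DEBUG: AWS_ACCESS_KEY_ID set: " + set);
                return set;
            }
        };
    }

    static Provider anonymousProvider() {
        return new Provider() {
            public String name() { return "AnonymousProvider"; }
            public boolean canAuthenticate() { return true; }
        };
    }

    // Walk the chain and report which provider kicked in.
    static String firstUsable(List<Provider> chain) {
        for (Provider p : chain) {
            if (p.canAuthenticate()) {
                System.out.println("DEBUG: authenticating via " + p.name());
                return p.name();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        firstUsable(Arrays.asList(envVarProvider(), anonymousProvider()));
    }
}
```

A chain like this makes "which auth source won" visible in the logs without any chance of printing a key.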



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13487) Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper

2016-08-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427176#comment-15427176
 ] 

Hadoop QA commented on HADOOP-13487:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
15s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824433/HADOOP-13487.02.patch 
|
| JIRA Issue | HADOOP-13487 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4b62a9047967 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ae4db25 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10302/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10302/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper
> -
>
> Key: HADOOP-13487
> URL: https://issues.apache.org/jira/browse/HADOOP-13487
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Alex Ivanov
>Assignee: Xiao Chen
> Attachments: HADOOP-13487.01.patch, HADOOP-13487.02.patch
>
>
> Configuration:
> CDH 5.5.1 (Hadoop 2.6+)
> KMS configured to store delegation tokens in Zookeeper

[jira] [Commented] (HADOOP-13428) Fix hadoop-common to generate jdiff

2016-08-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427240#comment-15427240
 ] 

Hadoop QA commented on HADOOP-13428:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 1242 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  1m 
14s{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
9s{color} | {color:green} hadoop-project-dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
5s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
28s{color} | {color:green} hadoop-common-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824435/HADOOP-13428.3.patch |
| JIRA Issue | HADOOP-13428 |
| Optional Tests |  asflicense  xml  compile  javac  javadoc  mvninstall  
mvnsite  unit  |
| uname | Linux 1b3b8b323402 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ae4db25 |
| Default Java | 1.8.0_101 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10303/artifact/patchprocess/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10303/artifact/patchprocess/whitespace-tabs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10303/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10303/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: 

[jira] [Commented] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates incorrect values

2016-08-18 Thread Shen Yinjie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427428#comment-15427428
 ] 

Shen Yinjie commented on HADOOP-13405:
--

Thanks, [~ste...@apache.org] and [~cnauroth], I really appreciate it! By the way, I 
watched your wonderful talks at the Hadoop Summit in San Jose. :-p

> doc for “fs.s3a.acl.default” indicates incorrect values
> ---
>
> Key: HADOOP-13405
> URL: https://issues.apache.org/jira/browse/HADOOP-13405
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13405.patch, HADOOP-13405.patch
>
>
> The description for "fs.s3a.acl.default" indicates its values are 
> "private,public-read";
> when the value is set to public-read and 'hdfs dfs -ls s3a://hdfs/' is executed:
> {{-ls: No enum constant 
> com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
> while in the Amazon SDK,
> {code}
> public enum CannedAccessControlList {
>   Private("private"),
>   PublicRead("public-read"),
>   PublicReadWrite("public-read-write"),
>   AuthenticatedRead("authenticated-read"),
>   LogDeliveryWrite("log-delivery-write"),
>   BucketOwnerRead("bucket-owner-read"),
>   BucketOwnerFullControl("bucket-owner-full-control");
>   // ... (constructor and accessors elided)
> }
> {code}
> so the values should be the enum constant names: "Private", "PublicRead", ...
> Attached a simple patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13396) Add json format audit logging to KMS

2016-08-18 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427328#comment-15427328
 ] 

Andrew Wang commented on HADOOP-13396:
--

Hi Xiao, thanks for working on this. Had some review comments:

* Is KMSAuditLogger interface really InterfaceAudience.Public? i.e. do we allow 
users to code against this interface by providing their own audit logger 
implementations? I'm guessing the answer is no, since we do not use reflection 
to instantiate the logger. We might still consider using the classname as 
configuration though, in case we want to add support for user-provided loggers 
later. In this case, I'd recommend splitting out each audit logger 
implementation as a separate class.
* Could we add a big warning about the importance of audit logger output 
compatibility to KMSAuditLogger's class javadoc? We could use similar reminders 
in the logger implementations. One difference is that since we output a JSON 
dictionary, there are no guarantees about the ordering of the KV pairs.
* The kms-site.xml description should say that this takes a comma-separated 
list. Are multiple audit loggers unit tested? What happens if the same value 
(e.g. "simple") is specified multiple times?
* In KMSAudit#error, we added logging of the URL, but used a different 
capitalization than {{#unauthenticated}}. It's also somewhat inconsistent that 
we put the URL after the Exception while {{#unauthenticated}} puts it before 
the ErrorMsg. Not sure if we can change this one though. On the whole I don't 
think we should be making this possibly incompatible change in this JIRA; could 
you split it out so we can discuss it separately?
* The multiple uses of {{System.currentTimeMillis()}} are suspect. With 
multiple audit loggers, they could record different times. A similar issue can 
also happen within a single call to the JSON logger's logAuditEvent.
* I think it's confusing how there's a {{logAuditSimpleFormat}} in the JSON 
logger. The term is now overloaded since we use "simple" to configure the 
current audit logger. So, we should rename one or the other.
* Could we make an effort to use the same key names as the current audit 
logger? e.g. "op" instead of "operation", "user" instead of "username". This 
will make life easier for consumers.
* Could you provide a small snippet of what the JSON output and textual output 
looks like for the same events? Hopefully we can get a quick gut check from 
[~aw].
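To make the last point concrete, here is a hedged sketch of what the two outputs for the same event *might* look like. The exact formats are defined by the patch, not here: the text shape is loosely modeled on the existing KMS text audit log, the JSON key names follow the suggestion above of reusing "op" and "user", and the method names are illustrative:

```java
public class AuditFormatDemo {
    // Illustrative text-format line, loosely modeled on the existing
    // KMS audit log; not the exact format from the patch.
    public static String textEvent(String op, String user, String key) {
        return String.format("OK[op=%s, key=%s, user=%s, accessCount=1]",
            op, key, user);
    }

    // Illustrative JSON-format line; hand-rolled to stay dependency-free.
    // Key names follow the review suggestion ("op", "user").
    public static String jsonEvent(String op, String user, String key, long ts) {
        return String.format(
            "{\"op\":\"%s\",\"key\":\"%s\",\"user\":\"%s\",\"timestamp\":%d}",
            op, key, user, ts);
    }

    public static void main(String[] args) {
        System.out.println(textEvent("DECRYPT_EEK", "alice", "key1"));
        System.out.println(jsonEvent("DECRYPT_EEK", "alice", "key1",
            1471500000000L));
    }
}
```

Note that, as mentioned above, a JSON dictionary carries no ordering guarantee for its key/value pairs, so consumers should look keys up by name rather than by position.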

> Add json format audit logging to KMS
> 
>
> Key: HADOOP-13396
> URL: https://issues.apache.org/jira/browse/HADOOP-13396
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13396.01.patch, HADOOP-13396.02.patch, 
> HADOOP-13396.03.patch, HADOOP-13396.04.patch, HADOOP-13396.05.patch, 
> HADOOP-13396.06.patch
>
>
> Currently, KMS audit log is using log4j, to write a text format log.
> We should refactor this, so that people can easily add new format audit logs. 
> The current text format log should be the default, and all of its behavior 
> should remain compatible.
> A json format log extension is added using the refactored API, and being 
> turned off by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13512) ReloadingX509TrustManager should keep reloading in case of exception

2016-08-18 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13512:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.7.4
   Status: Resolved  (was: Patch Available)

Thanks [~jnp] for reviewing the patch. It contains no dedicated UT because it's 
covered by existing {{TestReloadingX509TrustManager}}. I've committed this to 
{{trunk}} through {{branch-2.7}}.

> ReloadingX509TrustManager should keep reloading in case of exception
> 
>
> Key: HADOOP-13512
> URL: https://issues.apache.org/jira/browse/HADOOP-13512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Critical
> Fix For: 2.7.4
>
> Attachments: HADOOP-13512.000.patch
>
>
> {{org.apache.hadoop.security.ssl.ReloadingX509TrustManager}} checks the key 
> store file's last modified time to decide whether to reload. This avoids 
> unnecessary reloads when the key store file has not changed. To do this, it 
> maintains an internal {{lastLoaded}} timestamp, updated whenever it tries to 
> reload the file. Because {{lastLoaded}} is updated even when an exception is 
> thrown, a failing reload will not be retried until the key store file's last 
> modified time changes again.
> Chances are that the reload happens while the key store file is being 
> written. The reload fails (probably with an EOFException) and won't be 
> retried until the key store file's last modified time changes. Shortly 
> afterwards, the key store file is closed after the update; however, the last 
> modified time may not change if the close falls within the same timestamp 
> precision period (e.g. 1 second). In this case, the updated key store file is 
> never reloaded.
> A simple fix is to update {{lastLoaded}} only when the reload succeeds, so 
> that {{ReloadingX509TrustManager}} keeps retrying the reload after an 
> exception.
> Thoughts?
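The proposed fix can be sketched as follows. This is a minimal stand-in, not the actual Hadoop class; the method and field names mirror the description above but the real implementation differs:

```java
import java.io.File;
import java.io.IOException;

public class ReloadOnSuccessDemo {
    private final File keystore;
    private long lastLoaded = 0;

    ReloadOnSuccessDemo(File keystore) { this.keystore = keystore; }

    // Record lastLoaded only after a successful load, so a failed read
    // (e.g. an EOFException while the file is mid-write) is retried on the
    // next check even if the file's modification time never changes again.
    boolean reloadIfNeeded() {
        long mtime = keystore.lastModified();
        if (mtime <= lastLoaded) {
            return false;          // unchanged since the last *successful* load
        }
        try {
            load();                // may throw while the file is being written
            lastLoaded = mtime;    // update state only on success
            return true;
        } catch (IOException e) {
            return false;          // lastLoaded untouched -> next call retries
        }
    }

    void load() throws IOException {
        // A real implementation would parse the keystore here.
    }
}
```

Because {{lastLoaded}} is left untouched on failure, the mtime comparison stays "stale" and the manager keeps retrying until a load finally succeeds.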



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13512) ReloadingX509TrustManager should keep reloading in case of exception

2016-08-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427207#comment-15427207
 ] 

Hudson commented on HADOOP-13512:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10302 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10302/])
HADOOP-13512. ReloadingX509TrustManager should keep reloading in case of 
(liuml07: rev 0f51eae0c085ded38216824377acf8122638c3a5)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/ReloadingX509TrustManager.java


> ReloadingX509TrustManager should keep reloading in case of exception
> 
>
> Key: HADOOP-13512
> URL: https://issues.apache.org/jira/browse/HADOOP-13512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Critical
> Fix For: 2.7.4
>
> Attachments: HADOOP-13512.000.patch
>
>
> {{org.apache.hadoop.security.ssl.ReloadingX509TrustManager}} checks the key 
> store file's last modified time to decide whether to reload. This avoids 
> unnecessary reloads when the key store file has not changed. To do this, it 
> maintains an internal {{lastLoaded}} timestamp, updated whenever it tries to 
> reload the file. Because {{lastLoaded}} is updated even when an exception is 
> thrown, a failing reload will not be retried until the key store file's last 
> modified time changes again.
> Chances are that the reload happens while the key store file is being 
> written. The reload fails (probably with an EOFException) and won't be 
> retried until the key store file's last modified time changes. Shortly 
> afterwards, the key store file is closed after the update; however, the last 
> modified time may not change if the close falls within the same timestamp 
> precision period (e.g. 1 second). In this case, the updated key store file is 
> never reloaded.
> A simple fix is to update {{lastLoaded}} only when the reload succeeds, so 
> that {{ReloadingX509TrustManager}} keeps retrying the reload after an 
> exception.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-7945) Document that Path objects do not support ":" in them.

2016-08-18 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-7945:

Target Version/s:   (was: 2.8.0)

No update on this very old documentation patch. Removing target versions; let's 
add them back for the next release once a patch is ready.

> Document that Path objects do not support ":" in them.
> --
>
> Key: HADOOP-7945
> URL: https://issues.apache.org/jira/browse/HADOOP-7945
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 0.20.0
>Reporter: Harsh J
>Priority: Critical
>  Labels: newbie
> Attachments: HADOOP-7945.patch
>
>
> Until HADOOP-3257 is fixed, this particular exclusion should be documented. 
> This is a major stumbling block for many beginners.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13503) Improve SaslRpcClient failure logging

2016-08-18 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-13503:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk, branch-2 and branch-2.8. Thanks [~xiaobingo] 
for the contribution. Thanks [~arpitagarwal] and [~ste...@apache.org] for the 
review.

> Improve SaslRpcClient failure logging
> -
>
> Key: HADOOP-13503
> URL: https://issues.apache.org/jira/browse/HADOOP-13503
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0
>
> Attachments: HADOOP-13503.000.patch, HADOOP-13503.001.patch, 
> HADOOP-13503.002.patch
>
>
> In SaslRpcClient#getServerPrincipal, only the server-advertised principal was 
> printed out. The actual principal we expect from configuration is quite 
> useful when debugging security-related issues, so it should also be logged.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13518) backport HADOOP-9258 to branch-2

2016-08-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427366#comment-15427366
 ] 

Hadoop QA commented on HADOOP-13518:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
32s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
22s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
59s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 24s{color} | {color:orange} root: The patch generated 10 new + 49 unchanged 
- 2 fixed = 59 total (was 51) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 47 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
19s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824449/HADOOP-13518-branch-2-001.patch
 |
| 

[jira] [Commented] (HADOOP-13518) backport HADOOP-9258 to branch-2

2016-08-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427225#comment-15427225
 ] 

Steve Loughran commented on HADOOP-13518:
-

Note that this is a cherry-pick and I'd like the code to match trunk as much as 
possible, so I haven't made changes I would otherwise make (to the logging in 
particular). The only diff between this and trunk is that I added "."s at the 
end of the javadoc first sentences, so as to keep Java 8 happy. That can be 
forward-ported to trunk to make sure they are both consistent.

> backport HADOOP-9258 to branch-2
> 
>
> Key: HADOOP-13518
> URL: https://issues.apache.org/jira/browse/HADOOP-13518
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs, fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13518-branch-2-001.patch
>
>
> I've just realised that HADOOP-9258 was never backported to branch-2. It went 
> into branch-1 and into trunk, but not into the bit in the middle.
> It adds:
> - more fs contract tests
> - s3 and s3n rename don't let you rename under yourself (and delete)
> I'm going to try to create a patch for this, though it'll be tricky given how 
> much things have moved around since then. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13428) Fix hadoop-common to generate jdiff

2016-08-18 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427318#comment-15427318
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-13428:
--

Glad to see progress on this. This approach looks good to me to work around the 
jdiff bug.

Can we use 2.7.2 as the base stable version instead of 2.7.3, since 2.7.3 is still 
under release?

Also, can you fix the whitespace issues before uploading the next patch?

> Fix hadoop-common to generate jdiff
> ---
>
> Key: HADOOP-13428
> URL: https://issues.apache.org/jira/browse/HADOOP-13428
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: HADOOP-13428.1.patch, HADOOP-13428.2.patch, 
> HADOOP-13428.3.patch, metric-system-temp-fix.patch
>
>
> Hadoop-common failed to generate JDiff. We need to fix that.






[jira] [Commented] (HADOOP-13503) Improve SaslRpcClient failure logging

2016-08-18 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427235#comment-15427235
 ] 

Jing Zhao commented on HADOOP-13503:


+1 as well. Will commit the patch shortly.

> Improve SaslRpcClient failure logging
> -
>
> Key: HADOOP-13503
> URL: https://issues.apache.org/jira/browse/HADOOP-13503
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13503.000.patch, HADOOP-13503.001.patch, 
> HADOOP-13503.002.patch
>
>
> In SaslRpcClient#getServerPrincipal, only the server-advertised principal was 
> printed. The principal we expect from configuration is quite useful when 
> debugging security-related issues, so it should also be logged.
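The value of logging both sides can be sketched with a minimal example. The class and method names below are illustrative only, not the actual SaslRpcClient internals; the point is that a message carrying both the server-advertised principal and the config-derived expected principal makes a mismatch immediately visible.

```java
// Hypothetical sketch of the diagnostic message; names are illustrative,
// not the real SaslRpcClient code.
public class PrincipalCheck {
    // Build a message that carries BOTH principals, so a mismatch
    // (e.g. from a DNS/config discrepancy) is obvious in the log.
    static String mismatchMessage(String advertised, String expected) {
        return "Server advertised principal '" + advertised
            + "' does not match the principal expected from configuration '"
            + expected + "'";
    }

    public static void main(String[] args) {
        String advertised = "nn/host1.example.com@EXAMPLE.COM";
        String expected   = "nn/namenode.example.com@EXAMPLE.COM";
        if (!advertised.equals(expected)) {
            System.out.println(mismatchMessage(advertised, expected));
        }
    }
}
```

With only the advertised principal logged, the operator has to go hunt for the expected value in the client configuration; including both in one line removes that step.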






[jira] [Commented] (HADOOP-13503) Improve SaslRpcClient failure logging

2016-08-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427274#comment-15427274
 ] 

Hudson commented on HADOOP-13503:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10303 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10303/])
HADOOP-13503. Improve SaslRpcClient failure logging. Contributed by (jing9: rev 
c5c3e81b49ae6ef0cf9022f90f3709166aa4488d)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java


> Improve SaslRpcClient failure logging
> -
>
> Key: HADOOP-13503
> URL: https://issues.apache.org/jira/browse/HADOOP-13503
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0
>
> Attachments: HADOOP-13503.000.patch, HADOOP-13503.001.patch, 
> HADOOP-13503.002.patch
>
>
> In SaslRpcClient#getServerPrincipal, only the server-advertised principal was 
> printed. The principal we expect from configuration is quite useful when 
> debugging security-related issues, so it should also be logged.






[jira] [Commented] (HADOOP-13504) Refactor jni_common to conform to C89 restrictions imposed by Visual Studio 2010

2016-08-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427562#comment-15427562
 ] 

Hudson commented on HADOOP-13504:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10304 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10304/])
HADOOP-13504. Refactor jni_common to conform to C89 restrictions imposed 
(kai.zheng: rev dbcaf999d9ea7a7c6c090903d1982e5b61200c8b)
* (edit) 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/jni_common.c


> Refactor jni_common to conform to C89 restrictions imposed by Visual Studio 
> 2010
> 
>
> Key: HADOOP-13504
> URL: https://issues.apache.org/jira/browse/HADOOP-13504
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: SammiChen
>Assignee: SammiChen
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13504-v1.patch
>
>
> A piece of code in jni_common declares variables after the first statement in a 
> function. This is not allowed by compilers such as Visual Studio 2010, which 
> only support the C89 standard. 






[jira] [Updated] (HADOOP-13428) Fix hadoop-common to generate jdiff

2016-08-18 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-13428:

Attachment: HADOOP-13428.4.patch

Attached ver.4 patch.

> Fix hadoop-common to generate jdiff
> ---
>
> Key: HADOOP-13428
> URL: https://issues.apache.org/jira/browse/HADOOP-13428
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: HADOOP-13428.1.patch, HADOOP-13428.2.patch, 
> HADOOP-13428.3.patch, HADOOP-13428.4.patch, metric-system-temp-fix.patch
>
>
> Hadoop-common failed to generate JDiff. We need to fix that.






[jira] [Updated] (HADOOP-13504) Refactor jni_common to conform to C89 restrictions imposed by Visual Studio 2010

2016-08-18 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-13504:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha1
   Status: Resolved  (was: Patch Available)

Committed to both 3.0.0-alpha1 and trunk branches. Thanks [~Sammi] for the 
contribution!

> Refactor jni_common to conform to C89 restrictions imposed by Visual Studio 
> 2010
> 
>
> Key: HADOOP-13504
> URL: https://issues.apache.org/jira/browse/HADOOP-13504
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: SammiChen
>Assignee: SammiChen
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13504-v1.patch
>
>
> A piece of code in jni_common declares variables after the first statement in a 
> function. This is not allowed by compilers such as Visual Studio 2010, which 
> only support the C89 standard. 






[jira] [Commented] (HADOOP-13428) Fix hadoop-common to generate jdiff

2016-08-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427559#comment-15427559
 ] 

Hadoop QA commented on HADOOP-13428:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1228 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  1m 
12s{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
9s{color} | {color:green} hadoop-project-dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
4s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824491/HADOOP-13428.4.patch |
| JIRA Issue | HADOOP-13428 |
| Optional Tests |  asflicense  xml  compile  javac  javadoc  mvninstall  
mvnsite  unit  |
| uname | Linux b4073cf4c95d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c5c3e81 |
| Default Java | 1.8.0_101 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10305/artifact/patchprocess/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10305/artifact/patchprocess/whitespace-tabs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10305/testReport/ |
| modules | C: hadoop-project-dist hadoop-common-project/hadoop-common U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10305/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HADOOP-13396) Add json format audit logging to KMS

2016-08-18 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427642#comment-15427642
 ] 

Xiao Chen commented on HADOOP-13396:


-As you can see, the text format audit log really has minimum information  
(shrug), but I understand compat is compat.-

> Add json format audit logging to KMS
> 
>
> Key: HADOOP-13396
> URL: https://issues.apache.org/jira/browse/HADOOP-13396
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13396.01.patch, HADOOP-13396.02.patch, 
> HADOOP-13396.03.patch, HADOOP-13396.04.patch, HADOOP-13396.05.patch, 
> HADOOP-13396.06.patch
>
>
> Currently, the KMS audit log uses log4j to write a text-format log.
> We should refactor this so that people can easily add audit logs in new 
> formats. The current text-format log should remain the default, and all of its 
> behavior should remain compatible.
> A JSON-format log extension is added using the refactored API, turned off by 
> default.






[jira] [Commented] (HADOOP-13396) Add json format audit logging to KMS

2016-08-18 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427641#comment-15427641
 ] 

Xiao Chen commented on HADOOP-13396:


Thanks Wei-Chiu and Andrew for the great reviews!
I will need more time to come back to the comments, but here's some sample 
output, pending rollback of the URL changes. (Generated by running the 
{{testAuditLogFormat}} tests for both formats; this is exactly what would appear 
in the actual audit log files.)

Text:
{noformat}
OK[op=GENERATE_EEK, key=k4, user=luser, accessCount=1, interval=1ms] testmsg
OK[op=GENERATE_EEK, user=luser] testmsg
OK[op=GENERATE_EEK, key=k4, user=luser, accessCount=1, interval=5ms] testmsg
UNAUTHORIZED[op=DECRYPT_EEK, key=k4, user=luser] 
ERROR[user=luser] Method:'method' Exception:'testmsg' url:'url'
UNAUTHENTICATED RemoteHost:remotehost Method:method URL:url ErrorMsg:'testmsg'
{noformat}

Json:
{noformat}
{"username":"luser","impersonator":"null!","ipAddress":"Unknown","operation":"GENERATE_EEK","eventTime":1471583567510,"allowed":true,"result":"OK","accessCount":"1","extraMessage":"testmsg","interval":"2","key":"k4"}
 
{"username":"luser","impersonator":"null!","ipAddress":"Unknown","operation":"GENERATE_EEK","eventTime":1471583567538,"allowed":true,"result":"OK","extraMessage":"testmsg"}
 
{"username":"luser","impersonator":"null!","ipAddress":"Unknown","operation":"GENERATE_EEK","eventTime":1471583568543,"allowed":true,"result":"OK","accessCount":"1","extraMessage":"testmsg","interval":"1035","key":"k4"}
 
{"username":"luser","impersonator":"null!","ipAddress":"Unknown","operation":"DECRYPT_EEK","eventTime":1471583568544,"allowed":false,"result":"UNAUTHORIZED","extraMessage":"","key":"k4"}
 
{"username":"luser","impersonator":"null!","ipAddress":"Unknown","operation":"Unknown","eventTime":1471583568544,"allowed":false,"result":"ERROR","extraMessage":"Method:'method'
 Exception:'testmsg' url:'url'"}
 
{"username":"null!","impersonator":"null!","ipAddress":"remotehost","operation":"Unknown","eventTime":1471583568545,"allowed":false,"result":"UNAUTHENTICATED","extraMessage":"RemoteHost:remotehost
 Method:method URL:url ErrorMsg:'testmsg'"}
{noformat}

> Add json format audit logging to KMS
> 
>
> Key: HADOOP-13396
> URL: https://issues.apache.org/jira/browse/HADOOP-13396
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13396.01.patch, HADOOP-13396.02.patch, 
> HADOOP-13396.03.patch, HADOOP-13396.04.patch, HADOOP-13396.05.patch, 
> HADOOP-13396.06.patch
>
>
> Currently, the KMS audit log uses log4j to write a text-format log.
> We should refactor this so that people can easily add audit logs in new 
> formats. The current text-format log should remain the default, and all of its 
> behavior should remain compatible.
> A JSON-format log extension is added using the refactored API, turned off by 
> default.






[jira] [Updated] (HADOOP-13518) backport HADOOP-9258 to branch-2

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13518:

Target Version/s: 2.9.0
  Status: Patch Available  (was: Open)

> backport HADOOP-9258 to branch-2
> 
>
> Key: HADOOP-13518
> URL: https://issues.apache.org/jira/browse/HADOOP-13518
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs, fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13518-branch-2-001.patch
>
>
> I've just realised that HADOOP-9258 was never backported to branch-2. It went 
> into branch-1 and into trunk, but not into the bit in the middle.
> It adds:
> - more fs contract tests
> - s3 and s3n rename don't let you rename under yourself (and delete)
> I'm going to try to create a patch for this, though it'll be tricky given how 
> much things have moved around since then. 






[jira] [Updated] (HADOOP-13518) backport HADOOP-9258 to branch-2

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13518:

Attachment: HADOOP-13518-branch-2-001.patch

Patch 001

Cherry-picks the changes to the S3 native rename() and all the extra tests in 
FileSystemContractBaseTest.

Tested against: S3 Ireland, plus the azure, hdfs, webhdfs and swift contract 
tests.

> backport HADOOP-9258 to branch-2
> 
>
> Key: HADOOP-13518
> URL: https://issues.apache.org/jira/browse/HADOOP-13518
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs, fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13518-branch-2-001.patch
>
>
> I've just realised that HADOOP-9258 was never backported to branch-2. It went 
> into branch-1 and into trunk, but not into the bit in the middle.
> It adds:
> - more fs contract tests
> - s3 and s3n rename don't let you rename under yourself (and delete)
> I'm going to try to create a patch for this, though it'll be tricky given how 
> much things have moved around since then. 






[jira] [Commented] (HADOOP-7363) TestRawLocalFileSystemContract is needed

2016-08-18 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426831#comment-15426831
 ] 

Anu Engineer commented on HADOOP-7363:
--

[~boky01] Thanks for updating the patch. But I see that we still catch the 
UnsupportedOperationException and then continue the test, which seems a little 
strange to me, though we do log a WARN. I am sure I am missing something here; 
could you please take a moment to explain why the test does this?



> TestRawLocalFileSystemContract is needed
> 
>
> Key: HADOOP-7363
> URL: https://issues.apache.org/jira/browse/HADOOP-7363
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Matt Foley
>Assignee: Andras Bokor
> Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch, 
> HADOOP-7363.03.patch, HADOOP-7363.04.patch
>
>
> FileSystemContractBaseTest is supposed to be run with each concrete 
> FileSystem implementation to ensure adherence to the "contract" for 
> FileSystem behavior. However, currently only HDFS and S3 do so. 
> RawLocalFileSystem, at least, needs to be added. 






[jira] [Commented] (HADOOP-13510) "hadoop fs -getmerge" docs, .../dir does not work, .../dir/* works.

2016-08-18 Thread David Sidlo (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426869#comment-15426869
 ] 

David Sidlo commented on HADOOP-13510:
--

The issue may depend on the total dataset size.

The following command does not work, but will work with "/*" appended. The 
resulting merged file is 4 GB, with 17 files merged (when it works).
> hdfs dfs -getmerge hdfs://production/apps/hive/warehouse/dgs_tmp.db xxx

 1013  hdfs dfs -getmerge hdfs://production/apps/hive/warehouse/dgs_tmp.db/* xxx
 1019  hdfs dfs -getmerge hdfs://production/user/ds_dsidlo xxx
 1028* hdfs dfs -getmerge hdfs://production/tmp/ds_dsidlo
 1029  hdfs dfs -getmerge hdfs://production/tmp/ds_dsidlo.xx xxx


The following works, but the resulting file is only 1 KB. 
> hdfs dfs -getmerge hdfs://production/user/ds_dsidlo xxx

 1013  hdfs dfs -getmerge hdfs://production/apps/hive/warehouse/dgs_tmp.db/* xxx
 1019  hdfs dfs -getmerge hdfs://production/user/ds_dsidlo xxx
 1028* hdfs dfs -getmerge hdfs://production/tmp/ds_dsidlo
 1029  hdfs dfs -getmerge hdfs://production/tmp/ds_dsidlo.xx xxx


So, it may be that the total data set size makes a difference.


> "hadoop fs -getmerge" docs, .../dir does not work, .../dir/* works.
> ---
>
> Key: HADOOP-13510
> URL: https://issues.apache.org/jira/browse/HADOOP-13510
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
> Environment: HDP 2.4.2
>Reporter: David Sidlo
>Priority: Minor
>  Labels: dfs, fs, getmerge, hadoop, hdfs
>
> Docs indicate that the following command would work...
>hadoop fs -getmerge -nl /src /opt/output.txt
> For me, it results in a zero-length file /opt/output.txt.
> But the following does work...
>hadoop fs -getmerge -nl /src/* /opt/output.txt






[jira] [Commented] (HADOOP-7363) TestRawLocalFileSystemContract is needed

2016-08-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426882#comment-15426882
 ] 

Hadoop QA commented on HADOOP-7363:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
42s{color} | {color:green} root generated 0 new + 709 unchanged - 1 fixed = 709 
total (was 710) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
3s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824403/HADOOP-7363.04.patch |
| JIRA Issue | HADOOP-7363 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 27db1e9ad139 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0da69c3 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10298/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10298/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestRawLocalFileSystemContract is needed
> 
>
> Key: HADOOP-7363
> URL: https://issues.apache.org/jira/browse/HADOOP-7363
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Matt Foley
>Assignee: Andras Bokor
> Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch, 
> HADOOP-7363.03.patch, HADOOP-7363.04.patch
>
>

[jira] [Updated] (HADOOP-13105) Support timeouts in LDAP queries in LdapGroupsMapping.

2016-08-18 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13105:
---
Release Note: This patch adds two new config keys for supporting timeouts 
in LDAP query operations. The property 
"hadoop.security.group.mapping.ldap.connection.timeout.ms" is the connection 
timeout (in milliseconds): if the LDAP provider does not establish a connection 
within this period, it aborts the connect attempt. The property 
"hadoop.security.group.mapping.ldap.read.timeout.ms" is the read timeout (in 
milliseconds): if the LDAP provider does not receive an LDAP response within 
this period, it aborts the read attempt.

Added release notes. Feel free to refine them. Thanks.
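These keys are ordinary Hadoop configuration properties, so they would be set in core-site.xml like any other. The timeout values below are examples only, not recommended defaults:

```xml
<!-- Illustrative core-site.xml fragment: values are examples only. -->
<property>
  <name>hadoop.security.group.mapping.ldap.connection.timeout.ms</name>
  <value>60000</value>  <!-- abort LDAP connect attempts after 60s -->
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.read.timeout.ms</name>
  <value>60000</value>  <!-- abort individual LDAP reads after 60s -->
</property>
```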

> Support timeouts in LDAP queries in LdapGroupsMapping.
> --
>
> Key: HADOOP-13105
> URL: https://issues.apache.org/jira/browse/HADOOP-13105
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13105.000.patch, HADOOP-13105.001.patch, 
> HADOOP-13105.002.patch, HADOOP-13105.003.patch, HADOOP-13105.004.patch
>
>
> {{LdapGroupsMapping}} currently does not set timeouts on the LDAP queries.  
> This can create a risk of a very long/infinite wait on a connection.






[jira] [Commented] (HADOOP-13388) Clean up TestLocalFileSystemPermission

2016-08-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426824#comment-15426824
 ] 

Hadoop QA commented on HADOOP-13388:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 0 unchanged - 11 fixed = 0 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
2s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824398/HADOOP-13388.02.patch 
|
| JIRA Issue | HADOOP-13388 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3e8dd6d0b420 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0da69c3 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10297/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10297/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10297/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Clean up TestLocalFileSystemPermission
> --
>
> Key: HADOOP-13388
> URL: https://issues.apache.org/jira/browse/HADOOP-13388
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Andras Bokor
> 

[jira] [Comment Edited] (HADOOP-13510) "hadoop fs -getmerge" docs, .../dir does not work, .../dir/* works.

2016-08-18 Thread David Sidlo (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426869#comment-15426869
 ] 

David Sidlo edited comment on HADOOP-13510 at 8/18/16 5:43 PM:
---

The issue may depend on the total dataset size.

The following command does not work, but does work once "/*" is appended. When 
it succeeds, 17 files are merged into a 4 GB file.
> hdfs dfs -getmerge hdfs://production/apps/hive/warehouse/dgs_tmp.db xxx

[ds_dsidlo@prdslsldsafht11 ~]$ hdfs dfs -ls 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/*
Found 17 items
-rw-r--r--   3 ds_dsidlo hdfs  275883517 2015-12-11 15:04 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints/00_0
-rw-r--r--   3 ds_dsidlo hdfs  273756223 2015-12-11 15:04 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints/01_0
-rw-r--r--   3 ds_dsidlo hdfs  141912289 2015-12-11 15:04 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints/02_0
-rw-r--r--   3 ds_dsidlo hdfs  141916055 2015-12-11 15:04 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints/03_0
-rw-r--r--   3 ds_dsidlo hdfs  141912300 2015-12-11 15:04 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints/04_0
-rw-r--r--   3 ds_dsidlo hdfs  141913088 2015-12-11 15:04 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints/05_0
-rw-r--r--   3 ds_dsidlo hdfs  141914384 2015-12-11 15:04 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints/06_0
-rw-r--r--   3 ds_dsidlo hdfs  141915583 2015-12-11 15:04 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints/07_0
-rw-r--r--   3 ds_dsidlo hdfs  141912741 2015-12-11 15:04 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints/08_0
-rw-r--r--   3 ds_dsidlo hdfs  131615833 2015-12-11 15:04 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints/09_0
-rw-r--r--   3 ds_dsidlo hdfs  130790330 2015-12-11 15:04 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints/10_0
-rw-r--r--   3 ds_dsidlo hdfs  130257009 2015-12-11 15:04 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints/11_0
-rw-r--r--   3 ds_dsidlo hdfs  129981971 2015-12-11 15:04 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints/12_0
-rw-r--r--   3 ds_dsidlo hdfs  129647880 2015-12-11 15:04 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints/13_0
-rw-r--r--   3 ds_dsidlo hdfs  129046552 2015-12-11 15:04 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints/14_0
-rw-r--r--   3 ds_dsidlo hdfs  128769076 2015-12-11 15:04 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints/15_0
-rw-r--r--   3 ds_dsidlo hdfs   72740496 2015-12-11 15:04 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints/16_0
Found 3 items
-rw-r--r--   3 ds_dsidlo hdfs   26091915 2015-12-09 15:06 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints_fin/00_0
-rw-r--r--   3 ds_dsidlo hdfs   26061567 2015-12-09 15:06 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints_fin/01_0
-rw-r--r--   3 ds_dsidlo hdfs   26117465 2015-12-09 15:06 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints_fin/02_0
Found 10 items
-rw-r--r--   3 ds_dsidlo hdfs  260920570 2015-12-11 14:39 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints_test/00_0
-rw-r--r--   3 ds_dsidlo hdfs  258917310 2015-12-11 14:39 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints_test/01_0
-rw-r--r--   3 ds_dsidlo hdfs  258702653 2015-12-11 14:38 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints_test/02_0
-rw-r--r--   3 ds_dsidlo hdfs  257919368 2015-12-11 14:39 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints_test/03_0
-rw-r--r--   3 ds_dsidlo hdfs  257411938 2015-12-11 14:38 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints_test/04_0
-rw-r--r--   3 ds_dsidlo hdfs  257154203 2015-12-11 14:39 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints_test/05_0
-rw-r--r--   3 ds_dsidlo hdfs  256840629 2015-12-11 14:39 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints_test/06_0
-rw-r--r--   3 ds_dsidlo hdfs  256269772 2015-12-11 14:39 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints_test/07_0
-rw-r--r--   3 ds_dsidlo hdfs  256005409 2015-12-11 14:38 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints_test/08_0
-rw-r--r--   3 ds_dsidlo hdfs   68796269 2015-12-11 14:38 
hdfs://production/apps/hive/warehouse/dgs_tmp.db/hints_test/09_0

The following works, but the resulting file is only 1 KB. 
> hdfs dfs -getmerge hdfs://production/user/ds_dsidlo xxx

[ds_dsidlo@prdslsldsafht11 ~]$ hdfs dfs -ls hdfs://production/user/ds_dsidlo
Found 10 items
drwx--   - ds_dsidlo ds_dsidlo  0 2015-12-18 05:00 
hdfs://production/user/ds_dsidlo/.Trash
drwxr-xr-x   - ds_dsidlo ds_dsidlo  0 2016-07-01 18:06 
hdfs://production/user/ds_dsidlo/.hiveJars
drwxr-xr-x   - ds_dsidlo ds_dsidlo  0 2016-07-18 10:57 
hdfs://production/user/ds_dsidlo/.sparkStaging

[jira] [Updated] (HADOOP-13513) Java 1.7 support for org.apache.hadoop.fs.azure testcases

2016-08-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13513:
---
Affects Version/s: (was: 2.9.0)

> Java 1.7 support for org.apache.hadoop.fs.azure testcases
> -
>
> Key: HADOOP-13513
> URL: https://issues.apache.org/jira/browse/HADOOP-13513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure
>Reporter: Tibor Kiss
>Assignee: Tibor Kiss
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13513-001.patch
>
>
> Recent improvement on AzureNativeFileSystem rename/delete performance 
> (HADOOP-13403) yielded a test change  (HADOOP-13459) which is incompatible 
> with Java 1.7. 
> If one tries to include those patches in a Java 1.7 compatible Hadoop tree 
> (e.g. 2.7.x) the following error occurs during test run:
> {code}
> initializationError(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemClientLogging)
>   Time elapsed: 0.001 sec  <<< ERROR!
> java.lang.Exception: Class org.apache.hadoop.fs.azure.AbstractWasbTestBase 
> should be public
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoid(FrameworkMethod.java:91)
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoidNoArg(FrameworkMethod.java:70)
>   at 
> org.junit.runners.ParentRunner.validatePublicVoidNoArgMethods(ParentRunner.java:133)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.validateInstanceMethods(BlockJUnit4ClassRunner.java:165)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.collectInitializationErrors(BlockJUnit4ClassRunner.java:104)
>   at org.junit.runners.ParentRunner.validate(ParentRunner.java:355)
>   at org.junit.runners.ParentRunner.(ParentRunner.java:76)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.(BlockJUnit4ClassRunner.java:57)
>   at 
> org.junit.internal.builders.JUnit4Builder.runnerForClass(JUnit4Builder.java:10)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}
> The problem can be resolved by setting {{AbstractWasbTestBase}} to {{public}}.
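The constraint behind that error can be illustrated standalone: JUnit 4's 
runner inspects runtime modifiers and requires the class declaring a test 
method to be public, which a package-private base class breaks. The class 
names below are invented for the sketch; the real class is 
{{AbstractWasbTestBase}}.

```java
import java.lang.reflect.Modifier;

// Invented classes illustrating the JUnit 4 visibility requirement that the
// patch addresses; they stand in for the package-private (pre-patch) and
// public (post-patch) versions of the abstract test base class.
public class VisibilityDemo {
    static abstract class PackagePrivateBase { }  // like the pre-patch base
    public static abstract class PublicBase { }   // like the patched base

    public static void main(String[] args) {
        // JUnit's validation boils down to a check like Modifier.isPublic(...)
        System.out.println(
            Modifier.isPublic(PackagePrivateBase.class.getModifiers())); // false
        System.out.println(
            Modifier.isPublic(PublicBase.class.getModifiers()));         // true
    }
}
```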






[jira] [Updated] (HADOOP-13513) Java 1.7 support for org.apache.hadoop.fs.azure testcases

2016-08-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13513:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: (was: 2.9.0)
   2.8.0
   Status: Resolved  (was: Patch Available)

+1 for the patch.  I ran all tests successfully against an Azure Storage 
account in West US, using both JDK 8 and JDK 7.  I have committed this to 
trunk, branch-2 and branch-2.8.  [~tibor.k...@gmail.com], thank you for fixing 
a bug I introduced.  [~ste...@apache.org], thank you for help with the code 
review.

> Java 1.7 support for org.apache.hadoop.fs.azure testcases
> -
>
> Key: HADOOP-13513
> URL: https://issues.apache.org/jira/browse/HADOOP-13513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure
>Reporter: Tibor Kiss
>Assignee: Tibor Kiss
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13513-001.patch
>
>
> Recent improvement on AzureNativeFileSystem rename/delete performance 
> (HADOOP-13403) yielded a test change  (HADOOP-13459) which is incompatible 
> with Java 1.7. 
> If one tries to include those patches in a Java 1.7 compatible Hadoop tree 
> (e.g. 2.7.x) the following error occurs during test run:
> {code}
> initializationError(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemClientLogging)
>   Time elapsed: 0.001 sec  <<< ERROR!
> java.lang.Exception: Class org.apache.hadoop.fs.azure.AbstractWasbTestBase 
> should be public
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoid(FrameworkMethod.java:91)
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoidNoArg(FrameworkMethod.java:70)
>   at 
> org.junit.runners.ParentRunner.validatePublicVoidNoArgMethods(ParentRunner.java:133)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.validateInstanceMethods(BlockJUnit4ClassRunner.java:165)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.collectInitializationErrors(BlockJUnit4ClassRunner.java:104)
>   at org.junit.runners.ParentRunner.validate(ParentRunner.java:355)
>   at org.junit.runners.ParentRunner.(ParentRunner.java:76)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.(BlockJUnit4ClassRunner.java:57)
>   at 
> org.junit.internal.builders.JUnit4Builder.runnerForClass(JUnit4Builder.java:10)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}
> The problem can be resolved by setting {{AbstractWasbTestBase}} to {{public}}.






[jira] [Updated] (HADOOP-12858) Reduce UGI getGroups overhead

2016-08-18 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-12858:
-
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> Reduce UGI getGroups overhead
> -
>
> Key: HADOOP-12858
> URL: https://issues.apache.org/jira/browse/HADOOP-12858
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-12858.patch, HADOOP-12858.patch
>
>
> Group lookup generates excessive garbage with multiple conversions between 
> collections and arrays.






[jira] [Updated] (HADOOP-13252) Tune S3A provider plugin mechanism

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13252:

Attachment: HADOOP-13252-006.patch

Patch 006

Fixed to work against trunk. The issue is that because trunk returns the 
default value correctly, we can't pass in an interface there. Instead, 
{{Configuration.getInstances()}} was copied into {{S3AUtils}} and tweaked to 
resolve only the list of classes (with an assignability check) rather than 
perform the full instantiation.

* adds tests to generate and validate failure modes
* factors out the error-text strings looked for in the tests
* also fixes test teardown to handle a null file context, including in the 
hadoop-common base class

Tested: S3 Ireland
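A rough, standalone sketch of that "list of classes with an assignability 
check" approach; the names are invented for illustration, and this is not the 
actual {{S3AUtils}} or {{Configuration}} code:

```java
import java.util.ArrayList;
import java.util.List;

// Resolve a comma-separated list of class names, verifying each class is
// assignable to the expected interface, without instantiating any of them.
public class ClassListDemo {

    public static List<Class<?>> loadClasses(String csv, Class<?> xface) {
        List<Class<?>> result = new ArrayList<>();
        for (String name : csv.split(",")) {
            try {
                Class<?> c = Class.forName(name.trim());
                if (!xface.isAssignableFrom(c)) {
                    throw new IllegalArgumentException(
                        c.getName() + " is not assignable to " + xface.getName());
                }
                result.add(c);
            } catch (ClassNotFoundException e) {
                throw new IllegalArgumentException("No such class: " + name, e);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Class<?>> classes =
            loadClasses("java.util.ArrayList, java.util.LinkedList", List.class);
        System.out.println(classes.size()); // prints 2
    }
}
```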

> Tune S3A provider plugin mechanism
> --
>
> Key: HADOOP-13252
> URL: https://issues.apache.org/jira/browse/HADOOP-13252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-006.patch, HADOOP-13252-branch-2-001.patch, 
> HADOOP-13252-branch-2-003.patch, HADOOP-13252-branch-2-004.patch, 
> HADOOP-13252-branch-2-005.patch
>
>
> We've now got some fairly complex auth mechanisms going on: hadoop config, 
> KMS, env vars, "none". If something isn't working, it's going to be a lot 
> harder to debug.
> Review and tune the S3A provider extension point:
> * add logging of what's going on in S3 auth to help debug problems
> * make a whole chain of logins expressible
> * allow the anonymous credentials to be included in the list
> * review and update the documents.
> I propose *carefully* adding some debug messages to identify which auth 
> provider is doing the auth, so we can see whether the env vars were kicking 
> in, sysprops, etc.
> What we mustn't do is leak any secrets: this should identify whether 
> properties and env vars are set, not what their values are. I don't believe 
> this will create a security risk.
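The "log whether it's set, never its value" idea above can be sketched with a 
hypothetical helper; this is not actual S3A code:

```java
// Hypothetical helper illustrating the comment above: report the presence
// of a credential property without ever printing its value.
public class SafeLogDemo {

    /** Describe a property as "set" or "unset"; the value itself never
     *  appears in the returned string. */
    static String describe(String key, String value) {
        return key + ": " + ((value == null || value.isEmpty()) ? "unset" : "set");
    }

    public static void main(String[] args) {
        System.out.println(describe("fs.s3a.access.key", "AKIA-example"));
        System.out.println(describe("fs.s3a.secret.key", null));
    }
}
```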






[jira] [Updated] (HADOOP-13252) Tune S3A provider plugin mechanism

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13252:

Status: Patch Available  (was: Open)

> Tune S3A provider plugin mechanism
> --
>
> Key: HADOOP-13252
> URL: https://issues.apache.org/jira/browse/HADOOP-13252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-006.patch, HADOOP-13252-branch-2-001.patch, 
> HADOOP-13252-branch-2-003.patch, HADOOP-13252-branch-2-004.patch, 
> HADOOP-13252-branch-2-005.patch
>
>
> We've now got some fairly complex auth mechanisms going on: hadoop config, 
> KMS, env vars, "none". If something isn't working, it's going to be a lot 
> harder to debug.
> Review and tune the S3A provider extension point:
> * add logging of what's going on in S3 auth to help debug problems
> * make a whole chain of logins expressible
> * allow the anonymous credentials to be included in the list
> * review and update the documents.
> I propose *carefully* adding some debug messages to identify which auth 
> provider is doing the auth, so we can see whether the env vars were kicking 
> in, sysprops, etc.
> What we mustn't do is leak any secrets: this should identify whether 
> properties and env vars are set, not what their values are. I don't believe 
> this will create a security risk.






[jira] [Commented] (HADOOP-13513) Java 1.7 support for org.apache.hadoop.fs.azure testcases

2016-08-18 Thread Tibor Kiss (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426936#comment-15426936
 ] 

Tibor Kiss commented on HADOOP-13513:
-

Thanks [~cnauroth] and [~ste...@apache.org] for the prompt review!

> Java 1.7 support for org.apache.hadoop.fs.azure testcases
> -
>
> Key: HADOOP-13513
> URL: https://issues.apache.org/jira/browse/HADOOP-13513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure
>Reporter: Tibor Kiss
>Assignee: Tibor Kiss
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13513-001.patch
>
>
> Recent improvement on AzureNativeFileSystem rename/delete performance 
> (HADOOP-13403) yielded a test change  (HADOOP-13459) which is incompatible 
> with Java 1.7. 
> If one tries to include those patches in a Java 1.7 compatible Hadoop tree 
> (e.g. 2.7.x) the following error occurs during test run:
> {code}
> initializationError(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemClientLogging)
>   Time elapsed: 0.001 sec  <<< ERROR!
> java.lang.Exception: Class org.apache.hadoop.fs.azure.AbstractWasbTestBase 
> should be public
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoid(FrameworkMethod.java:91)
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoidNoArg(FrameworkMethod.java:70)
>   at 
> org.junit.runners.ParentRunner.validatePublicVoidNoArgMethods(ParentRunner.java:133)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.validateInstanceMethods(BlockJUnit4ClassRunner.java:165)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.collectInitializationErrors(BlockJUnit4ClassRunner.java:104)
>   at org.junit.runners.ParentRunner.validate(ParentRunner.java:355)
>   at org.junit.runners.ParentRunner.(ParentRunner.java:76)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.(BlockJUnit4ClassRunner.java:57)
>   at 
> org.junit.internal.builders.JUnit4Builder.runnerForClass(JUnit4Builder.java:10)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}
> The problem can be resolved by setting {{AbstractWasbTestBase}} to {{public}}.






[jira] [Created] (HADOOP-13517) TestS3NContractRootDir.testRecursiveRootListing failing

2016-08-18 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13517:
---

 Summary: TestS3NContractRootDir.testRecursiveRootListing failing
 Key: HADOOP-13517
 URL: https://issues.apache.org/jira/browse/HADOOP-13517
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.0.0-alpha2
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor


While running the s3a tests against trunk, one of the S3N tests, 
{{TestS3NContractRootDir.testRecursiveRootListing}}, failed. 

This may be a failure of recursive listing of an empty root directory; it is 
transient because deletion inconsistencies mean the problem doesn't
always surface.






[jira] [Commented] (HADOOP-13513) Java 1.7 support for org.apache.hadoop.fs.azure testcases

2016-08-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426945#comment-15426945
 ] 

Hudson commented on HADOOP-13513:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10301 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10301/])
HADOOP-13513. Java 1.7 support for org.apache.hadoop.fs.azure testcases. 
(cnauroth: rev ae4db2544346370404826d5b55b2678f5f92fe1f)
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/AbstractWasbTestBase.java


> Java 1.7 support for org.apache.hadoop.fs.azure testcases
> -
>
> Key: HADOOP-13513
> URL: https://issues.apache.org/jira/browse/HADOOP-13513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure
>Reporter: Tibor Kiss
>Assignee: Tibor Kiss
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13513-001.patch
>
>
> Recent improvement on AzureNativeFileSystem rename/delete performance 
> (HADOOP-13403) yielded a test change  (HADOOP-13459) which is incompatible 
> with Java 1.7. 
> If one tries to include those patches in a Java 1.7 compatible Hadoop tree 
> (e.g. 2.7.x) the following error occurs during test run:
> {code}
> initializationError(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemClientLogging)
>   Time elapsed: 0.001 sec  <<< ERROR!
> java.lang.Exception: Class org.apache.hadoop.fs.azure.AbstractWasbTestBase 
> should be public
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoid(FrameworkMethod.java:91)
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoidNoArg(FrameworkMethod.java:70)
>   at 
> org.junit.runners.ParentRunner.validatePublicVoidNoArgMethods(ParentRunner.java:133)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.validateInstanceMethods(BlockJUnit4ClassRunner.java:165)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.collectInitializationErrors(BlockJUnit4ClassRunner.java:104)
>   at org.junit.runners.ParentRunner.validate(ParentRunner.java:355)
>   at org.junit.runners.ParentRunner.(ParentRunner.java:76)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.(BlockJUnit4ClassRunner.java:57)
>   at 
> org.junit.internal.builders.JUnit4Builder.runnerForClass(JUnit4Builder.java:10)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}
> The problem can be resolved by setting {{AbstractWasbTestBase}} to {{public}}.






[jira] [Created] (HADOOP-13513) Java 1.7 support for org.apache.hadoop.fs.azure testcases

2016-08-18 Thread Tibor Kiss (JIRA)
Tibor Kiss created HADOOP-13513:
---

 Summary: Java 1.7 support for org.apache.hadoop.fs.azure testcases
 Key: HADOOP-13513
 URL: https://issues.apache.org/jira/browse/HADOOP-13513
 Project: Hadoop Common
  Issue Type: Bug
  Components: azure
Affects Versions: 2.9.0
Reporter: Tibor Kiss
Assignee: Tibor Kiss
Priority: Minor
 Fix For: 2.9.0


Recent improvement on AzureNativeFileSystem rename/delete performance 
(HADOOP-13403) yielded a test change  (HADOOP-13459) which is incompatible with 
Java 1.7. 

If one tries to include those patches in a Java 1.7 compatible Hadoop tree 
(e.g. 2.7.x) the following error occurs during test run:
{code}
initializationError(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemClientLogging)
  Time elapsed: 0.001 sec  <<< ERROR!
java.lang.Exception: Class org.apache.hadoop.fs.azure.AbstractWasbTestBase 
should be public
at 
org.junit.runners.model.FrameworkMethod.validatePublicVoid(FrameworkMethod.java:91)
at 
org.junit.runners.model.FrameworkMethod.validatePublicVoidNoArg(FrameworkMethod.java:70)
at 
org.junit.runners.ParentRunner.validatePublicVoidNoArgMethods(ParentRunner.java:133)
at 
org.junit.runners.BlockJUnit4ClassRunner.validateInstanceMethods(BlockJUnit4ClassRunner.java:165)
at 
org.junit.runners.BlockJUnit4ClassRunner.collectInitializationErrors(BlockJUnit4ClassRunner.java:104)
at org.junit.runners.ParentRunner.validate(ParentRunner.java:355)
at org.junit.runners.ParentRunner.(ParentRunner.java:76)
at 
org.junit.runners.BlockJUnit4ClassRunner.(BlockJUnit4ClassRunner.java:57)
at 
org.junit.internal.builders.JUnit4Builder.runnerForClass(JUnit4Builder.java:10)
at 
org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
at 
org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
at 
org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
at 
org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{code}

The problem can be resolved by setting {{AbstractWasbTestBase}} to {{public}}.






[jira] [Commented] (HADOOP-12554) Swift client to read credentials from a credential provider

2016-08-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426149#comment-15426149
 ] 

Steve Loughran commented on HADOOP-12554:
-

+ change to docs.

I was thinking we couldn't force a test to use credentials, but actually we 
can: you need a test case which takes the secrets via this method, saves them 
to a credential file, then loads a new FS instance through a config set to use 
those credentials. Larry: we could also do this for S3 and Azure, BTW.

Regarding the code: the username should go into the credentials file too, so 
that you can have a file with both the user and the password.
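The round-trip test described above — save the secrets to a credential file, then have a freshly loaded configuration read them back — can be sketched with `java.util.Properties` standing in for Hadoop's credential provider store. All file names and property keys below are made up for illustration; a real test would go through the Hadoop credential provider API.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Sketch of the proposed test flow; Properties is a stand-in for a real
// credential provider store, and all key names are illustrative.
class SwiftCredentialRoundTrip {
    static Path saveCredentials(String user, String password) throws IOException {
        Properties creds = new Properties();
        // Per the review comment: store the username alongside the password.
        creds.setProperty("fs.swift.service.test.username", user);
        creds.setProperty("fs.swift.service.test.password", password);
        Path file = Files.createTempFile("swift-creds", ".properties");
        try (OutputStream out = Files.newOutputStream(file)) {
            creds.store(out, "stand-in credential file");
        }
        return file;
    }

    static Properties loadFreshConfig(Path file) throws IOException {
        // A new "FS instance" reads its secrets from the file rather than
        // from the in-memory configuration it was created with.
        Properties conf = new Properties();
        try (InputStream in = Files.newInputStream(file)) {
            conf.load(in);
        }
        return conf;
    }

    public static void main(String[] args) throws IOException {
        Path f = saveCredentials("demo-user", "demo-secret");
        Properties conf = loadFreshConfig(f);
        System.out.println(conf.getProperty("fs.swift.service.test.username"));
        Files.delete(f);
    }
}
```

The assertion in the real test would then be that the new FS instance authenticates using only what came out of the credential file.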

> Swift client to read credentials from a credential provider
> ---
>
> Key: HADOOP-12554
> URL: https://issues.apache.org/jira/browse/HADOOP-12554
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: ramtin
>Priority: Minor
> Attachments: HADOOP-12554.001.patch
>
>
> As HADOOP-12548 is going to do for s3, Swift should be reading credentials, 
> particularly passwords, from a credential provider. 






[jira] [Updated] (HADOOP-13512) ReloadingX509TrustManager should keep reloading in case of exception

2016-08-18 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13512:
---
Affects Version/s: (was: 2.8.0)
 Target Version/s: 2.7.4  (was: 2.8.0)
   Status: Patch Available  (was: Open)

> ReloadingX509TrustManager should keep reloading in case of exception
> 
>
> Key: HADOOP-13512
> URL: https://issues.apache.org/jira/browse/HADOOP-13512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13512.000.patch
>
>
> {{org.apache.hadoop.security.ssl.ReloadingX509TrustManager}} checks the key 
> store file's last modified time to decide whether to reload. This avoids 
> unnecessary reloads when the key store file has not changed. To do this, it 
> records an internal state {{lastLoaded}} whenever it tries to reload the 
> file. It also updates {{lastLoaded}} in case of exception, so a failing 
> reload will not be retried until the key store file's last modified time 
> changes again.
> The reload may happen while the key store file is still being written. The 
> reload then fails (probably with an EOFException) and won't be retried until 
> the key store file's last modified time changes. Shortly afterwards the 
> updated key store file is closed, but its last modified time may not change 
> if the write finishes within the same timestamp-precision period (e.g. 1 
> second). In this case the updated key store file is never reloaded.
> A simple fix is to update the {{lastLoaded}} only when the reload succeeds. 
> {{ReloadingX509TrustManager}} will keep reloading in case of exception.
> Thoughts?






[jira] [Updated] (HADOOP-13508) FsPermission's string constructor fails on valid permissions like "1777"

2016-08-18 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HADOOP-13508:
--
Attachment: HADOOP-13508-2.patch

> FsPermission's string constructor fails on valid permissions like "1777"
> 
>
> Key: HADOOP-13508
> URL: https://issues.apache.org/jira/browse/HADOOP-13508
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-13508-1.patch, HADOOP-13508-2.patch
>
>
> FsPermission's string constructor breaks on valid permission strings like 
> "1777". 
> This is because the FsPermission class naïvely uses UmaskParser to parse 
> permissions (from the source code):
> {code}
> public FsPermission(String mode) {
>   this((new UmaskParser(mode)).getUMask());
> }
> {code}
> The mode string UmaskParser accepts is subtly different (especially with 
> respect to the sticky bit), so parsing an umask is not the same as parsing 
> an FsPermission. 
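A permission-mode parser, unlike an umask parser, must accept an optional leading sticky/setuid/setgid digit. A minimal standalone sketch of octal parsing that handles "1777" (this is not Hadoop's actual FsPermission implementation; the class and method names are illustrative):

```java
// Minimal octal permission parser accepting an optional fourth leading digit,
// e.g. "1777" -> sticky bit set plus rwxrwxrwx. Not Hadoop's actual code.
class OctalPermission {
    static int parse(String mode) {
        if (!mode.matches("[0-7]{3,4}")) {
            throw new IllegalArgumentException("invalid mode: " + mode);
        }
        // A 4th leading digit carries the sticky/setuid/setgid bits.
        return Integer.parseInt(mode, 8);
    }

    static boolean stickyBit(int perm) {
        return (perm & 01000) != 0; // octal 1000 is the sticky bit
    }

    public static void main(String[] args) {
        int p = parse("1777");
        System.out.println(stickyBit(p));             // true
        System.out.println(Integer.toOctalString(p)); // 1777
    }
}
```

The point of the bug report is exactly this: a mode parser treats "1777" as sticky + 0777, while an umask parser interprets the digits differently, so reusing UmaskParser rejects or misreads valid modes.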






[jira] [Commented] (HADOOP-13513) Java 1.7 support for org.apache.hadoop.fs.azure testcases

2016-08-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426133#comment-15426133
 ] 

Steve Loughran commented on HADOOP-13513:
-

Tibor: this looks simple, but even for a one-liner we need to follow the 
object store patch policy: which Azure endpoint did you test against?

> Java 1.7 support for org.apache.hadoop.fs.azure testcases
> -
>
> Key: HADOOP-13513
> URL: https://issues.apache.org/jira/browse/HADOOP-13513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure
>Affects Versions: 2.9.0
>Reporter: Tibor Kiss
>Assignee: Tibor Kiss
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-13513-001.patch
>
>
> Recent improvement on AzureNativeFileSystem rename/delete performance 
> (HADOOP-13403) yielded a test change  (HADOOP-13459) which is incompatible 
> with Java 1.7. 
> If one tries to include those patches in a Java 1.7 compatible Hadoop tree 
> (e.g. 2.7.x) the following error occurs during test run:
> {code}
> initializationError(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemClientLogging)
>   Time elapsed: 0.001 sec  <<< ERROR!
> java.lang.Exception: Class org.apache.hadoop.fs.azure.AbstractWasbTestBase 
> should be public
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoid(FrameworkMethod.java:91)
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoidNoArg(FrameworkMethod.java:70)
>   at 
> org.junit.runners.ParentRunner.validatePublicVoidNoArgMethods(ParentRunner.java:133)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.validateInstanceMethods(BlockJUnit4ClassRunner.java:165)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.collectInitializationErrors(BlockJUnit4ClassRunner.java:104)
>   at org.junit.runners.ParentRunner.validate(ParentRunner.java:355)
>   at org.junit.runners.ParentRunner.&lt;init&gt;(ParentRunner.java:76)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.&lt;init&gt;(BlockJUnit4ClassRunner.java:57)
>   at 
> org.junit.internal.builders.JUnit4Builder.runnerForClass(JUnit4Builder.java:10)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}
> The problem can be resolved by declaring {{AbstractWasbTestBase}} as {{public}}.






[jira] [Commented] (HADOOP-13512) ReloadingX509TrustManager should keep reloading in case of exception

2016-08-18 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426001#comment-15426001
 ] 

Mingliang Liu commented on HADOOP-13512:


https://builds.apache.org/job/PreCommit-HADOOP-Build/10251/testReport/org.apache.hadoop.security.ssl/TestReloadingX509TrustManager/testReload/
 is a potential UT failure because of this.

> ReloadingX509TrustManager should keep reloading in case of exception
> 
>
> Key: HADOOP-13512
> URL: https://issues.apache.org/jira/browse/HADOOP-13512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>
> {{org.apache.hadoop.security.ssl.ReloadingX509TrustManager}} checks the key 
> store file's last modified time to decide whether to reload. This avoids 
> unnecessary reloads when the key store file has not changed. To do this, it 
> records an internal state {{lastLoaded}} whenever it tries to reload the 
> file. It also updates {{lastLoaded}} in case of exception, so a failing 
> reload will not be retried until the key store file's last modified time 
> changes again.
> The reload may happen while the key store file is still being written. The 
> reload then fails (probably with an EOFException) and won't be retried until 
> the key store file's last modified time changes. Shortly afterwards the 
> updated key store file is closed, but its last modified time may not change 
> if the write finishes within the same timestamp-precision period (e.g. 1 
> second). In this case the updated key store file is never reloaded.
> A simple fix is to update the {{lastLoaded}} only when the reload succeeds. 
> {{ReloadingX509TrustManager}} will keep reloading in case of exception.
> Thoughts?






[jira] [Created] (HADOOP-13512) ReloadingX509TrustManager should keep reloading in case of exception

2016-08-18 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-13512:
--

 Summary: ReloadingX509TrustManager should keep reloading in case 
of exception
 Key: HADOOP-13512
 URL: https://issues.apache.org/jira/browse/HADOOP-13512
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.8.0
Reporter: Mingliang Liu
Assignee: Mingliang Liu


{{org.apache.hadoop.security.ssl.ReloadingX509TrustManager}} checks the key 
store file's last modified time to decide whether to reload. This avoids 
unnecessary reloads when the key store file has not changed. To do this, it 
records an internal state {{lastLoaded}} whenever it tries to reload the file. 
It also updates {{lastLoaded}} in case of exception, so a failing reload will 
not be retried until the key store file's last modified time changes again.

The reload may happen while the key store file is still being written. The 
reload then fails (probably with an EOFException) and won't be retried until 
the key store file's last modified time changes. Shortly afterwards the 
updated key store file is closed, but its last modified time may not change if 
the write finishes within the same timestamp-precision period (e.g. 1 second). 
In this case the updated key store file is never reloaded.

A simple fix is to update the {{lastLoaded}} only when the reload succeeds. 
{{ReloadingX509TrustManager}} will keep reloading in case of exception.

Thoughts?
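The proposed fix can be sketched in isolation: advance {{lastLoaded}} only after a successful load, so a reload that failed mid-write is retried on the next poll even if the file's timestamp has not ticked again. This is a simplified standalone sketch, not the actual patch; the class and method names are illustrative.

```java
import java.io.File;
import java.io.IOException;

// Simplified reload guard. lastLoaded advances only on success, so a reload
// that failed (e.g. because the file was mid-write) is retried on the next
// poll even if the file's last-modified time has not changed since.
class ReloadGuard {
    private long lastLoaded = 0;

    boolean needsReload(File keyStore) {
        return keyStore.exists() && keyStore.lastModified() > lastLoaded;
    }

    void tryReload(File keyStore) {
        long mtime = keyStore.lastModified();
        try {
            load(keyStore);
            lastLoaded = mtime; // advance only after a successful load
        } catch (Exception e) {
            // keep lastLoaded unchanged: the failed reload is retried next poll
        }
    }

    void load(File keyStore) throws Exception {
        // placeholder for the real KeyStore.load(...) call
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("keystore", ".jks");
        f.deleteOnExit();
        ReloadGuard guard = new ReloadGuard();
        System.out.println(guard.needsReload(f)); // true: never loaded yet
        guard.tryReload(f);
        System.out.println(guard.needsReload(f)); // false: load succeeded
    }
}
```

With the original behavior (updating {{lastLoaded}} in the catch block as well), the failed attempt would mark the file as "seen" and the corrupted read would never be retried within the same timestamp-precision period.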






[jira] [Commented] (HADOOP-13498) the number of multi-part upload part should not bigger than 10000

2016-08-18 Thread shimingfei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426000#comment-15426000
 ] 

shimingfei commented on HADOOP-13498:
-

+1

> the number of multi-part upload parts should not be bigger than 10000
> -
>
> Key: HADOOP-13498
> URL: https://issues.apache.org/jira/browse/HADOOP-13498
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13498-HADOOP-12756.001.patch, 
> HADOOP-13498-HADOOP-12756.002.patch, HADOOP-13498-HADOOP-12756.003.patch
>
>
> We should not only throw an exception when the 10000-part limit of a 
> multipart upload is exceeded, but should guarantee to upload any object no 
> matter how big it is. 
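One way to guarantee that any object can be uploaded within a 10000-part cap is to derive the part size from the object size rather than using a fixed part size. A hedged arithmetic sketch, not the actual Aliyun OSS client code; the minimum part size used in the example is illustrative:

```java
// Choose a multipart part size so the resulting part count never exceeds the
// 10000-part cap. The minimum part size here is illustrative, not the value
// any particular object store client uses.
class PartSizer {
    static final long MAX_PARTS = 10_000;

    static long partSize(long objectSize, long minPartSize) {
        // ceil(objectSize / MAX_PARTS), floored at the minimum allowed size
        return Math.max(minPartSize, (objectSize + MAX_PARTS - 1) / MAX_PARTS);
    }

    static long partCount(long objectSize, long partSize) {
        return (objectSize + partSize - 1) / partSize; // ceil division
    }

    public static void main(String[] args) {
        long min = 5L << 20;   // assume a 5 MB floor, for illustration
        long huge = 1L << 40;  // a 1 TiB object
        long ps = partSize(huge, min);
        System.out.println(partCount(huge, ps) <= MAX_PARTS); // true
    }
}
```

Throwing only when the caller's fixed part size would need more than 10000 parts punishes large objects; scaling the part size up as above keeps every object uploadable.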






[jira] [Comment Edited] (HADOOP-13512) ReloadingX509TrustManager should keep reloading in case of exception

2016-08-18 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426001#comment-15426001
 ] 

Mingliang Liu edited comment on HADOOP-13512 at 8/18/16 7:16 AM:
-

https://builds.apache.org/job/PreCommit-HADOOP-Build/10251/testReport/org.apache.hadoop.security.ssl/TestReloadingX509TrustManager/testReload/
 is a potential UT failure because of this.

h6. Error Message
{code}
Timed out waiting for condition. Thread diagnostics:
Timestamp: 2016-08-15 07:00:59,364

"Reference Handler" daemon prio=10 tid=2 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153)
"Truststore reloader thread" daemon prio=5 tid=21 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run(ReloadingX509TrustManager.java:194)
at java.lang.Thread.run(Thread.java:745)
"Signal Dispatcher" daemon prio=9 tid=4 runnable
java.lang.Thread.State: RUNNABLE
"Thread-1"  prio=5 tid=20 runnable
java.lang.Thread.State: RUNNABLE
at java.lang.Thread.dumpThreads(Native Method)
at java.lang.Thread.getAllStackTraces(Thread.java:1603)
at 
org.apache.hadoop.test.TimedOutTestsListener.buildThreadDump(TimedOutTestsListener.java:87)
at 
org.apache.hadoop.test.TimedOutTestsListener.buildThreadDiagnosticString(TimedOutTestsListener.java:73)
at 
org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:271)
at 
org.apache.hadoop.security.ssl.TestReloadingX509TrustManager.testReload(TestReloadingX509TrustManager.java:114)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
"Finalizer" daemon prio=8 tid=3 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:164)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209)
"main"  prio=5 tid=1 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1253)
at 
org.junit.internal.runners.statements.FailOnTimeout.evaluateStatement(FailOnTimeout.java:26)
at 
org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{code}
h6. Stacktrace
{code}
java.util.concurrent.TimeoutException: Timed out waiting for condition. Thread 

[jira] [Commented] (HADOOP-13499) Support session credentials for authenticating with Aliyun

2016-08-18 Thread shimingfei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426100#comment-15426100
 ] 

shimingfei commented on HADOOP-13499:
-

+1 Thanks [~uncleGen]

> Support session credentials for authenticating with Aliyun
> --
>
> Key: HADOOP-13499
> URL: https://issues.apache.org/jira/browse/HADOOP-13499
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
>Priority: Minor
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13499-HADOOP-12756.001.patch, 
> HADOOP-13499-HADOOP-12756.002.patch
>
>







[jira] [Updated] (HADOOP-13513) Java 1.7 support for org.apache.hadoop.fs.azure testcases

2016-08-18 Thread Tibor Kiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tibor Kiss updated HADOOP-13513:

Status: Patch Available  (was: Open)

> Java 1.7 support for org.apache.hadoop.fs.azure testcases
> -
>
> Key: HADOOP-13513
> URL: https://issues.apache.org/jira/browse/HADOOP-13513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure
>Affects Versions: 2.9.0
>Reporter: Tibor Kiss
>Assignee: Tibor Kiss
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-13513-001.patch
>
>
> Recent improvement on AzureNativeFileSystem rename/delete performance 
> (HADOOP-13403) yielded a test change  (HADOOP-13459) which is incompatible 
> with Java 1.7. 
> If one tries to include those patches in a Java 1.7 compatible Hadoop tree 
> (e.g. 2.7.x) the following error occurs during test run:
> {code}
> initializationError(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemClientLogging)
>   Time elapsed: 0.001 sec  <<< ERROR!
> java.lang.Exception: Class org.apache.hadoop.fs.azure.AbstractWasbTestBase 
> should be public
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoid(FrameworkMethod.java:91)
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoidNoArg(FrameworkMethod.java:70)
>   at 
> org.junit.runners.ParentRunner.validatePublicVoidNoArgMethods(ParentRunner.java:133)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.validateInstanceMethods(BlockJUnit4ClassRunner.java:165)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.collectInitializationErrors(BlockJUnit4ClassRunner.java:104)
>   at org.junit.runners.ParentRunner.validate(ParentRunner.java:355)
>   at org.junit.runners.ParentRunner.&lt;init&gt;(ParentRunner.java:76)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.&lt;init&gt;(BlockJUnit4ClassRunner.java:57)
>   at 
> org.junit.internal.builders.JUnit4Builder.runnerForClass(JUnit4Builder.java:10)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}
> The problem can be resolved by declaring {{AbstractWasbTestBase}} as {{public}}.






[jira] [Updated] (HADOOP-13513) Java 1.7 support for org.apache.hadoop.fs.azure testcases

2016-08-18 Thread Tibor Kiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tibor Kiss updated HADOOP-13513:

Attachment: HADOOP-13513-001.patch

> Java 1.7 support for org.apache.hadoop.fs.azure testcases
> -
>
> Key: HADOOP-13513
> URL: https://issues.apache.org/jira/browse/HADOOP-13513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure
>Affects Versions: 2.9.0
>Reporter: Tibor Kiss
>Assignee: Tibor Kiss
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-13513-001.patch
>
>
> Recent improvement on AzureNativeFileSystem rename/delete performance 
> (HADOOP-13403) yielded a test change  (HADOOP-13459) which is incompatible 
> with Java 1.7. 
> If one tries to include those patches in a Java 1.7 compatible Hadoop tree 
> (e.g. 2.7.x) the following error occurs during test run:
> {code}
> initializationError(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemClientLogging)
>   Time elapsed: 0.001 sec  <<< ERROR!
> java.lang.Exception: Class org.apache.hadoop.fs.azure.AbstractWasbTestBase 
> should be public
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoid(FrameworkMethod.java:91)
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoidNoArg(FrameworkMethod.java:70)
>   at 
> org.junit.runners.ParentRunner.validatePublicVoidNoArgMethods(ParentRunner.java:133)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.validateInstanceMethods(BlockJUnit4ClassRunner.java:165)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.collectInitializationErrors(BlockJUnit4ClassRunner.java:104)
>   at org.junit.runners.ParentRunner.validate(ParentRunner.java:355)
>   at org.junit.runners.ParentRunner.&lt;init&gt;(ParentRunner.java:76)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.&lt;init&gt;(BlockJUnit4ClassRunner.java:57)
>   at 
> org.junit.internal.builders.JUnit4Builder.runnerForClass(JUnit4Builder.java:10)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}
> The problem can be resolved by declaring {{AbstractWasbTestBase}} as {{public}}.






[jira] [Updated] (HADOOP-13512) ReloadingX509TrustManager should keep reloading in case of exception

2016-08-18 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13512:
---
Attachment: HADOOP-13512.000.patch

> ReloadingX509TrustManager should keep reloading in case of exception
> 
>
> Key: HADOOP-13512
> URL: https://issues.apache.org/jira/browse/HADOOP-13512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13512.000.patch
>
>
> {{org.apache.hadoop.security.ssl.ReloadingX509TrustManager}} checks the key 
> store file's last modified time to decide whether to reload. This avoids 
> unnecessary reloads when the key store file has not changed. To do this, it 
> records an internal state {{lastLoaded}} whenever it tries to reload the 
> file. It also updates {{lastLoaded}} in case of exception, so a failing 
> reload will not be retried until the key store file's last modified time 
> changes again.
> The reload may happen while the key store file is still being written. The 
> reload then fails (probably with an EOFException) and won't be retried until 
> the key store file's last modified time changes. Shortly afterwards the 
> updated key store file is closed, but its last modified time may not change 
> if the write finishes within the same timestamp-precision period (e.g. 1 
> second). In this case the updated key store file is never reloaded.
> A simple fix is to update the {{lastLoaded}} only when the reload succeeds. 
> {{ReloadingX509TrustManager}} will keep reloading in case of exception.
> Thoughts?






[jira] [Commented] (HADOOP-13512) ReloadingX509TrustManager should keep reloading in case of exception

2016-08-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426051#comment-15426051
 ] 

Hadoop QA commented on HADOOP-13512:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
48s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824295/HADOOP-13512.000.patch
 |
| JIRA Issue | HADOOP-13512 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1e17ccb62fd5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 913a895 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10294/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10294/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ReloadingX509TrustManager should keep reloading in case of exception
> 
>
> Key: HADOOP-13512
> URL: https://issues.apache.org/jira/browse/HADOOP-13512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13512.000.patch
>
>
> 

[jira] [Commented] (HADOOP-13513) Java 1.7 support for org.apache.hadoop.fs.azure testcases

2016-08-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426086#comment-15426086
 ] 

Hadoop QA commented on HADOOP-13513:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824303/HADOOP-13513-001.patch
 |
| JIRA Issue | HADOOP-13513 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6b97ebad5190 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 913a895 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10296/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10296/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Java 1.7 support for org.apache.hadoop.fs.azure testcases
> -
>
> Key: HADOOP-13513
> URL: https://issues.apache.org/jira/browse/HADOOP-13513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure
>Affects Versions: 2.9.0
>Reporter: Tibor Kiss
>Assignee: Tibor Kiss
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-13513-001.patch
>
>
> Recent improvement on AzureNativeFileSystem rename/delete performance 
> (HADOOP-13403) yielded a test 

[jira] [Commented] (HADOOP-13508) FsPermission's string constructor fails on valid permissions like "1777"

2016-08-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426054#comment-15426054
 ] 

Hadoop QA commented on HADOOP-13508:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 19 unchanged - 0 fixed = 21 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 33s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.ssl.TestReloadingX509TrustManager |
|   | hadoop.fs.TestHarFileSystemBasics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824296/HADOOP-13508-2.patch |
| JIRA Issue | HADOOP-13508 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8f8a33ac323a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 913a895 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10295/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10295/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10295/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10295/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |



[jira] [Created] (HADOOP-13515) Redundant transitionToActive call can cause a NameNode to crash

2016-08-18 Thread Harsh J (JIRA)
Harsh J created HADOOP-13515:


 Summary: Redundant transitionToActive call can cause a NameNode to 
crash
 Key: HADOOP-13515
 URL: https://issues.apache.org/jira/browse/HADOOP-13515
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.5.0
Reporter: Harsh J
Priority: Minor


The situation in parts is similar to HADOOP-8217, but the cause is different 
and so is the result.

Consider this situation:

- At the beginning NN1 is Active, NN2 is Standby
- ZKFC1 faces a ZK disconnect (not a session timeout, just a socket disconnect) 
and thereby reconnects

{code}
2016-08-11 07:00:46,068 INFO org.apache.zookeeper.ClientCnxn: Client session 
timed out, have not heard from server in 4000ms for sessionid 
0x4566f0c97500bd9, closing socket connection and attempting reconnect
2016-08-11 07:00:46,169 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session 
disconnected. Entering neutral mode...
…
2016-08-11 07:00:46,610 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session 
connected.
{code}

- The reconnection on ZKFC1 triggers the elector code, and the elector 
re-run finds that NN1 should be the new active (a redundant decision, since NN1 
is already active)

{code}
2016-08-11 07:00:46,615 INFO org.apache.hadoop.ha.ActiveStandbyElector: 
Checking for any old active which needs to be fenced...
2016-08-11 07:00:46,630 INFO org.apache.hadoop.ha.ActiveStandbyElector: Old 
node exists: …
2016-08-11 07:00:46,630 INFO org.apache.hadoop.ha.ActiveStandbyElector: But old 
node has our own data, so don't need to fence it.
{code}

- The ZKFC1 sets the new ZK data, and fires a NN1 RPC call of transitionToActive

{code}
2016-08-11 07:00:46,630 INFO org.apache.hadoop.ha.ActiveStandbyElector: Writing 
znode /hadoop-ha/nameservice1/ActiveBreadCrumb to indicate that the local node 
is the most recent active...
2016-08-11 07:00:46,649 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: 175: 
Call -> nn01/10.10.10.10:8022: transitionToActive {reqInfo { reqSource: 
REQUEST_BY_ZKFC }}
{code}

- At the same time as the transitionToActive call is in progress at NN1, but 
not complete yet, the ZK session of ZKFC1 is timed out by ZK Quorum, and a 
watch notification is sent to ZKFC2

{code}
2016-08-11 07:01:00,003 DEBUG org.apache.zookeeper.ClientCnxn: Got notification 
sessionid:0x4566f0c97500bde
2016-08-11 07:01:00,004 DEBUG org.apache.zookeeper.ClientCnxn: Got WatchedEvent 
state:SyncConnected type:NodeDeleted 
path:/hadoop-ha/nameservice1/ActiveStandbyElectorLock for sessionid 
0x4566f0c97500bde
{code}

- ZKFC2 responds by marking NN1 as standby, which succeeds (NN1 hasn't handled 
the transitionToActive call yet due to its busy status, but handles the 
transitionToStandby before it)

{code}
2016-08-11 07:01:00,013 INFO org.apache.hadoop.ha.ActiveStandbyElector: 
Checking for any old active which needs to be fenced...
2016-08-11 07:01:00,018 INFO org.apache.hadoop.ha.ZKFailoverController: Should 
fence: NameNode at nn01/10.10.10.10:8022
2016-08-11 07:01:00,020 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: 412: 
Call -> nn01/10.10.10.10:8022: transitionToStandby {reqInfo { reqSource: 
REQUEST_BY_ZKFC }}
2016-08-11 07:01:03,880 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: Call: 
transitionToStandby took 3860ms
{code}

- ZKFC2 then marks NN2 as active, and NN2 begins its transition (it is in the 
midst of it, not yet done, at this point)

{code}
2016-08-11 07:01:03,894 INFO org.apache.hadoop.ha.ZKFailoverController: Trying 
to make NameNode at nn02/11.11.11.11:8022 active...
2016-08-11 07:01:03,895 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: 412: 
Call -> nn02/11.11.11.11:8022: transitionToActive {reqInfo { reqSource: 
REQUEST_BY_ZKFC }}
…
{code}

{code}
2016-08-11 07:01:09,558 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required 
for active state
…
2016-08-11 07:01:19,968 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Will take over writing 
edit logs at txnid 5635
{code}

- At the same time, in parallel, NN1 finally processes the transitionToActive 
request and becomes active

{code}
2016-08-11 07:01:13,281 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required 
for active state
…
2016-08-11 07:01:19,599 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Will take over writing 
edit logs at txnid 5635
…
2016-08-11 07:01:19,602 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Starting log segment at 5635
{code}

- NN2's active transition fails as a result of this parallel active transition 
on NN1, which completed right before NN2 tries to take over

{code}
2016-08-11 07:01:19,968 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Will take over writing 
edit logs at txnid 5635
2016-08-11 07:01:22,799 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: 
Error encountered requiring NN 

[jira] [Created] (HADOOP-13514) Upgrade surefire to 2.19.1

2016-08-18 Thread Ewan Higgs (JIRA)
Ewan Higgs created HADOOP-13514:
---

 Summary: Upgrade surefire to 2.19.1
 Key: HADOOP-13514
 URL: https://issues.apache.org/jira/browse/HADOOP-13514
 Project: Hadoop Common
  Issue Type: Task
Reporter: Ewan Higgs
Priority: Minor


A lot of people working on Hadoop don't want to run all the tests when they 
develop; only the bits they're working on. Surefire 2.19 introduced more useful 
test filters that let us run a subset of the tests, bringing the build time 
down from 'come back tomorrow' to 'grab a coffee'.

For instance, if I only care about the S3 adaptor, I might run:

{code}
mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
\"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
org.apache.hadoop.fs.s3a.*\"
{code}

We can work around this by specifying the surefire version on the command line 
but it would be better, imo, to just update the default surefire used.

{code}
mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
\"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13510) "hadoop fs -getmerge" docs, .../dir does not work, .../dir/* works.

2016-08-18 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426502#comment-15426502
 ] 

Rushabh S Shah commented on HADOOP-13510:
-

The following command works for me.
I am running hadoop 2.7 with some internal fixes.
But none of the fixes are related to getmerge command.
{noformat}
hadoop fs -getmerge -nl /user/rushabhs/tmp1 /tmp/merge.java
{noformat}
Maybe if you can attach all the files in the src folder (or a subset of files 
sufficient to reproduce this failure), then I can try running the command 
with those files and check whether I encounter the bug.
As far as I can tell, though, it should work just fine.
Correct me if I am missing something here.

> "hadoop fs -getmerge" docs, .../dir does not work, .../dir/* works.
> ---
>
> Key: HADOOP-13510
> URL: https://issues.apache.org/jira/browse/HADOOP-13510
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
> Environment: HDP 2.4.2
>Reporter: David Sidlo
>Priority: Minor
>  Labels: dfs, fs, getmerge, hadoop, hdfs
>
> Docs indicate that the following command would work...
>hadoop fs -getmerge -nl /src /opt/output.txt
> For me, it results in a zero-length file /opt/output.txt.
> But the following does work...
>hadoop fs -getmerge -nl /src/* /opt/output.txt






[jira] [Updated] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates incorrect values

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13405:

   Resolution: Fixed
Fix Version/s: (was: 3.0.0-alpha2)
   2.8.0
   Status: Resolved  (was: Patch Available)

+1
committed, thanks. 

I haven't been able to list you as the contributor; JIRA is overloaded with 
the list of contributors. I'm trying to get some help here, as we like to 
acknowledge everyone's effort.

> doc for “fs.s3a.acl.default” indicates incorrect values
> ---
>
> Key: HADOOP-13405
> URL: https://issues.apache.org/jira/browse/HADOOP-13405
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Shen Yinjie
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13405.patch
>
>
> The description for "fs.s3a.acl.default" indicates its values are 
> "private,public-read";
> when the value is set to public-read and 'hdfs dfs -ls s3a://hdfs/' is executed:
> {{-ls: No enum constant 
> com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
> while in the Amazon SDK,
> {code}
> public enum CannedAccessControlList {
>   Private("private"),
>   PublicRead("public-read"),
>   PublicReadWrite("public-read-write"),
>   AuthenticatedRead("authenticated-read"),
>   LogDeliveryWrite("log-delivery-write"),
>   BucketOwnerRead("bucket-owner-read"),
>   BucketOwnerFullControl("bucket-owner-full-control"); 
> {code}
> so the values should be the enum constant names, such as "Private", "PublicRead", etc.
>  Attached is a simple patch.
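The root cause is plain Java enum semantics: {{Enum.valueOf}} matches the Java constant name, not the string passed to the enum constructor. The sketch below uses a hypothetical two-constant stand-in for the SDK enum (not the AWS SDK class itself), so it runs without any AWS dependency:

```java
// Simplified stand-in for the SDK's CannedAccessControlList, illustrating
// why fs.s3a.acl.default must use the constant name ("PublicRead"), not
// the S3 wire value ("public-read").
public class AclNameDemo {
    enum CannedAcl {
        Private("private"),
        PublicRead("public-read");

        private final String headerValue;
        CannedAcl(String headerValue) { this.headerValue = headerValue; }
        String getHeaderValue() { return headerValue; }
    }

    // Mimics resolving the configured string: Enum.valueOf matches the
    // constant name, so "public-read" throws IllegalArgumentException.
    static boolean isValidAclName(String configured) {
        try {
            CannedAcl.valueOf(configured);
            return true;
        } catch (IllegalArgumentException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidAclName("PublicRead"));   // true
        System.out.println(isValidAclName("public-read"));  // false
    }
}
```

A value of "PublicRead" resolves, while the wire-format string "public-read" produces exactly the "No enum constant" error quoted above.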






[jira] [Commented] (HADOOP-13430) Optimize and fix getFileStatus in S3A

2016-08-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426425#comment-15426425
 ] 

Steve Loughran commented on HADOOP-13430:
-

Steven, I'm afraid all the HADOOP-13208 changes have touched getFileStatus 
enough that this patch won't apply.


Could you sync this up with trunk/branch-2, then use the naming scheme 
HADOOP-13430-branch-2-001.patch, or HADOOP-13430-001.patch for trunk...that way 
Yetus will apply and build the right version.

# I wouldn't use {{sillyCase()}} as a method name; something like 
{{getMetadata()}} would be clearer
# there's a lot of translation of exceptions there; maybe it's time to split 
{{getFileStatus()}} into the public method and an {{innerGetFileStatus()}} 
which doesn't do any translation (though it will need to catch 
AmazonServiceException and swallow 404s, rethrowing anything else)

> Optimize and fix getFileStatus in S3A
> -
>
> Key: HADOOP-13430
> URL: https://issues.apache.org/jira/browse/HADOOP-13430
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steven K. Wong
>Assignee: Steven K. Wong
>Priority: Minor
> Attachments: HADOOP-13430.001.WIP.patch
>
>
> Currently, S3AFileSystem.getFileStatus(Path f) sends up to 3 requests to S3 
> when pathToKey(f) = key = "foo/bar" is a directory:
> 1. HEAD key=foo/bar \[continue if not found]
> 2. HEAD key=foo/bar/ \[continue if not found]
> 3. LIST prefix=foo/bar/ delimiter=/ max-keys=1
> My experience (and generally true, I reckon) is that almost all directories 
> are nonempty directories without a "fake directory" file (e.g. "foo/bar/"). 
> Under this condition, request #2 is mostly unhelpful; it only slows down 
> getFileStatus. Therefore, I propose swapping the order of requests #2 and #3. 
> The swapped HEAD request will be skipped in practically all cases.
> Furthermore, when key = "foo/bar" is a nonempty directory that contains a 
> "fake directory" file (in addition to actual files), getFileStatus currently 
> returns an S3AFileStatus with isEmptyDirectory=true, which is wrong. Swapping 
> will fix this. The swapped LIST request will use max-keys=2 to determine 
> isEmptyDirectory correctly. (Removing the delimiter from the LIST request 
> should make the logic a little simpler than otherwise.)
> Note that key = "foo/bar/" has the same problem with isEmptyDirectory. To fix 
> it, I propose skipping request #1 when key ends with "/". The price is this 
> will, for an empty directory, replace a HEAD request with a LIST request 
> that's generally more taxing on S3.
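The proposed probe order can be illustrated with a hypothetical sketch (not the actual S3AFileSystem code): HEAD the key, then LIST the prefix with max-keys=2 and no delimiter. Because the undelimited listing also returns any "fake directory" marker object, the separate HEAD of key+"/" becomes unnecessary. A sorted set of keys stands in for the bucket:

```java
import java.util.*;

// Hypothetical sketch of the reordered getFileStatus probes described above.
// A NavigableSet of object keys stands in for the S3 bucket.
public class GetFileStatusSketch {
    enum Kind { FILE, EMPTY_DIR, NONEMPTY_DIR, NOT_FOUND }

    static Kind getFileStatus(NavigableSet<String> store, String key) {
        if (store.contains(key)) {                // probe 1: HEAD key
            return Kind.FILE;
        }
        String prefix = key + "/";
        List<String> listing = new ArrayList<>(); // probe 2: LIST, max-keys=2
        for (String k : store.tailSet(prefix)) {
            if (!k.startsWith(prefix)) {
                break;                            // past the prefix range
            }
            listing.add(k);
            if (listing.size() == 2) {
                break;                            // max-keys=2 reached
            }
        }
        if (listing.isEmpty()) {
            return Kind.NOT_FOUND;                // nothing under the prefix
        }
        // A single entry that is exactly the marker object means an empty
        // directory; anything else under the prefix means nonempty.
        if (listing.size() == 1 && listing.get(0).equals(prefix)) {
            return Kind.EMPTY_DIR;
        }
        return Kind.NONEMPTY_DIR;
    }

    public static void main(String[] args) {
        NavigableSet<String> store =
            new TreeSet<>(Arrays.asList("foo/bar/a", "foo/baz/", "foo/file"));
        System.out.println(getFileStatus(store, "foo/file")); // FILE
        System.out.println(getFileStatus(store, "foo/bar"));  // NONEMPTY_DIR
        System.out.println(getFileStatus(store, "foo/baz"));  // EMPTY_DIR
        System.out.println(getFileStatus(store, "foo/qux"));  // NOT_FOUND
    }
}
```

Note how max-keys=2 lets the listing distinguish a directory holding only its marker (empty) from one holding real children, which is exactly the isEmptyDirectory fix argued for above.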






[jira] [Updated] (HADOOP-13514) Upgrade surefire to 2.19.1

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13514:

Affects Version/s: 2.8.0
  Component/s: build
   Issue Type: Improvement  (was: Task)

> Upgrade surefire to 2.19.1
> --
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Priority: Minor
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters that let us run a subset of the tests, bringing the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}






[jira] [Updated] (HADOOP-13430) Optimize and fix getFileStatus in S3A

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13430:

Assignee: Steven K. Wong

> Optimize and fix getFileStatus in S3A
> -
>
> Key: HADOOP-13430
> URL: https://issues.apache.org/jira/browse/HADOOP-13430
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steven K. Wong
>Assignee: Steven K. Wong
>Priority: Minor
> Attachments: HADOOP-13430.001.WIP.patch
>
>
> Currently, S3AFileSystem.getFileStatus(Path f) sends up to 3 requests to S3 
> when pathToKey(f) = key = "foo/bar" is a directory:
> 1. HEAD key=foo/bar \[continue if not found]
> 2. HEAD key=foo/bar/ \[continue if not found]
> 3. LIST prefix=foo/bar/ delimiter=/ max-keys=1
> My experience (and generally true, I reckon) is that almost all directories 
> are nonempty directories without a "fake directory" file (e.g. "foo/bar/"). 
> Under this condition, request #2 is mostly unhelpful; it only slows down 
> getFileStatus. Therefore, I propose swapping the order of requests #2 and #3. 
> The swapped HEAD request will be skipped in practically all cases.
> Furthermore, when key = "foo/bar" is a nonempty directory that contains a 
> "fake directory" file (in addition to actual files), getFileStatus currently 
> returns an S3AFileStatus with isEmptyDirectory=true, which is wrong. Swapping 
> will fix this. The swapped LIST request will use max-keys=2 to determine 
> isEmptyDirectory correctly. (Removing the delimiter from the LIST request 
> should make the logic a little simpler than otherwise.)
> Note that key = "foo/bar/" has the same problem with isEmptyDirectory. To fix 
> it, I propose skipping request #1 when key ends with "/". The price is this 
> will, for an empty directory, replace a HEAD request with a LIST request 
> that's generally more taxing on S3.






[jira] [Updated] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates incorrect values

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13405:

Attachment: HADOOP-13405.patch

final patch as applied

> doc for “fs.s3a.acl.default” indicates incorrect values
> ---
>
> Key: HADOOP-13405
> URL: https://issues.apache.org/jira/browse/HADOOP-13405
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Shen Yinjie
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13405.patch, HADOOP-13405.patch
>
>
> The description for "fs.s3a.acl.default" indicates its values are 
> "private,public-read";
> when the value is set to public-read and 'hdfs dfs -ls s3a://hdfs/' is executed:
> {{-ls: No enum constant 
> com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
> while in the Amazon SDK,
> {code}
> public enum CannedAccessControlList {
>   Private("private"),
>   PublicRead("public-read"),
>   PublicReadWrite("public-read-write"),
>   AuthenticatedRead("authenticated-read"),
>   LogDeliveryWrite("log-delivery-write"),
>   BucketOwnerRead("bucket-owner-read"),
>   BucketOwnerFullControl("bucket-owner-full-control"); 
> {code}
> so the values should be the enum constant names, such as "Private", "PublicRead", etc.
>  Attached is a simple patch.






[jira] [Commented] (HADOOP-13513) Java 1.7 support for org.apache.hadoop.fs.azure testcases

2016-08-18 Thread Tibor Kiss (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426504#comment-15426504
 ] 

Tibor Kiss commented on HADOOP-13513:
-

Thanks [~ste...@apache.org] for the quick review. 
I was not aware that we need to do Azure endpoint testing, and I did not find 
any details on how to test it manually. Could you please give me some pointers 
to Azure-related testing? 


> Java 1.7 support for org.apache.hadoop.fs.azure testcases
> -
>
> Key: HADOOP-13513
> URL: https://issues.apache.org/jira/browse/HADOOP-13513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure
>Affects Versions: 2.9.0
>Reporter: Tibor Kiss
>Assignee: Tibor Kiss
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-13513-001.patch
>
>
> Recent improvement on AzureNativeFileSystem rename/delete performance 
> (HADOOP-13403) yielded a test change  (HADOOP-13459) which is incompatible 
> with Java 1.7. 
> If one tries to include those patches in a Java 1.7 compatible Hadoop tree 
> (e.g. 2.7.x) the following error occurs during test run:
> {code}
> initializationError(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemClientLogging)
>   Time elapsed: 0.001 sec  <<< ERROR!
> java.lang.Exception: Class org.apache.hadoop.fs.azure.AbstractWasbTestBase 
> should be public
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoid(FrameworkMethod.java:91)
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoidNoArg(FrameworkMethod.java:70)
>   at 
> org.junit.runners.ParentRunner.validatePublicVoidNoArgMethods(ParentRunner.java:133)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.validateInstanceMethods(BlockJUnit4ClassRunner.java:165)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.collectInitializationErrors(BlockJUnit4ClassRunner.java:104)
>   at org.junit.runners.ParentRunner.validate(ParentRunner.java:355)
>   at org.junit.runners.ParentRunner.(ParentRunner.java:76)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.(BlockJUnit4ClassRunner.java:57)
>   at 
> org.junit.internal.builders.JUnit4Builder.runnerForClass(JUnit4Builder.java:10)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}
> The problem can be resolved by setting {{AbstractWasbTestBase}} to {{public}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13271) Intermittent failure of TestS3AContractRootDir.testListEmptyRootDirectory

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13271.
-
   Resolution: Cannot Reproduce
Fix Version/s: 2.8.0

> Intermittent failure of TestS3AContractRootDir.testListEmptyRootDirectory
> -
>
> Key: HADOOP-13271
> URL: https://issues.apache.org/jira/browse/HADOOP-13271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
>
> I'm seeing an intermittent failure of 
> {{TestS3AContractRootDir.testListEmptyRootDirectory}}
> The sequence {{deleteFiles(listStatus(Path("/")))}} is failing because the 
> file to delete is root... yet the code is passing in the children of /, not / 
> itself.
> Hypothesis: when you call listStatus on an empty root dir, you get a file 
> entry back that says isFile, not isDirectory.






[jira] [Updated] (HADOOP-13252) Tune S3A provider plugin mechanism

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13252:

Status: Open  (was: Patch Available)

> Tune S3A provider plugin mechanism
> --
>
> Key: HADOOP-13252
> URL: https://issues.apache.org/jira/browse/HADOOP-13252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-branch-2-001.patch, 
> HADOOP-13252-branch-2-003.patch, HADOOP-13252-branch-2-004.patch, 
> HADOOP-13252-branch-2-005.patch
>
>
> We've now got some fairly complex auth mechanisms going on: Hadoop config, 
> KMS, env vars, "none". If something isn't working, it's going to be a lot 
> harder to debug.
> Review and tune the S3A provider plugin point:
> * add logging of what's going on in S3 auth to help debug problems
> * make a whole chain of logins expressible
> * allow the anonymous credentials to be included in the list
> * review and update documents.
> I propose *carefully* adding some debug messages to identify which auth 
> provider is doing the auth, so we can see if the env vars were kicking in, 
> sysprops, etc.
> What we mustn't do is leak any secrets: this should be identifying whether 
> properties and env vars are set, not what their values are. I don't believe 
> that this will generate a security risk.
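The "identify whether properties and env vars are set, not what their values are" idea can be sketched as follows; the helper is illustrative, not actual S3A code.

```java
// Illustrative sketch (not actual S3A code): report whether a credential
// source is configured without ever printing its value.
public class AuthDebug {
    static String describe(String name, String value) {
        boolean present = value != null && !value.isEmpty();
        return name + (present ? " is set" : " is unset");
    }

    public static void main(String[] args) {
        // AWS_ACCESS_KEY_ID is a standard AWS environment variable; the
        // log output never contains the secret itself.
        System.out.println(describe("AWS_ACCESS_KEY_ID",
                System.getenv("AWS_ACCESS_KEY_ID")));
    }
}
```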






[jira] [Commented] (HADOOP-13514) Upgrade surefire to 2.19.1

2016-08-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426489#comment-15426489
 ] 

Steve Loughran commented on HADOOP-13514:
-

Seems reasonable; care to contribute a patch?

Regarding S3 tests, try changing to the {{hadoop-aws}} directory, then running
{code}
mvn test -Pparallel-tests -DtestsThreadCount=8
{code}

Tests run in about the time it takes to brew a cup of tea.

> Upgrade surefire to 2.19.1
> --
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Ewan Higgs
>Priority: Minor
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests that brings the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}






[jira] [Created] (HADOOP-13516) Listing an empty s3a NON root directory throws FileNotFound.

2016-08-18 Thread Shaik Idris Ali (JIRA)
Shaik Idris Ali created HADOOP-13516:


 Summary: Listing an empty s3a NON root directory throws 
FileNotFound.
 Key: HADOOP-13516
 URL: https://issues.apache.org/jira/browse/HADOOP-13516
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.7.0
Reporter: Shaik Idris Ali
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.8.0


With an empty S3 bucket, run:

{code}
$ hadoop fs -D... -ls s3a://hdfs-s3a-test/

15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
ls: `s3a://hdfs-s3a-test/': No such file or directory

{code}






[jira] [Updated] (HADOOP-13516) Listing an empty s3a NON root directory throws FileNotFound.

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13516:

Hadoop Flags:   (was: Reviewed)

> Listing an empty s3a NON root directory throws FileNotFound.
> 
>
> Key: HADOOP-13516
> URL: https://issues.apache.org/jira/browse/HADOOP-13516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Shaik Idris Ali
>Priority: Minor
>
> With an empty S3 bucket, run:
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/emptyDirectory
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/emtpyDirectory': No such file or directory
> {code}






[jira] [Updated] (HADOOP-13516) Listing an empty s3a NON root directory throws FileNotFound.

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13516:

Fix Version/s: (was: 2.8.0)

> Listing an empty s3a NON root directory throws FileNotFound.
> 
>
> Key: HADOOP-13516
> URL: https://issues.apache.org/jira/browse/HADOOP-13516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Shaik Idris Ali
>Priority: Minor
>
> With an empty S3 bucket, run:
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/emptyDirectory
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/emtpyDirectory': No such file or directory
> {code}






[jira] [Updated] (HADOOP-13516) Listing an empty s3a NON root directory throws FileNotFound.

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13516:

Labels:   (was: BB2015-05-TBR s3)

> Listing an empty s3a NON root directory throws FileNotFound.
> 
>
> Key: HADOOP-13516
> URL: https://issues.apache.org/jira/browse/HADOOP-13516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Shaik Idris Ali
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>
> With an empty S3 bucket, run:
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/emptyDirectory
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/emtpyDirectory': No such file or directory
> {code}






[jira] [Commented] (HADOOP-13252) Tune S3A provider plugin mechanism

2016-08-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426678#comment-15426678
 ] 

Steve Loughran commented on HADOOP-13252:
-

Yes, I see this; I do suspect it is that patch, and will fix.

I am *really* surprised that isn't in trunk. What happened there? We should get 
it into branch-3.0 too.

> Tune S3A provider plugin mechanism
> --
>
> Key: HADOOP-13252
> URL: https://issues.apache.org/jira/browse/HADOOP-13252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-branch-2-001.patch, 
> HADOOP-13252-branch-2-003.patch, HADOOP-13252-branch-2-004.patch, 
> HADOOP-13252-branch-2-005.patch
>
>
> We've now got some fairly complex auth mechanisms going on: Hadoop config, 
> KMS, env vars, "none". If something isn't working, it's going to be a lot 
> harder to debug.
> Review and tune the S3A provider plugin point:
> * add logging of what's going on in S3 auth to help debug problems
> * make a whole chain of logins expressible
> * allow the anonymous credentials to be included in the list
> * review and update documents.
> I propose *carefully* adding some debug messages to identify which auth 
> provider is doing the auth, so we can see if the env vars were kicking in, 
> sysprops, etc.
> What we mustn't do is leak any secrets: this should be identifying whether 
> properties and env vars are set, not what their values are. I don't believe 
> that this will generate a security risk.






[jira] [Updated] (HADOOP-13405) doc for “fs.s3a.acl.default” indicates incorrect values

2016-08-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13405:
---
Assignee: Shen Yinjie

[~shenyinjie], thank you for the patch.  I've added you as a contributor on the 
Hadoop project and assigned the issue to you for acknowledgment.

For whatever reason, I'm always able to use Firefox (but not Chrome) to add 
more contributors even when the list is heavily loaded.  Why the problem is 
browser-specific and not common to the JIRA back-end is completely baffling to 
me, but that's the workaround I use.

> doc for “fs.s3a.acl.default” indicates incorrect values
> ---
>
> Key: HADOOP-13405
> URL: https://issues.apache.org/jira/browse/HADOOP-13405
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13405.patch, HADOOP-13405.patch
>
>
> The description for "fs.s3a.acl.default" indicates its values are 
> "private,public-read";
> when the value is set to public-read, executing 'hdfs dfs -ls s3a://hdfs/' fails with
> {{-ls: No enum constant 
> com.amazonaws.services.s3.model.CannedAccessControlList.public-read}}
> while in the Amazon SDK,
> {code}
> public enum CannedAccessControlList {
>   Private("private"),
>   PublicRead("public-read"),
>   PublicReadWrite("public-read-write"),
>   AuthenticatedRead("authenticated-read"),
>   LogDeliveryWrite("log-delivery-write"),
>   BucketOwnerRead("bucket-owner-read"),
>   BucketOwnerFullControl("bucket-owner-full-control"); 
> {code}
> so the values should be the enum names, such as "Private", "PublicRead", etc.
> A simple patch is attached.






[jira] [Updated] (HADOOP-13516) Listing an empty s3a NON root directory throws FileNotFound.

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13516:

Assignee: (was: Lei (Eddy) Xu)

> Listing an empty s3a NON root directory throws FileNotFound.
> 
>
> Key: HADOOP-13516
> URL: https://issues.apache.org/jira/browse/HADOOP-13516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Shaik Idris Ali
>Priority: Minor
>
> With an empty S3 bucket, run:
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/emptyDirectory
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/emtpyDirectory': No such file or directory
> {code}






[jira] [Resolved] (HADOOP-13516) Listing an empty s3a NON root directory throws FileNotFound.

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13516.
-
   Resolution: Duplicate
 Assignee: Lei (Eddy) Xu
Fix Version/s: 2.8.0

> Listing an empty s3a NON root directory throws FileNotFound.
> 
>
> Key: HADOOP-13516
> URL: https://issues.apache.org/jira/browse/HADOOP-13516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Shaik Idris Ali
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 2.8.0
>
>
> With an empty S3 bucket, run:
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/emptyDirectory
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/emtpyDirectory': No such file or directory
> {code}






[jira] [Updated] (HADOOP-13516) Listing an empty s3a NON root directory throws FileNotFound.

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13516:

Issue Type: Bug  (was: Sub-task)
Parent: (was: HADOOP-11694)

> Listing an empty s3a NON root directory throws FileNotFound.
> 
>
> Key: HADOOP-13516
> URL: https://issues.apache.org/jira/browse/HADOOP-13516
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Shaik Idris Ali
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 2.8.0
>
>
> With an empty S3 bucket, run:
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/emptyDirectory
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/emtpyDirectory': No such file or directory
> {code}






[jira] [Commented] (HADOOP-13252) Tune S3A provider plugin mechanism

2016-08-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426694#comment-15426694
 ] 

Chris Nauroth commented on HADOOP-13252:


bq. I am really surprised that isn't in trunk. What happened there? We should 
get it into branch-3.0 too

HADOOP-7851 is in trunk and branch-3.0.0-alpha1, but I don't see it in any of 
the 2.x-based branches.  Whether or not that's intentional is hard to say.  Its 
fix version is 0.23.1.  In the past, I have found a couple of patches from the 
0.23 timeframe that were forgotten in the transition to branch-1 and branch-2.

> Tune S3A provider plugin mechanism
> --
>
> Key: HADOOP-13252
> URL: https://issues.apache.org/jira/browse/HADOOP-13252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-branch-2-001.patch, 
> HADOOP-13252-branch-2-003.patch, HADOOP-13252-branch-2-004.patch, 
> HADOOP-13252-branch-2-005.patch
>
>
> We've now got some fairly complex auth mechanisms going on: Hadoop config, 
> KMS, env vars, "none". If something isn't working, it's going to be a lot 
> harder to debug.
> Review and tune the S3A provider plugin point:
> * add logging of what's going on in S3 auth to help debug problems
> * make a whole chain of logins expressible
> * allow the anonymous credentials to be included in the list
> * review and update documents.
> I propose *carefully* adding some debug messages to identify which auth 
> provider is doing the auth, so we can see if the env vars were kicking in, 
> sysprops, etc.
> What we mustn't do is leak any secrets: this should be identifying whether 
> properties and env vars are set, not what their values are. I don't believe 
> that this will generate a security risk.






[jira] [Updated] (HADOOP-13516) Listing an empty s3a NON root directory throws FileNotFound.

2016-08-18 Thread Shaik Idris Ali (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaik Idris Ali updated HADOOP-13516:
-
Description: 
With an empty S3 bucket, run:

{code}
$ hadoop fs -D... -ls s3a://hdfs-s3a-test/emptyDirectory

15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
ls: `s3a://hdfs-s3a-test/emtpyDirectory': No such file or directory

{code}

  was:
With an empty S3 bucket, run:

{code}
$ hadoop fs -D... -ls s3a://hdfs-s3a-test/

15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
ls: `s3a://hdfs-s3a-test/': No such file or directory

{code}


> Listing an empty s3a NON root directory throws FileNotFound.
> 
>
> Key: HADOOP-13516
> URL: https://issues.apache.org/jira/browse/HADOOP-13516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Shaik Idris Ali
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR, s3
> Fix For: 2.8.0
>
>
> With an empty S3 bucket, run:
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/emptyDirectory
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/emtpyDirectory': No such file or directory
> {code}






[jira] [Commented] (HADOOP-13516) Listing an empty s3a NON root directory throws FileNotFound.

2016-08-18 Thread Shaik Idris Ali (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426651#comment-15426651
 ] 

Shaik Idris Ali commented on HADOOP-13516:
--

HADOOP-11694 fixes only the case of an empty root directory; however, listing an 
empty non-root directory still throws FileNotFoundException.
{code}
key = maybeAddTrailingSlash(key); // If this line is commented out, then
// objects.getCommonPrefixes().size() is greater than 1 for valid empty
// directories, with a nextMarker like dir/empty-ids-1449644026143/
  ListObjectsRequest request = new ListObjectsRequest();
  request.setBucketName(bucket);
  request.setPrefix(key);
  request.setDelimiter("/");
  request.setMaxKeys(1);
{code}
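The trailing-slash handling in the snippet above can be shown with a toy version; the real helper lives in the S3A code, and this sketch only mirrors its intent with pure string logic, no S3 calls.

```java
// Toy illustration of the trailing-slash handling referenced above;
// not the actual S3A implementation.
public class PrefixSketch {
    static String maybeAddTrailingSlash(String key) {
        return key.endsWith("/") ? key : key + "/";
    }

    public static void main(String[] args) {
        // With the slash, the listing prefix matches the directory marker
        // object "dir/" exactly, instead of any key merely starting with "dir".
        System.out.println(maybeAddTrailingSlash("dir"));  // dir/
        System.out.println(maybeAddTrailingSlash("dir/")); // dir/
    }
}
```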

> Listing an empty s3a NON root directory throws FileNotFound.
> 
>
> Key: HADOOP-13516
> URL: https://issues.apache.org/jira/browse/HADOOP-13516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Shaik Idris Ali
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 2.8.0
>
>
> With an empty s3 bucket and run
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/emptyDirectory
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/emtpyDirectory': No such file or directory
> {code}






[jira] [Commented] (HADOOP-13513) Java 1.7 support for org.apache.hadoop.fs.azure testcases

2016-08-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426652#comment-15426652
 ] 

Chris Nauroth commented on HADOOP-13513:


Hello [~tibor.k...@gmail.com].  Steve is referring to this policy in the Hadoop 
contribution wiki:

https://wiki.apache.org/hadoop/HowToContribute#Submitting_patches_against_object_stores_such_as_Amazon_S3.2C_OpenStack_Swift_and_Microsoft_Azure

To summarize, for modules like hadoop-azure, hadoop-aws, etc., there are either 
no unit tests running during Apache pre-commit, or there are unit tests in the 
module, but we consider them insufficient to validate a patch for commit.  
Instead, we require the contributor to run integration tests directly against 
the back-end service.  These are still structured as JUnit tests under the 
src/test/java directory, but the tests get skipped unless the developer does 
some extra configuration to make the necessary credentials available.  The 
exact configuration process varies per file system:

http://hadoop.apache.org/docs/r2.7.2/hadoop-aws/tools/hadoop-aws/index.html

http://hadoop.apache.org/docs/r2.7.2/hadoop-azure/index.html#Testing_the_hadoop-azure_Module

For this patch, I'm going to kick off my own test run for you now.  We 
currently have a test failure on branch-2 and branch-2.8 because of this, so I 
want to get it fixed quickly.  You can keep this information in mind for future 
patches though.

> Java 1.7 support for org.apache.hadoop.fs.azure testcases
> -
>
> Key: HADOOP-13513
> URL: https://issues.apache.org/jira/browse/HADOOP-13513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure
>Affects Versions: 2.9.0
>Reporter: Tibor Kiss
>Assignee: Tibor Kiss
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-13513-001.patch
>
>
> Recent improvement on AzureNativeFileSystem rename/delete performance 
> (HADOOP-13403) yielded a test change  (HADOOP-13459) which is incompatible 
> with Java 1.7. 
> If one tries to include those patches in a Java 1.7 compatible Hadoop tree 
> (e.g. 2.7.x) the following error occurs during test run:
> {code}
> initializationError(org.apache.hadoop.fs.azure.TestNativeAzureFileSystemClientLogging)
>   Time elapsed: 0.001 sec  <<< ERROR!
> java.lang.Exception: Class org.apache.hadoop.fs.azure.AbstractWasbTestBase 
> should be public
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoid(FrameworkMethod.java:91)
>   at 
> org.junit.runners.model.FrameworkMethod.validatePublicVoidNoArg(FrameworkMethod.java:70)
>   at 
> org.junit.runners.ParentRunner.validatePublicVoidNoArgMethods(ParentRunner.java:133)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.validateInstanceMethods(BlockJUnit4ClassRunner.java:165)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.collectInitializationErrors(BlockJUnit4ClassRunner.java:104)
>   at org.junit.runners.ParentRunner.validate(ParentRunner.java:355)
>   at org.junit.runners.ParentRunner.<init>(ParentRunner.java:76)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.<init>(BlockJUnit4ClassRunner.java:57)
>   at 
> org.junit.internal.builders.JUnit4Builder.runnerForClass(JUnit4Builder.java:10)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
>   at 
> org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
>   at 
> org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}
> The problem can be resolved by setting {{AbstractWasbTestBase}} to {{public}}.






[jira] [Reopened] (HADOOP-13516) Listing an empty s3a NON root directory throws FileNotFound.

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-13516:
-
  Assignee: Steve Loughran  (was: Lei (Eddy) Xu)

> Listing an empty s3a NON root directory throws FileNotFound.
> 
>
> Key: HADOOP-13516
> URL: https://issues.apache.org/jira/browse/HADOOP-13516
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Shaik Idris Ali
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
>
> With an empty S3 bucket, run:
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/emptyDirectory
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/emtpyDirectory': No such file or directory
> {code}






[jira] [Commented] (HADOOP-13516) Listing an empty s3a NON root directory throws FileNotFound.

2016-08-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426659#comment-15426659
 ] 

Steve Loughran commented on HADOOP-13516:
-

Sorry, race condition confusion; re-opening. And, given that line of code, it's my 
code.

Which version of the Hadoop source tree are you looking at? It sounds like 
you are working with branch-2.8+.



> Listing an empty s3a NON root directory throws FileNotFound.
> 
>
> Key: HADOOP-13516
> URL: https://issues.apache.org/jira/browse/HADOOP-13516
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Shaik Idris Ali
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 2.8.0
>
>
> With an empty S3 bucket, run:
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/emptyDirectory
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/emtpyDirectory': No such file or directory
> {code}






[jira] [Commented] (HADOOP-13396) Add json format audit logging to KMS

2016-08-18 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426677#comment-15426677
 ] 

Wei-Chiu Chuang commented on HADOOP-13396:
--

Hi [~xiaochen] thanks again for the new patch.

1. 
{code}
auditLog.info(new String(output.toByteArray(), "UTF-8"));
{code}
can be changed to 
{code}
auditLog.info(output.toString("UTF-8"));
{code}

2. {{KMSAudit#initializeAuditLoggers}}: I think it's fine to keep it as is. But 
if we ever want to add a new type of audit logger, I would suggest changing 
this to a factory method. I don't feel strongly about this, so this is just a 
note.


Other than this, I think this patch is ready.
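The point-1 suggestion is a pure simplification: {{output.toString("UTF-8")}} yields the same string as {{new String(output.toByteArray(), "UTF-8")}} while skipping the extra array copy that {{toByteArray()}} makes. A minimal sketch (illustrative, not the KMS patch itself):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class Utf8Equivalence {
    // Render a buffered log line both ways and report whether they match.
    static boolean sameRendering(String line) throws IOException {
        ByteArrayOutputStream output = new ByteArrayOutputStream();
        output.write(line.getBytes("UTF-8"));
        String viaCopy = new String(output.toByteArray(), "UTF-8"); // extra array copy
        String direct = output.toString("UTF-8");                   // decodes the internal buffer
        return viaCopy.equals(direct);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(sameRendering("audit entry: café")); // true
    }
}
```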

> Add json format audit logging to KMS
> 
>
> Key: HADOOP-13396
> URL: https://issues.apache.org/jira/browse/HADOOP-13396
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13396.01.patch, HADOOP-13396.02.patch, 
> HADOOP-13396.03.patch, HADOOP-13396.04.patch, HADOOP-13396.05.patch, 
> HADOOP-13396.06.patch
>
>
> Currently, KMS audit log is using log4j, to write a text format log.
> We should refactor this, so that people can easily add new format audit logs. 
> The current text format log should be the default, and all of its behavior 
> should remain compatible.
> A json format log extension is added using the refactored API, and being 
> turned off by default.






[jira] [Updated] (HADOOP-7363) TestRawLocalFileSystemContract is needed

2016-08-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-7363:
-
Attachment: HADOOP-7363.04.patch

> TestRawLocalFileSystemContract is needed
> 
>
> Key: HADOOP-7363
> URL: https://issues.apache.org/jira/browse/HADOOP-7363
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Matt Foley
>Assignee: Andras Bokor
> Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch, 
> HADOOP-7363.03.patch, HADOOP-7363.04.patch
>
>
> FileSystemContractBaseTest is supposed to be run with each concrete 
> FileSystem implementation to insure adherence to the "contract" for 
> FileSystem behavior.  However, currently only HDFS and S3 do so.  
> RawLocalFileSystem, at least, needs to be added. 






[jira] [Commented] (HADOOP-13388) Clean up TestLocalFileSystemPermission

2016-08-18 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426746#comment-15426746
 ] 

Andras Bokor commented on HADOOP-13388:
---

Thanks for the review [~anu] and [~liuml07].

[~anu]:
# The import changes were made by my IDE. The Assert import became a wildcard 
when I imported assertFalse. I think the wildcard is reasonable there, since 
assertFalse is the 4th method imported from that class.
# I thought assertFalse is more readable here: we have an assertTrue followed 
by a delete, so assertFalse seemed more straightforward.
# Good point. I put them into the try block. I had to add a null check to the 
{{cleanup}} method, because if an exception occurs while creating the file, 
the cleanup would throw an NPE that hides the real exception.
# Yes, I do not really understand the purpose of that check. {{getGroups}} 
cannot return null, and on Unix a user has to belong to at least one group. 
I would remove the whole block.

[~liuml07]
# Yes, the order was wrong. Fixed.
# Please see the 4th point above.
# Replacing the for loop with a while loop is purely for readability. A while 
loop seems more appropriate to me here (IDEA also suggests it).

What do you guys think about the 2nd patch?
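The NPE-masking concern in the third point is a general pattern; a standalone sketch of a null-guarded cleanup (all names here are illustrative, not the actual test code):

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class CleanupDemo {
    // Null-guarded cleanup: if file creation failed, the reference may still
    // be null, and an unguarded f.delete() in a finally block would throw an
    // NPE that hides the original exception.
    static void cleanup(File f) {
        if (f != null && f.exists()) {
            f.delete();
        }
    }

    // Writes to the file, then always cleans up; safe even if the write threw.
    static void writeAndCleanUp(File target) {
        File created = null;
        try {
            created = target;
            try (FileWriter w = new FileWriter(created)) {
                w.write("data");
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        } finally {
            cleanup(created); // no NPE even if created is still null
        }
    }

    // Returns whether the file still exists after the round trip (it should not).
    static boolean demo() {
        try {
            File tmp = File.createTempFile("cleanup-demo", ".txt");
            writeAndCleanUp(tmp);
            return tmp.exists();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("exists after cleanup: " + demo());
    }
}
```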

> Clean up TestLocalFileSystemPermission
> --
>
> Key: HADOOP-13388
> URL: https://issues.apache.org/jira/browse/HADOOP-13388
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13388.01.patch
>
>
> I see more problems with {{TestLocalFileSystemPermission}}:
> * Many checkstyle warnings
> * Relies on JUnit3, so the Assume framework cannot be used for Windows checks.
> * In the tests, in case of an exception we get an error message but the test
> itself still passes (because of the return).






[jira] [Updated] (HADOOP-13388) Clean up TestLocalFileSystemPermission

2016-08-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-13388:
--
Attachment: HADOOP-13388.02.patch

> Clean up TestLocalFileSystemPermission
> --
>
> Key: HADOOP-13388
> URL: https://issues.apache.org/jira/browse/HADOOP-13388
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13388.01.patch, HADOOP-13388.02.patch
>
>
> I see more problems with {{TestLocalFileSystemPermission}}:
> * Many checkstyle warnings
> * Relies on JUnit3, so the Assume framework cannot be used for Windows checks.
> * In the tests, in case of an exception we get an error message but the test
> itself still passes (because of the return).






[jira] [Commented] (HADOOP-13252) Tune S3A provider plugin mechanism

2016-08-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426958#comment-15426958
 ] 

Hadoop QA commented on HADOOP-13252:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 3 
new + 7 unchanged - 1 fixed = 10 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824409/HADOOP-13252-006.patch
 |
| JIRA Issue | HADOOP-13252 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c468bb702c77 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ae4db25 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10299/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10299/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10299/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Tune S3A provider plugin mechanism
> --
>
> Key: HADOOP-13252
> URL: https://issues.apache.org/jira/browse/HADOOP-13252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: 

[jira] [Created] (HADOOP-13518) backport HADOOP-9258 to branch-2

2016-08-18 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13518:
---

 Summary: backport HADOOP-9258 to branch-2
 Key: HADOOP-13518
 URL: https://issues.apache.org/jira/browse/HADOOP-13518
 Project: Hadoop Common
  Issue Type: Task
  Components: fs, fs/s3, test
Affects Versions: 2.9.0
Reporter: Steve Loughran
Assignee: Steve Loughran


I've just realised that HADOOP-9228 was never backported to branch 2. It went 
in to branch 1, and into trunk, but not in the bit in the middle.

It adds
-more fs contract tests
-s3 and s3n rename don't let you rename under yourself (and delete)

I'm going to try to create a patch for this, though it'll be tricky given how 
things have moved around a lot since then. 






[jira] [Updated] (HADOOP-13518) backport HADOOP-9258 to branch-2

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13518:

Description: 
I've just realised that HADOOP-9258 was never backported to branch 2. It went 
in to branch 1, and into trunk, but not in the bit in the middle.

It adds
-more fs contract tests
-s3 and s3n rename don't let you rename under yourself (and delete)

I'm going to try to create a patch for this, though it'll be tricky given how 
things have moved around a lot since then. 

  was:
I've just realised that HADOOP-9228 was never backported to branch 2. It went 
in to branch 1, and into trunk, but not in the bit in the middle.

It adds
-more fs contract tests
-s3 and s3n rename don't let you rename under yourself (and delete)

I'm going to try to create a patch for this, though it'll be tricky given how 
things have moved around a lot since then. 


> backport HADOOP-9258 to branch-2
> 
>
> Key: HADOOP-13518
> URL: https://issues.apache.org/jira/browse/HADOOP-13518
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs, fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> I've just realised that HADOOP-9258 was never backported to branch 2. It went 
> in to branch 1, and into trunk, but not in the bit in the middle.
> It adds
> -more fs contract tests
> -s3 and s3n rename don't let you rename under yourself (and delete)
> I'm going to try to create a patch for this, though it'll be tricky given how 
> things have moved around a lot since then. 






[jira] [Commented] (HADOOP-13516) Listing an empty s3a NON root directory throws FileNotFound.

2016-08-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426976#comment-15426976
 ] 

Chris Nauroth commented on HADOOP-13516:


I am unable to repro this with a fresh trunk build.  [~shaik.idris], I noticed 
what appears to be a typo in your example: "emtpy" instead of "empty".  Is it 
possible that you ran the ls command on a directory that doesn't exist?

{code}
> hadoop version
Hadoop 3.0.0-alpha2-SNAPSHOT
Source code repository https://git-wip-us.apache.org/repos/asf/hadoop.git -r 
ae4db2544346370404826d5b55b2678f5f92fe1f
Compiled by chris on 2016-08-18T17:56Z
Compiled with protoc 2.5.0
>From source with checksum 4de746edbf65719fec787db317e866a
This command was run using 
/Users/chris/hadoop-deploy-trunk/hadoop-3.0.0-alpha2-SNAPSHOT/share/hadoop/common/hadoop-common-3.0.0-alpha2-SNAPSHOT.jar

> hadoop fs -ls s3a://cnauroth-test-aws-s3a/

> hadoop fs -mkdir s3a://cnauroth-test-aws-s3a/test-empty-dir
[chris@Chriss-MacBook-Pro-2:ttys002] hadoop-deploy-trunk


> hadoop fs -ls s3a://cnauroth-test-aws-s3a/
Found 1 items
drwxrwxrwx   - chris  0 2016-08-18 11:44 
s3a://cnauroth-test-aws-s3a/test-empty-dir

> hadoop fs -ls s3a://cnauroth-test-aws-s3a/test-empty-dir

> hadoop fs -ls s3a://cnauroth-test-aws-s3a/test-emtpy-dir
ls: `s3a://cnauroth-test-aws-s3a/test-emtpy-dir': No such file or directory
{code}


> Listing an empty s3a NON root directory throws FileNotFound.
> 
>
> Key: HADOOP-13516
> URL: https://issues.apache.org/jira/browse/HADOOP-13516
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Shaik Idris Ali
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
>
> With an empty s3 bucket and run
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/emptyDirectory
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/emtpyDirectory': No such file or directory
> {code}






[jira] [Updated] (HADOOP-13514) Upgrade surefire to 2.19.1

2016-08-18 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HADOOP-13514:

Fix Version/s: 3.0.0-alpha2
   Status: Patch Available  (was: Open)

> Upgrade surefire to 2.19.1
> --
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more
> useful test filters that let us run a subset of the tests, bringing the
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}






[jira] [Updated] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9258:
---
Fix Version/s: (was: 2.1.0-beta)
   3.0.0-alpha1

> Add stricter tests to FileSystemContractTestBase
> 
>
> Key: HADOOP-9258
> URL: https://issues.apache.org/jira/browse/HADOOP-9258
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 1.1.1, 2.0.3-alpha
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-9258-8.patch, HADOOP-9528-2.patch, 
> HADOOP-9528-3.patch, HADOOP-9528-4.patch, HADOOP-9528-5.patch, 
> HADOOP-9528-6.patch, HADOOP-9528-7.patch, HADOOP-9528.patch
>
>
> The File System Contract contains implicit assumptions that aren't checked in 
> the contract test base. Add more tests to define the contract's assumptions 
> more rigorously for those filesystems that are tested by this (not Local, BTW)






[jira] [Updated] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections

2016-08-18 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12765:
-
Attachment: HADOOP-12765.005.patch

While Min's patch looks good to me, it did not address [~vinayrpet]'s comment. 
So I updated the patch to remove the changes in pom.xml. The code compiles in 
my local tree. If it passes precommit I'll +1 and commit the patch.

{quote}
I wonder whether the following change is required in both hadoop-kms and 
httpfs, as the dependency will already be propagated from hadoop-common.

 
+<dependency>
+  <groupId>org.mortbay.jetty</groupId>
+  <artifactId>jetty-sslengine</artifactId>
+  <scope>test</scope>
+</dependency>


{quote}

> HttpServer2 should switch to using the non-blocking SslSelectChannelConnector 
> to prevent performance degradation when handling SSL connections
> --
>
> Key: HADOOP-12765
> URL: https://issues.apache.org/jira/browse/HADOOP-12765
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2, 2.6.3
>Reporter: Min Shen
>Assignee: Min Shen
> Attachments: HADOOP-12765.001.patch, HADOOP-12765.001.patch, 
> HADOOP-12765.002.patch, HADOOP-12765.003.patch, HADOOP-12765.004.patch, 
> HADOOP-12765.005.patch, blocking_1.png, blocking_2.png, unblocking.png
>
>
> The current implementation uses the blocking SslSocketConnector which takes 
> the default maxIdleTime as 200 seconds. We noticed in our cluster that when 
> users use a custom client that accesses the WebHDFS REST APIs through https, 
> it could block all the 250 handler threads in NN jetty server, causing severe 
> performance degradation for accessing WebHDFS and NN web UI. Attached 
> screenshots (blocking_1.png and blocking_2.png) illustrate that when using 
> SslSocketConnector, the jetty handler threads are not released until the 200 
> seconds maxIdleTime has passed. With sufficient number of SSL connections, 
> this issue could render NN HttpServer to become entirely irresponsive.
> We propose to use the non-blocking SslSelectChannelConnector as a fix. We 
> have deployed the attached patch within our cluster, and have seen 
> significant improvement. The attached screenshot (unblocking.png) further 
> illustrates the behavior of NN jetty server after switching to using 
> SslSelectChannelConnector.
> The patch further disables SSLv3 protocol on server side to preserve the 
> spirit of HADOOP-11260.






[jira] [Updated] (HADOOP-13514) Upgrade surefire to 2.19.1

2016-08-18 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HADOOP-13514:

Attachment: surefire-2.19.patch

> Upgrade surefire to 2.19.1
> --
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more
> useful test filters that let us run a subset of the tests, bringing the
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}






[jira] [Updated] (HADOOP-13487) Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper

2016-08-18 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13487:
---
Attachment: HADOOP-13487.02.patch

Patch 2 to fix the javac warning. I've tested this in a cluster with 100k 
pre-existing token znodes. Startup took about a minute, and the new code took 
about 1 second.

Appreciate any review / comments.

> Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper
> -
>
> Key: HADOOP-13487
> URL: https://issues.apache.org/jira/browse/HADOOP-13487
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Alex Ivanov
>Assignee: Xiao Chen
> Attachments: HADOOP-13487.01.patch, HADOOP-13487.02.patch
>
>
> Configuration:
> CDH 5.5.1 (Hadoop 2.6+)
> KMS configured to store delegation tokens in Zookeeper
> DEBUG logging enabled in /etc/hadoop-kms/conf/kms-log4j.properties
> Findings:
> It seems to me delegation tokens never get cleaned up from Zookeeper past 
> their renewal date. I can see in the logs that the removal thread is started 
> with the expected interval:
> {code}
> 2016-08-11 08:15:24,511 INFO  AbstractDelegationTokenSecretManager - Starting 
> expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
> {code}
> However, I don't see any delegation token removals, indicated by the 
> following log message:
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager 
> --> removeStoredToken(TokenIdent ident), line 769 [CDH]
> {code}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Removing ZKDTSMDelegationToken_"
>   + ident.getSequenceNumber());
> }
> {code}
> Meanwhile, I see a lot of expired delegation tokens in Zookeeper that don't 
> get cleaned up.






[jira] [Updated] (HADOOP-13517) TestS3NContractRootDir.testRecursiveRootListing flaky

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13517:

Summary: TestS3NContractRootDir.testRecursiveRootListing flaky  (was: 
TestS3NContractRootDir.testRecursiveRootListing failing)

> TestS3NContractRootDir.testRecursiveRootListing flaky
> -
>
> Key: HADOOP-13517
> URL: https://issues.apache.org/jira/browse/HADOOP-13517
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> while doing s3a tests against trunk, one of the S3n tests, 
> {{TestS3NContractRootDir.testRecursiveRootListing}} failed. 
> This may be a failure of recursive listing of an empty root directory; it's
> transient because deletion inconsistencies mean the problem doesn't
> always surface.






[jira] [Commented] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections

2016-08-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427123#comment-15427123
 ] 

Hadoop QA commented on HADOOP-12765:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} root: The patch generated 0 new + 81 unchanged - 1 
fixed = 81 total (was 82) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
16s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
16s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
47s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824419/HADOOP-12765.005.patch
 |
| JIRA Issue | HADOOP-12765 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux e17215c58626 

[jira] [Updated] (HADOOP-13428) Fix hadoop-common to generate jdiff

2016-08-18 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-13428:

Attachment: HADOOP-13428.3.patch

Attached ver.3 patch. It contains a modification to the maven build script that 
automatically adds the temporary fix before building the jdiff contents and 
reverts it once the jdiff build is done. After this patch, we no longer need 
to apply the fix manually to generate the jdiff contents properly.

> Fix hadoop-common to generate jdiff
> ---
>
> Key: HADOOP-13428
> URL: https://issues.apache.org/jira/browse/HADOOP-13428
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: HADOOP-13428.1.patch, HADOOP-13428.2.patch, 
> HADOOP-13428.3.patch, metric-system-temp-fix.patch
>
>
> Hadoop-common failed to generate JDiff. We need to fix that.






[jira] [Updated] (HADOOP-13517) TestS3NContractRootDir.testRecursiveRootListing failing

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13517:

Summary: TestS3NContractRootDir.testRecursiveRootListing failing  (was: 
TestS3NContractRootDir.testRecursiveRootListing flaky)

> TestS3NContractRootDir.testRecursiveRootListing failing
> ---
>
> Key: HADOOP-13517
> URL: https://issues.apache.org/jira/browse/HADOOP-13517
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> while doing s3a tests against trunk, one of the S3n tests, 
> {{TestS3NContractRootDir.testRecursiveRootListing}} failed. 
> This may be a failure of recursive listing of an empty root directory; it's 
> transient because deletion inconsistencies mean the problem doesn't always 
> surface.






[jira] [Updated] (HADOOP-13512) ReloadingX509TrustManager should keep reloading in case of exception

2016-08-18 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13512:
---
Priority: Critical  (was: Major)

> ReloadingX509TrustManager should keep reloading in case of exception
> 
>
> Key: HADOOP-13512
> URL: https://issues.apache.org/jira/browse/HADOOP-13512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Critical
> Attachments: HADOOP-13512.000.patch
>
>
> {{org.apache.hadoop.security.ssl.ReloadingX509TrustManager}} checks the 
> key store file's last modified time to decide whether to reload. This avoids 
> an unnecessary reload when the key store file has not changed. To do this, it 
> maintains an internal state {{lastLoaded}} whenever it tries to reload the 
> file. It also updates the {{lastLoaded}} variable in case of exception, so a 
> failing reload will not be retried until the key store file's last modified 
> time changes again.
> Chances are that the reload happens while the key store file is being written. 
> The reload fails (probably with an EOFException) and won't be retried until the 
> key store file's last modified time changes. A short period later, the key store 
> file is closed after the update. However, its last modified time may not change 
> if the write completes within the same timestamp-precision window (e.g. 1 
> second). In this case, the updated key store file is never reloaded.
> A simple fix is to update {{lastLoaded}} only when the reload succeeds, so 
> {{ReloadingX509TrustManager}} will keep reloading in case of exception.
> Thoughts?
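The proposed behaviour can be sketched in isolation. This is a hypothetical minimal model (class, field, and method names invented for illustration; not the real {{ReloadingX509TrustManager}} code): the file's modification time is recorded in {{lastLoaded}} only after a successful load, so a failed reload keeps being retried until one succeeds.

```java
import java.io.File;
import java.io.IOException;

// Hypothetical sketch of the fix: record the keystore's mtime only after a
// *successful* load, so a failed reload (e.g. reading a half-written file)
// is retried on the next check instead of being skipped forever.
class ReloadOnSuccessSketch {
    private final File keyStoreFile;
    private long lastLoaded = 0L;   // mtime of the last successful load
    boolean failNext = false;       // test hook simulating a mid-write read

    ReloadOnSuccessSketch(File keyStoreFile) {
        this.keyStoreFile = keyStoreFile;
    }

    boolean needsReload() {
        return keyStoreFile.lastModified() > lastLoaded;
    }

    void reloadIfNeeded() {
        if (!needsReload()) {
            return;
        }
        long mtime = keyStoreFile.lastModified();
        try {
            loadTrustStore();
            lastLoaded = mtime;     // updated ONLY when the load succeeds
        } catch (IOException e) {
            // lastLoaded unchanged: needsReload() stays true, so we retry
        }
    }

    private void loadTrustStore() throws IOException {
        if (failNext) {
            throw new IOException("simulated EOF while keystore is being written");
        }
        // placeholder for the real keystore-loading logic
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("keystore", ".jks");
        f.deleteOnExit();
        ReloadOnSuccessSketch manager = new ReloadOnSuccessSketch(f);

        manager.failNext = true;
        manager.reloadIfNeeded();                  // load fails...
        System.out.println(manager.needsReload()); // true: will be retried

        manager.failNext = false;
        manager.reloadIfNeeded();                  // retry succeeds
        System.out.println(manager.needsReload()); // false: up to date
    }
}
```

With the old behaviour (updating {{lastLoaded}} in the catch block too), the first failed load would mark the current mtime as handled and the retry would never happen.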






[jira] [Updated] (HADOOP-13517) TestS3NContractRootDir.testRecursiveRootListing failing

2016-08-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13517:

Description: 
while doing s3a tests against trunk, one of the S3n tests, 
{{TestS3NContractRootDir.testRecursiveRootListing}} failed. 

This may be a failure of recursive listing of an empty root directory; it's 
transient because deletion inconsistencies mean the problem doesn't always 
surface.

  was:
while doing s3a tests against trunk, one of the S3n tests, 
{{TestS3NContractRootDir.testRecursiveRootListing}} failed. 

This may be a failure of recursive listing of an empty root directory; it's 
transient because deletion inconsistencies mean the problem doesn't always 
surface.


> TestS3NContractRootDir.testRecursiveRootListing failing
> ---
>
> Key: HADOOP-13517
> URL: https://issues.apache.org/jira/browse/HADOOP-13517
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> while doing s3a tests against trunk, one of the S3n tests, 
> {{TestS3NContractRootDir.testRecursiveRootListing}} failed. 
> This may be a failure of recursive listing of an empty root directory; it's 
> transient because deletion inconsistencies mean the problem doesn't always 
> surface.






[jira] [Commented] (HADOOP-13517) TestS3NContractRootDir.testRecursiveRootListing failing

2016-08-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427052#comment-15427052
 ] 

Steve Loughran commented on HADOOP-13517:
-

{code}
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 188.216 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir
Running org.apache.hadoop.fs.contract.s3n.TestS3NContractRootDir
Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 32.057 sec <<< FAILURE! - in org.apache.hadoop.fs.contract.s3n.TestS3NContractRootDir
testRecursiveRootListing(org.apache.hadoop.fs.contract.s3n.TestS3NContractRootDir)  Time elapsed: 5.022 sec  <<< ERROR!
java.lang.IllegalArgumentException: Can not create a Path from an empty string
	at org.apache.hadoop.fs.Path.checkPathArg(Path.java:163)
	at org.apache.hadoop.fs.Path.<init>(Path.java:175)
	at org.apache.hadoop.fs.Path.<init>(Path.java:120)
	at org.apache.hadoop.fs.s3native.NativeS3FileSystem.listStatus(NativeS3FileSystem.java:589)
	at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1539)
	at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1582)
	at org.apache.hadoop.fs.FileSystem$4.<init>(FileSystem.java:1744)
	at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1743)
	at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1726)
	at org.apache.hadoop.fs.FileSystem$6.<init>(FileSystem.java:1825)
	at org.apache.hadoop.fs.FileSystem.listFiles(FileSystem.java:1821)
	at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRecursiveRootListing(AbstractContractRootDirectoryTest.java:171)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Running org.apache.hadoop.fs.s3a.TestS3ABlockingThreadPool
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.659 sec - in org.apache.hadoop.fs.s3a.TestS3ABlockingThreadPool
Running org.apache.hadoop.fs.s3a.TestS3AFastOutputStream
{code}

Looking at the stack, it's {{new Path("")}} that is failing
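The guard that throws here can be mirrored in isolation. This is a hypothetical stand-in for {{Path.checkPathArg}} (not the real Hadoop source, so no hadoop-common dependency is needed) that reproduces the error message seen in the stack above:

```java
// Stand-in for org.apache.hadoop.fs.Path.checkPathArg (hypothetical mirror,
// not the real Hadoop class): a null or empty path string is rejected before
// any parsing happens, which is exactly the error in the stack trace above.
class PathArgSketch {
    static void checkPathArg(String path) {
        if (path == null) {
            throw new IllegalArgumentException(
                "Can not create a Path from a null string");
        }
        if (path.length() == 0) {
            throw new IllegalArgumentException(
                "Can not create a Path from an empty string");
        }
    }

    public static void main(String[] args) {
        try {
            checkPathArg("");   // what new Path("") ends up doing
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
            // prints: Can not create a Path from an empty string
        }
        checkPathArg("/");      // a non-empty path passes the guard
    }
}
```

Presumably, then, the fix belongs in the S3N listing code (skipping whatever empty key leads to that constructor call) rather than in relaxing this guard.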

> TestS3NContractRootDir.testRecursiveRootListing failing
> ---
>
> Key: HADOOP-13517
> URL: https://issues.apache.org/jira/browse/HADOOP-13517
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> while doing s3a tests against trunk, one of the S3n tests, 
> {{TestS3NContractRootDir.testRecursiveRootListing}} failed. 
> This may be a failure of recursive listing of an empty root directory; it's 
> transient because deletion inconsistencies mean the problem doesn't always 
> surface.





