[jira] [Commented] (HADOOP-13488) Have TryOnceThenFail implement ConnectionRetryPolicy

2016-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15420650#comment-15420650
 ] 

Hadoop QA commented on HADOOP-13488:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  0m 28s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 10 new + 163 unchanged - 4 fixed = 173 total (was 167) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 54s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 22s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 30s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestSaslRPC |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12823652/HADOOP-13488.000.patch |
| JIRA Issue | HADOOP-13488 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 05c1bf14c938 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d677b68 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10245/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10245/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10245/testReport/ |
| asflicense | https://builds.apache.org/job/PreCommit-HADOOP-Build/10245/artifact/patchprocess/patch-asflicense-problems.txt |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10245/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



[jira] [Commented] (HADOOP-13488) Have TryOnceThenFail implement ConnectionRetryPolicy

2016-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15420662#comment-15420662
 ] 

Hadoop QA commented on HADOOP-13488:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  0m 29s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 10 new + 163 unchanged - 4 fixed = 173 total (was 167) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 54s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 20s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 58s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestSaslRPC |
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12823653/HADOOP-13488.000.patch |
| JIRA Issue | HADOOP-13488 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux e615a9a6077f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d677b68 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10246/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10246/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10246/testReport/ |
| asflicense | https://builds.apache.org/job/PreCommit-HADOOP-Build/10246/artifact/patchprocess/patch-asflicense-problems.txt |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10246/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |



[jira] [Created] (HADOOP-13497) fix wrong command in CredentialProviderAPI.md

2016-08-15 Thread Yuanbo Liu (JIRA)
Yuanbo Liu created HADOOP-13497:
---

 Summary: fix wrong command in CredentialProviderAPI.md
 Key: HADOOP-13497
 URL: https://issues.apache.org/jira/browse/HADOOP-13497
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yuanbo Liu
Assignee: Yuanbo Liu
Priority: Trivial


In CredentialProviderAPI.md line 122
{quote}
Example: `hadoop credential create ssl.server.keystore.password jceks://file/tmp/test.jceks`
{quote}
should be
{quote}
Example: `hadoop credential create ssl.server.keystore.password -provider jceks://file/tmp/test.jceks`
{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13061) Refactor erasure coders

2016-08-15 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15420788#comment-15420788
 ] 

Kai Zheng commented on HADOOP-13061:


Thanks Kai for the update! It looks like more changes need to be made to match the previous refactoring thoughts and discussions.

1. Could we get rid of the codec factories? They don't seem to be needed.
2. Could we change {{AbstractErasureCodec}} to: 1) remove the {{ErasureCodec}} interface; 2) remove {{Configured}} as the parent class.
3. In {{AbstractErasureCodec}}, we could have both coderOptions and codecOptions as members. Please refine the get/set methods accordingly.
4. Sorry, but could you move the new constants added to {{CoderUtil}} into {{CodecUtil}}? This will help keep {{CoderUtil}} internal.


> Refactor erasure coders
> ---
>
> Key: HADOOP-13061
> URL: https://issues.apache.org/jira/browse/HADOOP-13061
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Kai Sasaki
> Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch, 
> HADOOP-13061.03.patch, HADOOP-13061.04.patch, HADOOP-13061.05.patch, 
> HADOOP-13061.06.patch, HADOOP-13061.07.patch
>
>







[jira] [Updated] (HADOOP-13497) fix wrong command in CredentialProviderAPI.md

2016-08-15 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13497:

Component/s: documentation

> fix wrong command in CredentialProviderAPI.md
> -
>
> Key: HADOOP-13497
> URL: https://issues.apache.org/jira/browse/HADOOP-13497
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>Priority: Trivial
>
> In CredentialProviderAPI.md line 122
> {quote}
> Example: `hadoop credential create ssl.server.keystore.password jceks://file/tmp/test.jceks`
> {quote}
> should be
> {quote}
> Example: `hadoop credential create ssl.server.keystore.password -provider jceks://file/tmp/test.jceks`
> {quote}






[jira] [Updated] (HADOOP-13497) fix wrong command in CredentialProviderAPI.md

2016-08-15 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-13497:

Attachment: HADOOP-13497.001.patch

> fix wrong command in CredentialProviderAPI.md
> -
>
> Key: HADOOP-13497
> URL: https://issues.apache.org/jira/browse/HADOOP-13497
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>Priority: Trivial
> Attachments: HADOOP-13497.001.patch
>
>
> In CredentialProviderAPI.md line 122
> {quote}
> Example: `hadoop credential create ssl.server.keystore.password jceks://file/tmp/test.jceks`
> {quote}
> should be
> {quote}
> Example: `hadoop credential create ssl.server.keystore.password -provider jceks://file/tmp/test.jceks`
> {quote}






[jira] [Commented] (HADOOP-13446) S3Guard: Support running isolated unit tests separate from AWS integration tests.

2016-08-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15420830#comment-15420830
 ] 

Steve Loughran commented on HADOOP-13446:
-

(note that I do think this may need to be applied to the main branch rather 
than just the s3guard one. Without that, rebasing or merging branches is going 
to be very, very hard.)

> S3Guard: Support running isolated unit tests separate from AWS integration 
> tests.
> -
>
> Key: HADOOP-13446
> URL: https://issues.apache.org/jira/browse/HADOOP-13446
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13446-HADOOP-13345.001.patch, 
> HADOOP-13446-HADOOP-13345.002.patch, HADOOP-13446-HADOOP-13345.003.patch
>
>
> Currently, the hadoop-aws module only runs Surefire if AWS credentials have 
> been configured.  This implies that all tests must run integrated with the 
> AWS back-end.  It also means that no tests run as part of ASF pre-commit.  
> This issue proposes for the hadoop-aws module to support running isolated 
> unit tests without integrating with AWS.  This will benefit S3Guard, because 
> we expect the need for isolated mock-based testing to simulate eventual 
> consistency behavior.  It also benefits hadoop-aws in general by allowing 
> pre-commit to do something more valuable.
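For context on the proposal above, a common Maven pattern for this kind of split keeps isolated unit tests on Surefire (so they always run, credentials or not) and moves the AWS-backed tests to Failsafe under a separate naming convention. The fragment below is a hypothetical sketch under that convention (`ITest*` for integration tests), not the actual hadoop-aws `pom.xml`:

```xml
<!-- Hypothetical sketch: Surefire runs Test*.java unconditionally;
     Failsafe runs ITest*.java during the integration-test phase,
     which a profile could gate on configured AWS credentials. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <includes>
          <include>**/Test*.java</include>
        </includes>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-failsafe-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>integration-test</goal>
            <goal>verify</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <includes>
          <include>**/ITest*.java</include>
        </includes>
      </configuration>
    </plugin>
  </plugins>
</build>
```

With a layout like this, `mvn test` exercises only the isolated unit tests, while `mvn verify` also runs the AWS integration suite.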






[jira] [Updated] (HADOOP-13483) file-create should throw error rather than overwrite directories

2016-08-15 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

uncleGen updated HADOOP-13483:
--
Parent Issue: HADOOP-12756  (was: HADOOP-13479)

> file-create should throw error rather than overwrite directories
> 
>
> Key: HADOOP-13483
> URL: https://issues.apache.org/jira/browse/HADOOP-13483
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13483-HADOOP-12756.002.patch, 
> HADOOP-13483.001.patch
>
>
> similar to [HADOOP-13188|https://issues.apache.org/jira/browse/HADOOP-13188]






[jira] [Updated] (HADOOP-13481) User end documents for Aliyun OSS FileSystem

2016-08-15 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

uncleGen updated HADOOP-13481:
--
Parent Issue: HADOOP-12756  (was: HADOOP-13479)

> User end documents for Aliyun OSS FileSystem
> 
>
> Key: HADOOP-13481
> URL: https://issues.apache.org/jira/browse/HADOOP-13481
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
>Priority: Minor
> Fix For: HADOOP-12756
>
>







[jira] [Updated] (HADOOP-13491) fix several warnings from findbugs

2016-08-15 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

uncleGen updated HADOOP-13491:
--
Parent Issue: HADOOP-12756  (was: HADOOP-13479)

> fix several warnings from findbugs
> --
>
> Key: HADOOP-13491
> URL: https://issues.apache.org/jira/browse/HADOOP-13491
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
>
> {code:title=Bad practice Warnings|borderStyle=solid}
> Code  Warning
> RRorg.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores 
> result of java.io.InputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long)
> Called method java.io.InputStream.skip(long)
> At AliyunOSSInputStream.java:[line 235]
> RR
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() 
> ignores result of java.io.FileInputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject()
> Called method java.io.FileInputStream.skip(long)
> At AliyunOSSOutputStream.java:[line 177]
> RVExceptional return value of java.io.File.delete() ignored in 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Called method java.io.File.delete()
> At AliyunOSSOutputStream.java:[line 116]
> {code}
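The two SR_NOT_CHECKED warnings above flag the same underlying issue: `InputStream.skip(long)` may legitimately skip fewer bytes than requested, so ignoring its return value can silently mis-position the stream. A minimal, hypothetical fix pattern (an illustration, not the actual patch) loops until the requested count is skipped or EOF is reached:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class SkipUtil {
    /**
     * Skip up to n bytes, retrying until either n bytes have been skipped
     * or the stream is exhausted. Returns the number actually skipped.
     */
    static long skipFully(InputStream in, long n) throws IOException {
        long remaining = n;
        while (remaining > 0) {
            long skipped = in.skip(remaining);
            if (skipped > 0) {
                remaining -= skipped;
            } else if (in.read() >= 0) {
                // skip() returned 0 but the stream is not at EOF:
                // consume a single byte and keep going.
                remaining--;
            } else {
                break; // EOF before n bytes were skipped
            }
        }
        return n - remaining;
    }
}
```

A caller can then treat a short skip as an error or an EOF condition explicitly instead of assuming the full count was consumed.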
> {code:title=Multithreaded correctness Warnings|borderStyle=solid}
> Code  Warning
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked 
> 90% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining
> Synchronized 90% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Synchronized access at AliyunOSSInputStream.java:[line 106]
> Synchronized access at AliyunOSSInputStream.java:[line 168]
> Synchronized access at AliyunOSSInputStream.java:[line 189]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 190]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of 
> time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position
> Synchronized 66% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 232]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 235]
> Unsynchronized access at AliyunOSSInputStream.java:[line 236]
> Unsynchronized access at AliyunOSSInputStream.java:[line 245]
> Synchronized access at AliyunOSSInputStream.java:[line 222]
> Synchronized access at AliyunOSSInputStream.java:[line 105]
> Synchronized access at AliyunOSSInputStream.java:[line 167]
> Synchronized access at AliyunOSSInputStream.java:[line 169]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 114]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 259]
> Synchronized access at AliyunOSSInputStream.java:[line 266]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream; locked 
> 85% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream
> Synchronized 85% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 235]
> Synchronized access at 

[jira] [Updated] (HADOOP-13482) Provide hadoop-aliyun oss configuration documents

2016-08-15 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

uncleGen updated HADOOP-13482:
--
Parent Issue: HADOOP-12756  (was: HADOOP-13479)

> Provide hadoop-aliyun oss configuration documents
> -
>
> Key: HADOOP-13482
> URL: https://issues.apache.org/jira/browse/HADOOP-13482
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
>Priority: Minor
> Fix For: HADOOP-12756
>
>







[jira] [Issue Comment Deleted] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-08-15 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

uncleGen updated HADOOP-12756:
--
Comment: was deleted

(was: [HADOOP-13479|https://issues.apache.org/jira/browse/HADOOP-13479] will do 
some preparation and improvements before release.)

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0, HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, HCFS 
> User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but it is currently not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read and write data on OSS without any code 
> change, narrowing the gap between user applications and data storage, as has 
> been done for S3 in Hadoop.






[jira] [Resolved] (HADOOP-13479) Aliyun OSS phase I: some preparation and improvements before release

2016-08-15 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

uncleGen resolved HADOOP-13479.
---
Resolution: Duplicate

> Aliyun OSS phase I: some preparation and improvements before release
> 
>
> Key: HADOOP-13479
> URL: https://issues.apache.org/jira/browse/HADOOP-13479
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
>







[jira] [Updated] (HADOOP-13491) fix several warnings from findbugs

2016-08-15 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

uncleGen updated HADOOP-13491:
--
Status: Patch Available  (was: In Progress)

> fix several warnings from findbugs
> --
>
> Key: HADOOP-13491
> URL: https://issues.apache.org/jira/browse/HADOOP-13491
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13491-HADOOP-12756.001.patch
>
>
> {code:title=Bad practice Warnings|borderStyle=solid}
> Code  Warning
> RRorg.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores 
> result of java.io.InputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long)
> Called method java.io.InputStream.skip(long)
> At AliyunOSSInputStream.java:[line 235]
> RR
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() 
> ignores result of java.io.FileInputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject()
> Called method java.io.FileInputStream.skip(long)
> At AliyunOSSOutputStream.java:[line 177]
> RVExceptional return value of java.io.File.delete() ignored in 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Called method java.io.File.delete()
> At AliyunOSSOutputStream.java:[line 116]
> {code}
> {code:title=Multithreaded correctness Warnings|borderStyle=solid}
> Code  Warning
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked 
> 90% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining
> Synchronized 90% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Synchronized access at AliyunOSSInputStream.java:[line 106]
> Synchronized access at AliyunOSSInputStream.java:[line 168]
> Synchronized access at AliyunOSSInputStream.java:[line 189]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 190]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of 
> time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position
> Synchronized 66% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 232]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 235]
> Unsynchronized access at AliyunOSSInputStream.java:[line 236]
> Unsynchronized access at AliyunOSSInputStream.java:[line 245]
> Synchronized access at AliyunOSSInputStream.java:[line 222]
> Synchronized access at AliyunOSSInputStream.java:[line 105]
> Synchronized access at AliyunOSSInputStream.java:[line 167]
> Synchronized access at AliyunOSSInputStream.java:[line 169]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 114]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 259]
> Synchronized access at AliyunOSSInputStream.java:[line 266]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream; locked 
> 85% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream
> Synchronized 85% of the time
> Unsynchronized access at 

[jira] [Updated] (HADOOP-13494) ReconfigurableBase can log sensitive information

2016-08-15 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13494:
---
Status: Patch Available  (was: Open)

> ReconfigurableBase can log sensitive information
> 
>
> Key: HADOOP-13494
> URL: https://issues.apache.org/jira/browse/HADOOP-13494
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13494.001.patch
>
>
> ReconfigurableBase will log old and new configuration values, which may cause 
> sensitive parameters (most notably cloud storage keys, though there may be 
> other instances) to get included in the logs. 
> Given the currently small list of reconfigurable properties, an argument 
> could be made for simply not logging the property values at all, but this is 
> not the only instance where potentially sensitive configuration gets written 
> somewhere else in plaintext. I think a generic mechanism for redacting 
> sensitive information for textual display will be useful to some of the web 
> UIs too.
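The generic redaction mechanism proposed above can be sketched roughly as follows. All names here (RedactorSketch, the comma-separated regex format) are hypothetical illustrations, not the actual patch: property keys matching any of a configurable list of regular expressions get their values masked before textual display.

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Hypothetical sketch of a config redactor: property keys matching any
// sensitive-key pattern have their values replaced before logging.
public class RedactorSketch {
    private static final String REDACTED = "<redacted>";
    private final List<Pattern> patterns;

    public RedactorSketch(String commaSeparatedRegexes) {
        // Compile once per instance; each entry is a regex over property keys.
        // Keeping this an instance field (not a static written from the
        // constructor) avoids an inconsistent-state findbugs warning.
        patterns = Arrays.stream(commaSeparatedRegexes.split(","))
                .map(Pattern::compile)
                .collect(Collectors.toList());
    }

    public String redact(String key, String value) {
        for (Pattern p : patterns) {
            if (p.matcher(key).find()) {
                return REDACTED;
            }
        }
        return value;
    }

    public static void main(String[] args) {
        RedactorSketch r = new RedactorSketch("secret\\.key,password");
        System.out.println(r.redact("fs.s3a.secret.key", "hunter2")); // <redacted>
        System.out.println(r.redact("dfs.replication", "3"));         // 3
    }
}
```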



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13494) ReconfigurableBase can log sensitive information

2016-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421094#comment-15421094
 ] 

Hadoop QA commented on HADOOP-13494:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 8 new + 4 unchanged - 1 fixed = 12 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
29s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
2s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Write to static field 
org.apache.hadoop.conf.ConfigRedactor.compiledPatterns from instance method new 
org.apache.hadoop.conf.ConfigRedactor()  At ConfigRedactor.java:from instance 
method new org.apache.hadoop.conf.ConfigRedactor()  At 
ConfigRedactor.java:[line 46] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823535/HADOOP-13494.001.patch
 |
| JIRA Issue | HADOOP-13494 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cfd55624f234 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d677b68 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10247/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10247/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10247/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 

[jira] [Commented] (HADOOP-13491) fix several warnings from findbugs

2016-08-15 Thread shimingfei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422110#comment-15422110
 ] 

shimingfei commented on HADOOP-13491:
-

[~uncleGen] Thanks for your patch!
1. There are still 8 warnings from findbugs
2. multipartUploadObject requires the semantics of skipFully to be "skip 
exactly n bytes from the input stream", not "skip at most n bytes"; otherwise 
the uploaded data may be incorrect.
3. The temporary file should be deleted after uploading to OSS.
{code}
-try {
-  if (dataLen <= partSizeThreshold) {
-uploadObject();
-  } else {
-multipartUploadObject();
-  }
-} finally {
-  tmpFile.delete();
+if (dataLen <= partSizeThreshold) {
+  uploadObject();
+} else {
+  multipartUploadObject();
 }
{code}
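The "skip exactly n bytes" semantics asked for in point 2 can be sketched as below (the class and method names are hypothetical; Hadoop's own org.apache.hadoop.io.IOUtils.skipFully is similar in spirit). The key point is that java.io.InputStream.skip() may legitimately skip fewer bytes than requested, so the caller must loop and fail loudly on premature end of stream.

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// Sketch: loop until exactly n bytes are consumed, since skip() may
// skip fewer; throw if the stream ends first.
public class SkipFullySketch {
    public static void skipFully(InputStream in, long n) throws IOException {
        long remaining = n;
        while (remaining > 0) {
            long skipped = in.skip(remaining);
            if (skipped > 0) {
                remaining -= skipped;
            } else if (in.read() >= 0) {
                // skip() returned 0 but the stream is not exhausted:
                // consume one byte and keep going.
                remaining--;
            } else {
                throw new EOFException(
                    "Stream ended with " + remaining + " bytes left to skip");
            }
        }
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[10]);
        skipFully(in, 7);
        System.out.println(in.available()); // 3
    }
}
```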

> fix several warnings from findbugs
> --
>
> Key: HADOOP-13491
> URL: https://issues.apache.org/jira/browse/HADOOP-13491
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13491-HADOOP-12756.001.patch, 
> HADOOP-13491-HADOOP-12756.002.patch
>
>
> {code:title=Bad practice Warnings|borderStyle=solid}
> Code  Warning
> RR  org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores 
> result of java.io.InputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long)
> Called method java.io.InputStream.skip(long)
> At AliyunOSSInputStream.java:[line 235]
> RR
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() 
> ignores result of java.io.FileInputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject()
> Called method java.io.FileInputStream.skip(long)
> At AliyunOSSOutputStream.java:[line 177]
> RV  Exceptional return value of java.io.File.delete() ignored in 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Called method java.io.File.delete()
> At AliyunOSSOutputStream.java:[line 116]
> {code}
> {code:title=Multithreaded correctness Warnings|borderStyle=solid}
> Code  Warning
> IS  Inconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked 
> 90% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining
> Synchronized 90% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Synchronized access at AliyunOSSInputStream.java:[line 106]
> Synchronized access at AliyunOSSInputStream.java:[line 168]
> Synchronized access at AliyunOSSInputStream.java:[line 189]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 190]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> IS  Inconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of 
> time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position
> Synchronized 66% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 232]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 235]
> Unsynchronized access at AliyunOSSInputStream.java:[line 236]
> Unsynchronized access at AliyunOSSInputStream.java:[line 245]
> Synchronized access at AliyunOSSInputStream.java:[line 222]
> Synchronized access at AliyunOSSInputStream.java:[line 105]
> Synchronized access at AliyunOSSInputStream.java:[line 167]
> Synchronized access at AliyunOSSInputStream.java:[line 169]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized 

[jira] [Updated] (HADOOP-13470) GenericTestUtils$LogCapturer is flaky

2016-08-15 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13470:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to {{trunk}}, {{branch-2}} and {{branch-2.8}}. Thanks [~jnp] for 
review.

> GenericTestUtils$LogCapturer is flaky
> -
>
> Key: HADOOP-13470
> URL: https://issues.apache.org/jira/browse/HADOOP-13470
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test, util
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>  Labels: reviewed
> Fix For: 2.8.0
>
> Attachments: HADOOP-13470.000.patch, HADOOP-13470.001.patch
>
>
> {{GenericTestUtils$LogCapturer}} is useful for assertions against service 
> logs. However, it should be fixed in the following aspects:
> # In the constructor, it uses the stdout appender's layout.
> {code}
> Layout layout = Logger.getRootLogger().getAppender("stdout").getLayout();
> {code}
> However, the stdout appender may be named "console" or similar, which makes 
> the constructor throw an NPE. Actually the layout does not matter, and we can 
> use a default pattern layout that only captures application logs.
> # The {{stopCapturing()}} method does not work. The major reason is that the 
> {{appender}} internal variable is never assigned, so removing it to stop 
> capturing has no effect.
> # It does not support {{org.slf4j.Logger}} which is preferred to log4j in 
> many modules.
> # There is no unit test for it.
> This jira is to address these.
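The capture-and-stop pattern described above can be illustrated with an analogous sketch using java.util.logging instead of log4j (so it is self-contained; LogCapturerSketch and its methods are hypothetical names). The two fixes it demonstrates: the capturer attaches its own handler with its own formatter, so no lookup of a named "stdout" appender (and no NPE) is needed, and the handler reference is saved so stopCapturing() can actually remove it.

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

// Analogous sketch with java.util.logging: capture log output via our
// own handler, and keep the handler reference so it can be removed.
public class LogCapturerSketch {
    private final StringBuilder sb = new StringBuilder();
    private final Logger logger;
    private final Handler handler = new Handler() {
        { setFormatter(new SimpleFormatter()); }
        @Override public void publish(LogRecord record) {
            sb.append(getFormatter().formatMessage(record)).append('\n');
        }
        @Override public void flush() {}
        @Override public void close() {}
    };

    public LogCapturerSketch(Logger logger) {
        this.logger = logger;
        handler.setLevel(Level.ALL);
        logger.addHandler(handler);  // reference kept for stopCapturing()
    }

    public String getOutput() { return sb.toString(); }

    public void stopCapturing() {
        // Works because the handler reference was saved at construction.
        logger.removeHandler(handler);
    }

    public static void main(String[] args) {
        Logger log = Logger.getLogger("capTest");
        LogCapturerSketch cap = new LogCapturerSketch(log);
        log.warning("something failed");
        cap.stopCapturing();
        log.warning("not captured");
        System.out.println(cap.getOutput().contains("something failed")); // true
        System.out.println(cap.getOutput().contains("not captured"));     // false
    }
}
```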






[jira] [Commented] (HADOOP-13470) GenericTestUtils$LogCapturer is flaky

2016-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422146#comment-15422146
 ] 

Hudson commented on HADOOP-13470:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10280 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10280/])
HADOOP-13470. GenericTestUtils$LogCapturer is flaky. (Contributed by (liuml07: 
rev 9336a0495f99cd3fbc7ecef452eb37cfbaf57440)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java


> GenericTestUtils$LogCapturer is flaky
> -
>
> Key: HADOOP-13470
> URL: https://issues.apache.org/jira/browse/HADOOP-13470
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test, util
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>  Labels: reviewed
> Fix For: 2.8.0
>
> Attachments: HADOOP-13470.000.patch, HADOOP-13470.001.patch
>
>
> {{GenericTestUtils$LogCapturer}} is useful for assertions against service 
> logs. However, it should be fixed in the following aspects:
> # In the constructor, it uses the stdout appender's layout.
> {code}
> Layout layout = Logger.getRootLogger().getAppender("stdout").getLayout();
> {code}
> However, the stdout appender may be named "console" or similar, which makes 
> the constructor throw an NPE. Actually the layout does not matter, and we can 
> use a default pattern layout that only captures application logs.
> # The {{stopCapturing()}} method does not work. The major reason is that the 
> {{appender}} internal variable is never assigned, so removing it to stop 
> capturing has no effect.
> # It does not support {{org.slf4j.Logger}} which is preferred to log4j in 
> many modules.
> # There is no unit test for it.
> This jira is to address these.






[jira] [Commented] (HADOOP-13419) Fix javadoc warnings by JDK8 in hadoop-common package

2016-08-15 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422176#comment-15422176
 ] 

Masatake Iwasaki commented on HADOOP-13419:
---

Though the patch is applicable, there are additional javadoc warnings in 
branch-2. Are you willing to update the patch for branch-2, [~lewuathe]?

{noformat}
[WARNING] 
/home/iwasakims/srcs/hadoop-branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FenceMethod.java:63:
 warning - @param argument "serviceAddr" is not a parameter name.
[WARNING] 
/home/iwasakims/srcs/hadoop-branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java:162:
 warning - Tag @see cannot be used in inline documentation.  It can only be 
used in the following types of documentation: overview, package, 
class/interface, constructor, field, method.
[WARNING] 
/home/iwasakims/srcs/hadoop-branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java:162:
 warning - Tag @see cannot be used in inline documentation.  It can only be 
used in the following types of documentation: overview, package, 
class/interface, constructor, field, method.
[WARNING] 
/home/iwasakims/srcs/hadoop-branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:2448:
 warning - Tag @link: can't find 
registerProtocolEngine(RpcPayloadHeader.RpcKind,
[WARNING] Class, RPC.RpcInvoker) in org.apache.hadoop.ipc.Server
[WARNING] 
/home/iwasakims/srcs/hadoop-branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:2760:
 warning - Tag @link: can't find call(RpcPayloadHeader.RpcKind, String,
[WARNING] Writable, long) in org.apache.hadoop.ipc.Server
{noformat}
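The first warning above is the typical doclint complaint: an {{@param}} tag naming something other than an actual method parameter. A hypothetical before/after illustration (JavadocFixSketch and tryFence are made-up names, not the real FenceMethod API):

```java
// Hypothetical illustration: under JDK8's doclint, an @param tag must
// name an actual parameter of the method it documents.
public class JavadocFixSketch {

    /**
     * Attempts to fence the given target.
     *
     * @param target the service to fence; the tag must match the parameter
     *               name exactly (a stale "@param serviceAddr" here would
     *               trigger the warning quoted above)
     * @return true if fencing succeeded
     */
    public boolean tryFence(String target) {
        return target != null;
    }

    public static void main(String[] args) {
        System.out.println(new JavadocFixSketch().tryFence("host:8020")); // true
    }
}
```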


> Fix javadoc warnings by JDK8 in hadoop-common package
> -
>
> Key: HADOOP-13419
> URL: https://issues.apache.org/jira/browse/HADOOP-13419
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: HADOOP-13419.01.patch, HADOOP-13419.02.patch, 
> HADOOP-13419.03.patch
>
>
> Fix compile warning generated after migrate JDK8.
> This is a subtask of HADOOP-13369.






[jira] [Commented] (HADOOP-13419) Fix javadoc warnings by JDK8 in hadoop-common package

2016-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422175#comment-15422175
 ] 

Hudson commented on HADOOP-13419:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10281 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10281/])
HADOOP-13419. Fix javadoc warnings by JDK8 in hadoop-common package. 
(iwasakims: rev b8a446ba57d89c0896ec2d56dd919b0101e69f44)
* (delete) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/package.html
* (delete) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/package.html
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/package-info.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/package-info.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java


> Fix javadoc warnings by JDK8 in hadoop-common package
> -
>
> Key: HADOOP-13419
> URL: https://issues.apache.org/jira/browse/HADOOP-13419
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: HADOOP-13419.01.patch, HADOOP-13419.02.patch, 
> HADOOP-13419.03.patch
>
>
> Fix compile warning generated after migrate JDK8.
> This is a subtask of HADOOP-13369.






[jira] [Commented] (HADOOP-13491) fix several warnings from findbugs

2016-08-15 Thread shimingfei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422124#comment-15422124
 ] 

shimingfei commented on HADOOP-13491:
-

Got it!

> fix several warnings from findbugs
> --
>
> Key: HADOOP-13491
> URL: https://issues.apache.org/jira/browse/HADOOP-13491
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13491-HADOOP-12756.001.patch, 
> HADOOP-13491-HADOOP-12756.002.patch
>
>
> {code:title=Bad practice Warnings|borderStyle=solid}
> Code  Warning
> RR  org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores 
> result of java.io.InputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long)
> Called method java.io.InputStream.skip(long)
> At AliyunOSSInputStream.java:[line 235]
> RR
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() 
> ignores result of java.io.FileInputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject()
> Called method java.io.FileInputStream.skip(long)
> At AliyunOSSOutputStream.java:[line 177]
> RV  Exceptional return value of java.io.File.delete() ignored in 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Called method java.io.File.delete()
> At AliyunOSSOutputStream.java:[line 116]
> {code}
> {code:title=Multithreaded correctness Warnings|borderStyle=solid}
> Code  Warning
> IS  Inconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked 
> 90% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining
> Synchronized 90% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Synchronized access at AliyunOSSInputStream.java:[line 106]
> Synchronized access at AliyunOSSInputStream.java:[line 168]
> Synchronized access at AliyunOSSInputStream.java:[line 189]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 190]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> IS  Inconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of 
> time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position
> Synchronized 66% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 232]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 235]
> Unsynchronized access at AliyunOSSInputStream.java:[line 236]
> Unsynchronized access at AliyunOSSInputStream.java:[line 245]
> Synchronized access at AliyunOSSInputStream.java:[line 222]
> Synchronized access at AliyunOSSInputStream.java:[line 105]
> Synchronized access at AliyunOSSInputStream.java:[line 167]
> Synchronized access at AliyunOSSInputStream.java:[line 169]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 114]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 259]
> Synchronized access at AliyunOSSInputStream.java:[line 266]
> IS  Inconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream; locked 
> 85% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream
> Synchronized 85% 

[jira] [Commented] (HADOOP-13252) Tune S3A provider plugin mechanism

2016-08-15 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422171#comment-15422171
 ] 

Chris Nauroth commented on HADOOP-13252:


The patch looks good.  I have just a few minor comments.

In core-default.xml, please mention that the list of credentials provider 
classes is comma-separated.

Please add visibility/stability annotations to {{AWSCredentialProviderList}}.

{code}
which integrate with the AWS SDK by implementing the 
`om.amazonaws.auth.AWSCredentialsProvider`.
{code}

Typo in class name.

{code}
1. Alowing anonymous access to an S3 bucket compromises
{code}

Typo: "Allowing"

{code}
from placing its declaration on the commant line.
{code}

Typo: "command"


> Tune S3A provider plugin mechanism
> --
>
> Key: HADOOP-13252
> URL: https://issues.apache.org/jira/browse/HADOOP-13252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-branch-2-001.patch, 
> HADOOP-13252-branch-2-003.patch, HADOOP-13252-branch-2-004.patch
>
>
> We've now got some fairly complex auth mechanisms going on: Hadoop config, 
> KMS, env vars, "none". If something isn't working, it's going to be a lot 
> harder to debug.
> Review and tune the S3A provider point:
> * add logging of what's going on in s3 auth to help debug problems
> * make a whole chain of logins expressible
> * allow the anonymous credentials to be included in the list
> * review and update the documents.
> I propose *carefully* adding some debug messages to identify which auth 
> provider is doing the auth, so we can see if the env vars were kicking in, 
> sysprops, etc.
> What we mustn't do is leak any secrets: this should be identifying whether 
> properties and env vars are set, not what their values are. I don't believe 
> that this will generate a security risk.






[jira] [Reopened] (HADOOP-13319) S3A to list InstanceProfileCredentialsProvider after EnvironmentVariableCredentialsProvider

2016-08-15 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reopened HADOOP-13319:


> S3A to list InstanceProfileCredentialsProvider after 
> EnvironmentVariableCredentialsProvider
> ---
>
> Key: HADOOP-13319
> URL: https://issues.apache.org/jira/browse/HADOOP-13319
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-branch-2-001.patch
>
>
> S3A now has a default list of credential providers to pick up AWS credentials 
> from
> The environment variable provider added in HADOOP-12807 should go before the 
> {{InstanceProfileCredentialsProvider}} one in the list, as it does a simple 
> env var checkup. In contrast  {{InstanceProfileCredentialsProvider}} does an 
> HTTP request *even when not running on EC2*. It may block for up to 2s to 
> await a timeout, and network problems could take longer.
> Checking env vars is a low cost operation that shouldn't have to wait for a 
> network timeout before being picked up.
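The reordering argued for above would be expressed as a provider-list configuration entry, roughly like the following sketch (assuming the comma-separated {{fs.s3a.aws.credentials.provider}} property introduced by the HADOOP-13252 work; the exact class list is illustrative):

```xml
<!-- core-site.xml sketch: put the cheap environment-variable check
     ahead of the EC2 instance-profile lookup, which may block on an
     HTTP timeout when not running on EC2. -->
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>
    com.amazonaws.auth.EnvironmentVariableCredentialsProvider,
    com.amazonaws.auth.InstanceProfileCredentialsProvider
  </value>
</property>
```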






[jira] [Updated] (HADOOP-13319) S3A to list InstanceProfileCredentialsProvider after EnvironmentVariableCredentialsProvider

2016-08-15 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13319:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

I'm resolving this as a duplicate of HADOOP-13252.  I think that one will be 
ready to commit soon.

> S3A to list InstanceProfileCredentialsProvider after 
> EnvironmentVariableCredentialsProvider
> ---
>
> Key: HADOOP-13319
> URL: https://issues.apache.org/jira/browse/HADOOP-13319
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-branch-2-001.patch
>
>
> S3A now has a default list of credential providers to pick up AWS credentials 
> from
> The environment variable provider added in HADOOP-12807 should go before the 
> {{InstanceProfileCredentialsProvider}} one in the list, as it does a simple 
> env var checkup. In contrast  {{InstanceProfileCredentialsProvider}} does an 
> HTTP request *even when not running on EC2*. It may block for up to 2s to 
> await a timeout, and network problems could take longer.
> Checking env vars is a low cost operation that shouldn't have to wait for a 
> network timeout before being picked up.






[jira] [Resolved] (HADOOP-13319) S3A to list InstanceProfileCredentialsProvider after EnvironmentVariableCredentialsProvider

2016-08-15 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-13319.

Resolution: Duplicate

> S3A to list InstanceProfileCredentialsProvider after 
> EnvironmentVariableCredentialsProvider
> ---
>
> Key: HADOOP-13319
> URL: https://issues.apache.org/jira/browse/HADOOP-13319
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13252-branch-2-001.patch
>
>
> S3A now has a default list of credential providers to pick up AWS credentials 
> from
> The environment variable provider added in HADOOP-12807 should go before the 
> {{InstanceProfileCredentialsProvider}} one in the list, as it does a simple 
> env var checkup. In contrast  {{InstanceProfileCredentialsProvider}} does an 
> HTTP request *even when not running on EC2*. It may block for up to 2s to 
> await a timeout, and network problems could take longer.
> Checking env vars is a low cost operation that shouldn't have to wait for a 
> network timeout before being picked up.






[jira] [Comment Edited] (HADOOP-13491) fix several warnings from findbugs

2016-08-15 Thread shimingfei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422124#comment-15422124
 ] 

shimingfei edited comment on HADOOP-13491 at 8/16/16 4:23 AM:
--

Got it!
But deleteOnExit is only triggered when the JVM exits, which can cause huge 
disk usage for long-running jobs.
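The eager-cleanup alternative to deleteOnExit can be sketched as follows (hypothetical names; upload() stands in for the real uploadObject()/multipartUploadObject() calls): delete the temporary file in a finally block as soon as the upload finishes, and check the result of delete().

```java
import java.io.File;
import java.io.IOException;

// Sketch: delete the temp file eagerly in finally, instead of relying
// on deleteOnExit(), which only runs at JVM shutdown.
public class TmpFileCleanupSketch {
    static void upload(File f) { /* placeholder for the OSS upload */ }

    public static void uploadAndCleanUp(File tmpFile) throws IOException {
        try {
            upload(tmpFile);
        } finally {
            if (!tmpFile.delete()) {
                // Checking the result also silences findbugs'
                // RV_RETURN_VALUE_IGNORED_BAD_PRACTICE warning.
                System.err.println("Failed to delete " + tmpFile);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("oss-part", ".tmp");
        uploadAndCleanUp(tmp);
        System.out.println(tmp.exists()); // false
    }
}
```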


was (Author: shimingfei):
Got it!

> fix several warnings from findbugs
> --
>
> Key: HADOOP-13491
> URL: https://issues.apache.org/jira/browse/HADOOP-13491
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13491-HADOOP-12756.001.patch, 
> HADOOP-13491-HADOOP-12756.002.patch
>
>
> {code:title=Bad practice Warnings|borderStyle=solid}
> Code  Warning
> RRorg.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores 
> result of java.io.InputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long)
> Called method java.io.InputStream.skip(long)
> At AliyunOSSInputStream.java:[line 235]
> RR
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() 
> ignores result of java.io.FileInputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject()
> Called method java.io.FileInputStream.skip(long)
> At AliyunOSSOutputStream.java:[line 177]
> RVExceptional return value of java.io.File.delete() ignored in 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Called method java.io.File.delete()
> At AliyunOSSOutputStream.java:[line 116]
> {code}
> {code:title=Multithreaded correctness Warnings|borderStyle=solid}
> Code  Warning
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked 
> 90% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining
> Synchronized 90% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Synchronized access at AliyunOSSInputStream.java:[line 106]
> Synchronized access at AliyunOSSInputStream.java:[line 168]
> Synchronized access at AliyunOSSInputStream.java:[line 189]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 190]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of 
> time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position
> Synchronized 66% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 232]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 235]
> Unsynchronized access at AliyunOSSInputStream.java:[line 236]
> Unsynchronized access at AliyunOSSInputStream.java:[line 245]
> Synchronized access at AliyunOSSInputStream.java:[line 222]
> Synchronized access at AliyunOSSInputStream.java:[line 105]
> Synchronized access at AliyunOSSInputStream.java:[line 167]
> Synchronized access at AliyunOSSInputStream.java:[line 169]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 114]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 259]
> Synchronized access at AliyunOSSInputStream.java:[line 266]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream; locked 
> 85% of time
> Bug 

[jira] [Commented] (HADOOP-13419) Fix javadoc warnings by JDK8 in hadoop-common package

2016-08-15 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422152#comment-15422152
 ] 

Masatake Iwasaki commented on HADOOP-13419:
---

+1

> Fix javadoc warnings by JDK8 in hadoop-common package
> -
>
> Key: HADOOP-13419
> URL: https://issues.apache.org/jira/browse/HADOOP-13419
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: HADOOP-13419.01.patch, HADOOP-13419.02.patch, 
> HADOOP-13419.03.patch
>
>
> Fix compile warnings generated after migrating to JDK8.
> This is a subtask of HADOOP-13369.






[jira] [Commented] (HADOOP-13491) fix several warnings from findbugs

2016-08-15 Thread uncleGen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422119#comment-15422119
 ] 

uncleGen commented on HADOOP-13491:
---

Hi, [~shimingfei]

1. The 8 warnings come from the original feature branch (HADOOP-12756). After 
applying the patch, the 8 warnings are fixed.
2. Agree with you.
3. The temporary file is created with 'deleteOnExit', so I removed the 
duplicated 'delete'.
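For reference, the SR_NOT_CHECKED warnings quoted below are typically resolved by looping until the requested bytes are actually skipped, since InputStream.skip() may legally skip fewer bytes than asked. A minimal sketch (the helper name is illustrative, not taken from the patch):

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class SkipFully {
  /**
   * Skip exactly n bytes, retrying because InputStream.skip() may skip
   * fewer bytes than requested (or even zero) without reaching EOF.
   */
  public static void skipFully(InputStream in, long n) throws IOException {
    while (n > 0) {
      long skipped = in.skip(n);
      if (skipped > 0) {
        n -= skipped;
      } else if (in.read() >= 0) {
        // skip() made no progress; fall back to reading one byte
        n--;
      } else {
        throw new EOFException("Stream ended with " + n + " bytes left to skip");
      }
    }
  }

  public static void main(String[] args) throws IOException {
    InputStream in = new ByteArrayInputStream(new byte[]{1, 2, 3, 4, 5});
    skipFully(in, 3);
    assert in.read() == 4;  // the fourth byte is next
  }
}
```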

> fix several warnings from findbugs
> --
>
> Key: HADOOP-13491
> URL: https://issues.apache.org/jira/browse/HADOOP-13491
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13491-HADOOP-12756.001.patch, 
> HADOOP-13491-HADOOP-12756.002.patch
>
>
> {code:title=Bad practice Warnings|borderStyle=solid}
> Code  Warning
> RRorg.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores 
> result of java.io.InputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long)
> Called method java.io.InputStream.skip(long)
> At AliyunOSSInputStream.java:[line 235]
> RR
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() 
> ignores result of java.io.FileInputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject()
> Called method java.io.FileInputStream.skip(long)
> At AliyunOSSOutputStream.java:[line 177]
> RVExceptional return value of java.io.File.delete() ignored in 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Called method java.io.File.delete()
> At AliyunOSSOutputStream.java:[line 116]
> {code}
> {code:title=Multithreaded correctness Warnings|borderStyle=solid}
> Code  Warning
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked 
> 90% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining
> Synchronized 90% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Synchronized access at AliyunOSSInputStream.java:[line 106]
> Synchronized access at AliyunOSSInputStream.java:[line 168]
> Synchronized access at AliyunOSSInputStream.java:[line 189]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 190]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of 
> time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position
> Synchronized 66% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 232]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 235]
> Unsynchronized access at AliyunOSSInputStream.java:[line 236]
> Unsynchronized access at AliyunOSSInputStream.java:[line 245]
> Synchronized access at AliyunOSSInputStream.java:[line 222]
> Synchronized access at AliyunOSSInputStream.java:[line 105]
> Synchronized access at AliyunOSSInputStream.java:[line 167]
> Synchronized access at AliyunOSSInputStream.java:[line 169]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 114]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 259]
> Synchronized access at AliyunOSSInputStream.java:[line 266]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream; 

[jira] [Comment Edited] (HADOOP-13491) fix several warnings from findbugs

2016-08-15 Thread uncleGen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422119#comment-15422119
 ] 

uncleGen edited comment on HADOOP-13491 at 8/16/16 3:48 AM:


Hi, [~shimingfei]

1. The 8 warnings come from the original feature branch (HADOOP-12756). After 
applying the patch, the 8 warnings are fixed.
2. Agree with you.
3. The temporary file is created with 'deleteOnExit', so I removed the 
duplicated final 'delete' operation.
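As for the IS2_INCONSISTENT_SYNC warnings quoted below, they generally mean a field is guarded by the object lock in some methods but read or written without it elsewhere; the usual fix is to route every access through synchronized methods. A generic sketch (the field name is borrowed from the report, the rest is illustrative):

```java
public class ConsistentSync {
  // Every access to position goes through synchronized methods, so
  // FindBugs no longer sees a mix of locked and unlocked uses.
  private long position;

  public synchronized long getPos() {
    return position;
  }

  public synchronized void seekTo(long newPos) {
    if (newPos < 0) {
      throw new IllegalArgumentException("negative seek: " + newPos);
    }
    position = newPos;
  }

  public static void main(String[] args) {
    ConsistentSync s = new ConsistentSync();
    s.seekTo(42L);
    assert s.getPos() == 42L;
  }
}
```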


was (Author: unclegen):
Hi, [~shimingfei]

1. the 8 warnings come from original feature branch (HADOOP-12756). After 
applied patch, the 8 warnings were fixed
2. aggree with you.
3. the temporary file is created with 'deleteOnExit', so i remove the 
duplicated 'delete'

> fix several warnings from findbugs
> --
>
> Key: HADOOP-13491
> URL: https://issues.apache.org/jira/browse/HADOOP-13491
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13491-HADOOP-12756.001.patch, 
> HADOOP-13491-HADOOP-12756.002.patch
>
>
> {code:title=Bad practice Warnings|borderStyle=solid}
> Code  Warning
> RRorg.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores 
> result of java.io.InputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long)
> Called method java.io.InputStream.skip(long)
> At AliyunOSSInputStream.java:[line 235]
> RR
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() 
> ignores result of java.io.FileInputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject()
> Called method java.io.FileInputStream.skip(long)
> At AliyunOSSOutputStream.java:[line 177]
> RVExceptional return value of java.io.File.delete() ignored in 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Called method java.io.File.delete()
> At AliyunOSSOutputStream.java:[line 116]
> {code}
> {code:title=Multithreaded correctness Warnings|borderStyle=solid}
> Code  Warning
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked 
> 90% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining
> Synchronized 90% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Synchronized access at AliyunOSSInputStream.java:[line 106]
> Synchronized access at AliyunOSSInputStream.java:[line 168]
> Synchronized access at AliyunOSSInputStream.java:[line 189]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 190]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of 
> time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position
> Synchronized 66% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 232]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 235]
> Unsynchronized access at AliyunOSSInputStream.java:[line 236]
> Unsynchronized access at AliyunOSSInputStream.java:[line 245]
> Synchronized access at AliyunOSSInputStream.java:[line 222]
> Synchronized access at AliyunOSSInputStream.java:[line 105]
> Synchronized access at AliyunOSSInputStream.java:[line 167]
> Synchronized access at AliyunOSSInputStream.java:[line 169]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 114]
> Synchronized 

[jira] [Updated] (HADOOP-13491) fix several warnings from findbugs

2016-08-15 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

uncleGen updated HADOOP-13491:
--
Attachment: HADOOP-13491-HADOOP-12756.001.patch

> fix several warnings from findbugs
> --
>
> Key: HADOOP-13491
> URL: https://issues.apache.org/jira/browse/HADOOP-13491
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13491-HADOOP-12756.001.patch
>
>
> {code:title=Bad practice Warnings|borderStyle=solid}
> Code  Warning
> RRorg.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores 
> result of java.io.InputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long)
> Called method java.io.InputStream.skip(long)
> At AliyunOSSInputStream.java:[line 235]
> RR
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() 
> ignores result of java.io.FileInputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject()
> Called method java.io.FileInputStream.skip(long)
> At AliyunOSSOutputStream.java:[line 177]
> RVExceptional return value of java.io.File.delete() ignored in 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Called method java.io.File.delete()
> At AliyunOSSOutputStream.java:[line 116]
> {code}
> {code:title=Multithreaded correctness Warnings|borderStyle=solid}
> Code  Warning
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked 
> 90% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining
> Synchronized 90% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Synchronized access at AliyunOSSInputStream.java:[line 106]
> Synchronized access at AliyunOSSInputStream.java:[line 168]
> Synchronized access at AliyunOSSInputStream.java:[line 189]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 190]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of 
> time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position
> Synchronized 66% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 232]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 235]
> Unsynchronized access at AliyunOSSInputStream.java:[line 236]
> Unsynchronized access at AliyunOSSInputStream.java:[line 245]
> Synchronized access at AliyunOSSInputStream.java:[line 222]
> Synchronized access at AliyunOSSInputStream.java:[line 105]
> Synchronized access at AliyunOSSInputStream.java:[line 167]
> Synchronized access at AliyunOSSInputStream.java:[line 169]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 114]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 259]
> Synchronized access at AliyunOSSInputStream.java:[line 266]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream; locked 
> 85% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream
> Synchronized 85% of the time
> Unsynchronized access at 

[jira] [Commented] (HADOOP-13491) fix several warnings from findbugs

2016-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421143#comment-15421143
 ] 

Hadoop QA commented on HADOOP-13491:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
11s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
40s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-tools/hadoop-aliyun in HADOOP-12756 has 8 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m  8s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 
1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-tools/hadoop-aliyun generated 0 new + 0 
unchanged - 8 fixed = 0 total (was 8) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-tools_hadoop-aliyun generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
10s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823706/HADOOP-13491-HADOOP-12756.001.patch
 |
| JIRA Issue | HADOOP-13491 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b38fd0476456 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12756 / 8346f922 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10248/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aliyun-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10248/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10248/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-aliyun.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10248/testReport/ |
| 

[jira] [Commented] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections

2016-08-15 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421199#comment-15421199
 ] 

Wei-Chiu Chuang commented on HADOOP-12765:
--

Hello [~mshen] thanks for updating the patch! Overall looks good to me. I 
noticed the new method you added {{createHttpsChannelConnector}} has some 
duplication with {{createDefaultChannelConnector}}. Can you please de-duplicate 
the code if feasible?

> HttpServer2 should switch to using the non-blocking SslSelectChannelConnector 
> to prevent performance degradation when handling SSL connections
> --
>
> Key: HADOOP-12765
> URL: https://issues.apache.org/jira/browse/HADOOP-12765
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2, 2.6.3
>Reporter: Min Shen
>Assignee: Min Shen
> Attachments: HADOOP-12765.001.patch, HADOOP-12765.001.patch, 
> HADOOP-12765.002.patch, blocking_1.png, blocking_2.png, unblocking.png
>
>
> The current implementation uses the blocking SslSocketConnector which takes 
> the default maxIdleTime as 200 seconds. We noticed in our cluster that when 
> users use a custom client that accesses the WebHDFS REST APIs through https, 
> it could block all the 250 handler threads in NN jetty server, causing severe 
> performance degradation for accessing WebHDFS and NN web UI. Attached 
> screenshots (blocking_1.png and blocking_2.png) illustrate that when using 
> SslSocketConnector, the jetty handler threads are not released until the 200 
> seconds maxIdleTime has passed. With a sufficient number of SSL connections, 
> this issue could render the NN HttpServer entirely unresponsive.
> We propose to use the non-blocking SslSelectChannelConnector as a fix. We 
> have deployed the attached patch within our cluster, and have seen 
> significant improvement. The attached screenshot (unblocking.png) further 
> illustrates the behavior of NN jetty server after switching to using 
> SslSelectChannelConnector.
> The patch further disables SSLv3 protocol on server side to preserve the 
> spirit of HADOOP-11260.
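The patch itself swaps Jetty's blocking SslSocketConnector for the non-blocking SslSelectChannelConnector. Independent of the connector choice, the SSLv3 exclusion mentioned at the end can be sketched with plain JSSE; the class and method names below are standard javax.net.ssl, not the patch's Jetty code, and the helper name is illustrative:

```java
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class ProtocolFilter {
  /**
   * Remove SSLv3 from an engine's enabled protocols (the spirit of
   * HADOOP-11260). A secure connector subclass would apply this to each
   * SSLEngine it creates; here it is shown on a bare JSSE engine.
   */
  public static SSLEngine stripSslv3(SSLEngine engine) {
    List<String> enabled = new ArrayList<>(
        Arrays.asList(engine.getEnabledProtocols()));
    enabled.remove("SSLv3");
    engine.setEnabledProtocols(enabled.toArray(new String[0]));
    return engine;
  }

  public static void main(String[] args) throws NoSuchAlgorithmException {
    SSLEngine engine = SSLContext.getDefault().createSSLEngine();
    stripSslv3(engine);
    assert !Arrays.asList(engine.getEnabledProtocols()).contains("SSLv3");
  }
}
```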






[jira] [Updated] (HADOOP-13494) ReconfigurableBase can log sensitive information

2016-08-15 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13494:
---
Attachment: HADOOP-13494.002.patch

Addressing findbugs and checkstyle concerns.

> ReconfigurableBase can log sensitive information
> 
>
> Key: HADOOP-13494
> URL: https://issues.apache.org/jira/browse/HADOOP-13494
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13494.001.patch, HADOOP-13494.002.patch
>
>
> ReconfigurableBase will log old and new configuration values, which may cause 
> sensitive parameters (most notably cloud storage keys, though there may be 
> other instances) to get included in the logs. 
> Given the currently small list of reconfigurable properties, an argument 
> could be made for simply not logging the property values at all, but this is 
> not the only instance where potentially sensitive configuration gets written 
> somewhere else in plaintext. I think a generic mechanism for redacting 
> sensitive information for textual display will be useful to some of the web 
> UIs too.
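The redaction mechanism described above could look something like the following sketch (the class name, key patterns, and placeholder string are assumptions for illustration, not the patch's actual implementation):

```java
import java.util.regex.Pattern;

public class ConfigRedactor {
  // Illustrative pattern; a real implementation would make the list of
  // sensitive key names configurable rather than hard-coded.
  private static final Pattern SENSITIVE =
      Pattern.compile(".*(password|secret|fs\\.s3a\\..*\\.key).*",
                      Pattern.CASE_INSENSITIVE);

  /** Return a masked value when the property name looks sensitive. */
  public static String redact(String key, String value) {
    return SENSITIVE.matcher(key).matches() ? "<redacted>" : value;
  }

  public static void main(String[] args) {
    // Cloud storage keys are masked; ordinary properties pass through.
    assert redact("fs.s3a.secret.key", "AKIA-example").equals("<redacted>");
    assert redact("dfs.replication", "3").equals("3");
  }
}
```

The same helper could then be called from both the reconfiguration logging path and any web UI that displays configuration.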






[jira] [Commented] (HADOOP-13494) ReconfigurableBase can log sensitive information

2016-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421416#comment-15421416
 ] 

Hadoop QA commented on HADOOP-13494:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 4 unchanged - 1 fixed = 7 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
15s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823716/HADOOP-13494.002.patch
 |
| JIRA Issue | HADOOP-13494 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux adaec842cab8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9f29f42 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10249/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10249/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10249/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ReconfigurableBase can log sensitive information
> 
>
> Key: HADOOP-13494
> URL: https://issues.apache.org/jira/browse/HADOOP-13494
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> 

[jira] [Work started] (HADOOP-13450) S3Guard: Implement access policy providing strong consistency with S3 as source of truth.

2016-08-15 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13450 started by Lei (Eddy) Xu.
--
> S3Guard: Implement access policy providing strong consistency with S3 as 
> source of truth.
> -
>
> Key: HADOOP-13450
> URL: https://issues.apache.org/jira/browse/HADOOP-13450
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Lei (Eddy) Xu
>
> Implement an S3A access policy that provides strong consistency by 
> cross-checking with the consistent metadata store, but still using S3 as the 
> source of truth.  This access policy will be well suited to users who 
> want an improved consistency guarantee but also want the freedom to load data 
> into the bucket using external tools that don't integrate with the metadata 
> store.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13488) Have TryOnceThenFail implement ConnectionRetryPolicy

2016-08-15 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421430#comment-15421430
 ] 

Xiaobing Zhou commented on HADOOP-13488:


I posted patch v002, which adds {code}if (isEqual(thisPolicy, thatPolicy)){code} 
to ConnectionId#reuseConnection to maintain compatibility, and fixes some 
tests.
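The compatibility check described above can be sketched in miniature as follows. This is an illustrative stand-in, assuming a toy RetryPolicy with value-based equality; the class, field, and method names are hypothetical, not Hadoop's actual ConnectionId or RetryPolicy code:

```java
// Toy model of the connection-reuse compatibility check: two callers may
// share a cached RPC connection only when their retry policies are equal.
public class ConnectionReuseSketch {

    // Stand-in for an RPC retry policy (real Hadoop uses RetryPolicy
    // implementations such as TryOnceThenFail).
    static final class RetryPolicy {
        final int maxRetries;
        RetryPolicy(int maxRetries) { this.maxRetries = maxRetries; }

        @Override
        public boolean equals(Object o) {
            return o instanceof RetryPolicy
                && ((RetryPolicy) o).maxRetries == maxRetries;
        }

        @Override
        public int hashCode() { return maxRetries; }
    }

    // Null-safe equality, mirroring the isEqual(thisPolicy, thatPolicy)
    // helper mentioned in the comment above.
    static boolean isEqual(RetryPolicy a, RetryPolicy b) {
        return a == null ? b == null : a.equals(b);
    }

    // A connection is reusable only when the policies agree; otherwise a
    // cached connection could silently retry under the wrong policy.
    static boolean reuseConnection(RetryPolicy thisPolicy, RetryPolicy thatPolicy) {
        return isEqual(thisPolicy, thatPolicy);
    }

    public static void main(String[] args) {
        System.out.println(reuseConnection(new RetryPolicy(1), new RetryPolicy(1))); // prints "true"
        System.out.println(reuseConnection(new RetryPolicy(1), new RetryPolicy(3))); // prints "false"
    }
}
```

The point of the null-safe helper is that a connection id carrying no explicit policy (null) still compares equal to another null-policy id, so pre-patch reuse behavior is preserved.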

> Have TryOnceThenFail implement ConnectionRetryPolicy
> 
>
> Key: HADOOP-13488
> URL: https://issues.apache.org/jira/browse/HADOOP-13488
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13488.000.patch, HADOOP-13488.001.patch
>
>
> As the most commonly used default or fallback policy, TryOnceThenFail is 
> often used at both the RetryInvocationHandler and connection levels. As proposed in 
> HADOOP-13436, it should implement ConnectionRetryPolicy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HADOOP-13450) S3Guard: Implement access policy providing strong consistency with S3 as source of truth.

2016-08-15 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13450 stopped by Lei (Eddy) Xu.
--
> S3Guard: Implement access policy providing strong consistency with S3 as 
> source of truth.
> -
>
> Key: HADOOP-13450
> URL: https://issues.apache.org/jira/browse/HADOOP-13450
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Lei (Eddy) Xu
>
> Implement an S3A access policy that provides strong consistency by 
> cross-checking with the consistent metadata store, but still using S3 as the 
> source of truth.  This access policy will be well suited to users who 
> want an improved consistency guarantee but also want the freedom to load data 
> into the bucket using external tools that don't integrate with the metadata 
> store.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13437) KMS should reload whitelist and default key ACLs when hot-reloading

2016-08-15 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421342#comment-15421342
 ] 

Arun Suresh commented on HADOOP-13437:
--

bq. ..Is this an expected behavior (so we need to keep compatible behavior), or 
is this a bug (so we can fix it here)? Thanks in advance...
IIRC, this is actually expected behavior. This way, the default and whitelist 
ACLs are specified only once at startup, based on some deployment policy, while 
KeyACLs for individual users/groups and keys can be added / removed as users / 
keys are created.

bq. After the replacement (suppose there was no backup), how could the admin 
figure out what exactly the whitelist/defaults are?
I feel this is outside the scope of what KMS should worry about (or we would 
have to build config management features, such as rollback, into KMS). The 
deployment environment / admin should ensure backups of the files are 
maintained.

> KMS should reload whitelist and default key ACLs when hot-reloading
> ---
>
> Key: HADOOP-13437
> URL: https://issues.apache.org/jira/browse/HADOOP-13437
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13437.01.patch, HADOOP-13437.02.patch, 
> HADOOP-13437.03.patch, HADOOP-13437.04.patch
>
>
> When hot-reloading, {{KMSACLs#setKeyACLs}} ignores whitelist and default key 
> entries if they're present in memory.
> We should reload them, hot-reload and cold-start should not have any 
> difference in behavior.
> Credit to [~dilaver] for finding this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections

2016-08-15 Thread Min Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Min Shen updated HADOOP-12765:
--
Attachment: HADOOP-12765.003.patch

Attaching revised patch to address [~jojochuang]'s comment.

> HttpServer2 should switch to using the non-blocking SslSelectChannelConnector 
> to prevent performance degradation when handling SSL connections
> --
>
> Key: HADOOP-12765
> URL: https://issues.apache.org/jira/browse/HADOOP-12765
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2, 2.6.3
>Reporter: Min Shen
>Assignee: Min Shen
> Attachments: HADOOP-12765.001.patch, HADOOP-12765.001.patch, 
> HADOOP-12765.002.patch, HADOOP-12765.003.patch, blocking_1.png, 
> blocking_2.png, unblocking.png
>
>
> The current implementation uses the blocking SslSocketConnector which takes 
> the default maxIdleTime as 200 seconds. We noticed in our cluster that when 
> users use a custom client that accesses the WebHDFS REST APIs through https, 
> it could block all the 250 handler threads in NN jetty server, causing severe 
> performance degradation for accessing WebHDFS and NN web UI. Attached 
> screenshots (blocking_1.png and blocking_2.png) illustrate that when using 
> SslSocketConnector, the jetty handler threads are not released until the 200 
> seconds maxIdleTime has passed. With a sufficient number of SSL connections, 
> this issue could render the NN HttpServer entirely unresponsive.
> We propose to use the non-blocking SslSelectChannelConnector as a fix. We 
> have deployed the attached patch within our cluster, and have seen 
> significant improvement. The attached screenshot (unblocking.png) further 
> illustrates the behavior of NN jetty server after switching to using 
> SslSelectChannelConnector.
> The patch further disables SSLv3 protocol on server side to preserve the 
> spirit of HADOOP-11260.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13488) Have TryOnceThenFail implement ConnectionRetryPolicy

2016-08-15 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13488:
---
Attachment: HADOOP-13488.001.patch

> Have TryOnceThenFail implement ConnectionRetryPolicy
> 
>
> Key: HADOOP-13488
> URL: https://issues.apache.org/jira/browse/HADOOP-13488
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13488.000.patch, HADOOP-13488.001.patch
>
>
> As the most commonly used default or fallback policy, TryOnceThenFail is 
> often used at both the RetryInvocationHandler and connection levels. As proposed in 
> HADOOP-13436, it should implement ConnectionRetryPolicy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13488) Have TryOnceThenFail implement ConnectionRetryPolicy

2016-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421498#comment-15421498
 ] 

Hadoop QA commented on HADOOP-13488:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 163 unchanged - 4 fixed = 164 total (was 167) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  4s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.ssl.TestReloadingX509TrustManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823729/HADOOP-13488.001.patch
 |
| JIRA Issue | HADOOP-13488 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 803aa9ba9f7b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 83e57e0 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10251/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10251/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10251/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10251/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10251/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections

2016-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421513#comment-15421513
 ] 

Hadoop QA commented on HADOOP-12765:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} root: The patch generated 0 new + 81 unchanged - 1 
fixed = 81 total (was 82) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
13s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
26s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823721/HADOOP-12765.003.patch
 |
| JIRA Issue | HADOOP-12765 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 2f491993de3c 

[jira] [Commented] (HADOOP-13487) Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper

2016-08-15 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422187#comment-15422187
 ] 

Xiao Chen commented on HADOOP-13487:


Thanks Alex for the details!

I confirm this is a bug in {{ZKDelegationTokenSecretManager}}: the root cause 
is that when KMS restarts, it does not actively load existing znodes into its 
cache. Hence if a token is never accessed again (e.g. after a cluster-wide 
restart), its znode is never managed by KMS and is eventually leaked.

Obviously we can't have any cleanup logic depend on a KMS shutdown, since when 
zookeeper is used we're supposed to run multiple KMS instances.

I can think of several options on fixing this:
# Always load up existing znodes to cache. This would be straightforward but 
may harm startup time.
# Have another background thread to periodically check znodes and remove 
expired ones.
# Have another process do #2, so that we don't waste resources by having 
multiple KMS instances do the same cleanup work.

I'm thinking of a modified #1. Specifically, on KMS restart, fire up a 
background thread to fetch the znodes and iterate through them to remove the 
expired tokens. We can give this background task a random delay after startup 
to prevent multiple KMS instances from racing on the same cleanup work.
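A rough sketch of that modified #1: a plain in-memory map stands in for the token znodes, and all names here are hypothetical rather than the real ZKDelegationTokenSecretManager API. The sweep runs once on a background thread after a random startup delay:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

public class TokenCleanupSketch {

    // Pure helper: which tokens are past their expiry at time 'now'?
    // A Map<tokenId, expiryMillis> stands in for the znodes under ZK.
    static List<String> expiredTokens(Map<String, Long> tokenExpiry, long now) {
        List<String> expired = new ArrayList<>();
        for (Map.Entry<String, Long> e : tokenExpiry.entrySet()) {
            if (e.getValue() <= now) {
                expired.add(e.getKey());
            }
        }
        return expired;
    }

    public static void main(String[] args) {
        Map<String, Long> tokens = new TreeMap<>();
        tokens.put("token-1", 100L);   // already expired
        tokens.put("token-2", 9_999L); // still valid
        System.out.println(expiredTokens(tokens, 1_000L)); // prints "[token-1]"

        // One-shot background sweep with a random initial delay, so that
        // several KMS instances started together are less likely to race
        // on the same cleanup work.
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
        long delaySec = ThreadLocalRandom.current().nextLong(1, 5);
        scheduler.schedule(() -> {
            for (String id : expiredTokens(tokens, System.currentTimeMillis())) {
                tokens.remove(id); // real code would delete the znode here
            }
        }, delaySec, TimeUnit.SECONDS);
        scheduler.shutdown();
    }
}
```

Whether a random delay is enough, or whether instances should instead coordinate through a ZK lock, is a design choice left open in the discussion above.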

> Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper
> -
>
> Key: HADOOP-13487
> URL: https://issues.apache.org/jira/browse/HADOOP-13487
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Alex Ivanov
>
> Configuration:
> CDH 5.5.1 (Hadoop 2.6+)
> KMS configured to store delegation tokens in Zookeeper
> DEBUG logging enabled in /etc/hadoop-kms/conf/kms-log4j.properties
> Findings:
> It seems to me delegation tokens never get cleaned up from Zookeeper past 
> their renewal date. I can see in the logs that the removal thread is started 
> with the expected interval:
> {code}
> 2016-08-11 08:15:24,511 INFO  AbstractDelegationTokenSecretManager - Starting 
> expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
> {code}
> However, I don't see any delegation token removals, indicated by the 
> following log message:
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager 
> --> removeStoredToken(TokenIdent ident), line 769 [CDH]
> {code}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Removing ZKDTSMDelegationToken_"
>   + ident.getSequenceNumber());
> }
> {code}
> Meanwhile, I see a lot of expired delegation tokens in Zookeeper that don't 
> get cleaned up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13419) Fix javadoc warnings by JDK8 in hadoop-common package

2016-08-15 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422185#comment-15422185
 ] 

Kai Sasaki commented on HADOOP-13419:
-

[~iwasakims] I see. I'll create another patch for branch-2. Thanks for checking!

> Fix javadoc warnings by JDK8 in hadoop-common package
> -
>
> Key: HADOOP-13419
> URL: https://issues.apache.org/jira/browse/HADOOP-13419
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: HADOOP-13419.01.patch, HADOOP-13419.02.patch, 
> HADOOP-13419.03.patch
>
>
> Fix compile warnings generated after migrating to JDK8.
> This is a subtask of HADOOP-13369.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13494) ReconfigurableBase can log sensitive information

2016-08-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13494:
-
Component/s: security

> ReconfigurableBase can log sensitive information
> 
>
> Key: HADOOP-13494
> URL: https://issues.apache.org/jira/browse/HADOOP-13494
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13494.001.patch, HADOOP-13494.002.patch
>
>
> ReconfigurableBase will log old and new configuration values, which may cause 
> sensitive parameters (most notably cloud storage keys, though there may be 
> other instances) to get included in the logs. 
> Given the currently small list of reconfigurable properties, an argument 
> could be made for simply not logging the property values at all, but this is 
> not the only instance where potentially sensitive configuration gets written 
> somewhere else in plaintext. I think a generic mechanism for redacting 
> sensitive information for textual display will be useful to some of the web 
> UIs too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13494) ReconfigurableBase can log sensitive information

2016-08-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13494:
-
Target Version/s: 2.6.5, 2.7.4

> ReconfigurableBase can log sensitive information
> 
>
> Key: HADOOP-13494
> URL: https://issues.apache.org/jira/browse/HADOOP-13494
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13494.001.patch, HADOOP-13494.002.patch
>
>
> ReconfigurableBase will log old and new configuration values, which may cause 
> sensitive parameters (most notably cloud storage keys, though there may be 
> other instances) to get included in the logs. 
> Given the currently small list of reconfigurable properties, an argument 
> could be made for simply not logging the property values at all, but this is 
> not the only instance where potentially sensitive configuration gets written 
> somewhere else in plaintext. I think a generic mechanism for redacting 
> sensitive information for textual display will be useful to some of the web 
> UIs too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13494) ReconfigurableBase can log sensitive information

2016-08-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13494:
-
Affects Version/s: 2.2.0

> ReconfigurableBase can log sensitive information
> 
>
> Key: HADOOP-13494
> URL: https://issues.apache.org/jira/browse/HADOOP-13494
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13494.001.patch, HADOOP-13494.002.patch
>
>
> ReconfigurableBase will log old and new configuration values, which may cause 
> sensitive parameters (most notably cloud storage keys, though there may be 
> other instances) to get included in the logs. 
> Given the currently small list of reconfigurable properties, an argument 
> could be made for simply not logging the property values at all, but this is 
> not the only instance where potentially sensitive configuration gets written 
> somewhere else in plaintext. I think a generic mechanism for redacting 
> sensitive information for textual display will be useful to some of the web 
> UIs too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13437) KMS should reload whitelist and default key ACLs when hot-reloading

2016-08-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421693#comment-15421693
 ] 

Andrew Wang commented on HADOOP-13437:
--

Hey [~asuresh], could you go into a little more detail as to why this behavior 
is desirable? I agree with Xiao on it making things more complicated regarding 
the whitelist/defaults and ACL deployment generally.

FWIW the KMS docs just say "This file is hot-reloaded when it changes" without 
mention of it applying to some types of ACLs and not others.

One patch review comment, could we change the "Should not configure..." log 
warn to say something like "Invalid {} ACL for KEY_OP {}, ignoring"? I think 
that's more clear. Otherwise the intent looks good to me.

> KMS should reload whitelist and default key ACLs when hot-reloading
> ---
>
> Key: HADOOP-13437
> URL: https://issues.apache.org/jira/browse/HADOOP-13437
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13437.01.patch, HADOOP-13437.02.patch, 
> HADOOP-13437.03.patch, HADOOP-13437.04.patch
>
>
> When hot-reloading, {{KMSACLs#setKeyACLs}} ignores whitelist and default key 
> entries if they're present in memory.
> We should reload them, hot-reload and cold-start should not have any 
> difference in behavior.
> Credit to [~dilaver] for finding this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13333) testConf.xml ls comparators in wrong order

2016-08-15 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated HADOOP-13333:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> testConf.xml ls comparators in wrong order
> --
>
> Key: HADOOP-13333
> URL: https://issues.apache.org/jira/browse/HADOOP-13333
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: Vrushali C
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-13333.01.patch
>
>
> HADOOP-13079 updated file 
> {{hadoop-common-project/hadoop-common/src/test/resources/testConf.xml}} 
> incorrectly by inserting a new comparator between 2 comparators for {{option 
> -h}}:
> {code:xml}
> <comparator>
>   <type>RegexpComparator</type>
>   <expected-output>^\s*-h\s+Formats the sizes of files in a 
> human-readable fashion( )*</expected-output>
> </comparator>
> <comparator>
>   <type>RegexpComparator</type>
>   <expected-output>^\s*-q\s+Print \? instead of non-printable 
> characters\.( )*</expected-output>
> </comparator>
> <comparator>
>   <type>RegexpComparator</type>
>   <expected-output>^\s*rather than a number of bytes\.( )*</expected-output>
> </comparator>
> {code}






[jira] [Commented] (HADOOP-13447) S3Guard: Refactor S3AFileSystem to support introduction of separate metadata repository and tests.

2016-08-15 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421618#comment-15421618
 ] 

Chris Nauroth commented on HADOOP-13447:


[~ste...@apache.org], this is another one that I'd like us to consider taking 
straight to trunk and branch-2, similar to HADOOP-13446.  I think HADOOP-13252 
should be committed first though, so that the refactoring work here doesn't 
stomp on your credential provider chain work.

> S3Guard: Refactor S3AFileSystem to support introduction of separate metadata 
> repository and tests.
> --
>
> Key: HADOOP-13447
> URL: https://issues.apache.org/jira/browse/HADOOP-13447
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13447-HADOOP-13446.001.patch, 
> HADOOP-13447-HADOOP-13446.002.patch
>
>
> The scope of this issue is to refactor the existing {{S3AFileSystem}} into 
> multiple coordinating classes.  The goal of this refactoring is to separate 
> the {{FileSystem}} API binding from the AWS SDK integration, make code 
> maintenance easier while we're making changes for S3Guard, and make it easier 
> to mock some implementation details so that tests can simulate eventual 
> consistency behavior in a deterministic way.






[jira] [Commented] (HADOOP-13333) testConf.xml ls comparators in wrong order

2016-08-15 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421729#comment-15421729
 ] 

Varun Saxena commented on HADOOP-13333:
---

Committed to trunk, branch-2 and branch-2.8
Thanks Vrushali for your contribution and John Zhuge for the reviews.

> testConf.xml ls comparators in wrong order
> --
>
> Key: HADOOP-13333
> URL: https://issues.apache.org/jira/browse/HADOOP-13333
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: Vrushali C
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-13333.01.patch
>
>
> HADOOP-13079 updated file 
> {{hadoop-common-project/hadoop-common/src/test/resources/testConf.xml}} 
> incorrectly by inserting a new comparator between 2 comparators for {{option 
> -h}}:
> {code:xml}
> <comparator>
>   <type>RegexpComparator</type>
>   <expected-output>^\s*-h\s+Formats the sizes of files in a 
> human-readable fashion( )*</expected-output>
> </comparator>
> <comparator>
>   <type>RegexpComparator</type>
>   <expected-output>^\s*-q\s+Print \? instead of non-printable 
> characters\.( )*</expected-output>
> </comparator>
> <comparator>
>   <type>RegexpComparator</type>
>   <expected-output>^\s*rather than a number of bytes\.( )*</expected-output>
> </comparator>
> {code}






[jira] [Commented] (HADOOP-13437) KMS should reload whitelist and default key ACLs when hot-reloading

2016-08-15 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421918#comment-15421918
 ] 

Xiao Chen commented on HADOOP-13437:


Thanks for the review and discussions, Arun and Andrew.

Personally I would prefer #2, since I think this would minimize user surprise 
and make things easier to maintain, both for developers and admins. This way, 
we can treat this as a bug fix and put it into 2.x. #1 would be an incompatible 
change, so we couldn't do anything until 3.0. My 2 cents.

> KMS should reload whitelist and default key ACLs when hot-reloading
> ---
>
> Key: HADOOP-13437
> URL: https://issues.apache.org/jira/browse/HADOOP-13437
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13437.01.patch, HADOOP-13437.02.patch, 
> HADOOP-13437.03.patch, HADOOP-13437.04.patch, HADOOP-13437.05.patch
>
>
> When hot-reloading, {{KMSACLs#setKeyACLs}} ignores whitelist and default key 
> entries if they're present in memory.
> We should reload them; hot-reload and cold-start should not have any 
> difference in behavior.
> Credit to [~dilaver] for finding this.






[jira] [Commented] (HADOOP-13494) ReconfigurableBase can log sensitive information

2016-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421923#comment-15421923
 ] 

Hadoop QA commented on HADOOP-13494:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 115 unchanged - 1 fixed = 116 total (was 116) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
40s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823781/HADOOP-13494.003.patch
 |
| JIRA Issue | HADOOP-13494 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 16989ba04c2c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 864f878 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10253/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10253/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10253/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ReconfigurableBase can log sensitive information
> 
>
> Key: HADOOP-13494
> URL: 

[jira] [Commented] (HADOOP-13208) S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the pseudo-tree of directories

2016-08-15 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421944#comment-15421944
 ] 

Chris Nauroth commented on HADOOP-13208:


[~ste...@apache.org], thank you for the patch.  I'm still working through some 
of the logic in detail, but this looks great overall.  I have a few questions 
and comments so far.

The control flow across the various iterators is a bit hard to follow.  I think 
this is an unavoidable consequence of the optimized logic, but I'd like to 
suggest some things that might make it easier to follow.

# I notice that all call sites first construct an {{ObjectListingIterator}} and 
then immediately pass it into construction of a {{FileStatusListingIterator}}.  
Since the two classes cannot be used independently in a meaningful way, I 
wonder if it would be better to combine all of the logic into a single class.  
If not combined, then perhaps it would be helpful to have a 
{{createFileStatusListingIterator}} helper method to encapsulate the 2-step 
construction, and all call sites could call that.
# Could the first listing call be pushed into the {{ObjectListingIterator}} 
constructor?  That would have the benefit of the iterator fully encapsulating 
all of the calls so that each call site doesn't need to bootstrap it with the 
first listing call.  If the constructor also accepted a {{recursive}} flag, 
then it could take care of deciding whether or not to pass a "/" delimiter: 
another detail that call sites wouldn't need to worry about getting right.
# It appears that any reference to a {{FileStatusGenerator}} is always going to 
point to an instance of a single implementation: {{GenerateS3AFileStatus}}.  Is 
there a reason for the extra indirection, or can the logic of 
{{GenerateS3AFileStatus}} be inlined to the call sites?  (Maybe the indirection 
sets up something helpful in a subsequent patch that I haven't reviewed yet?)
# I could see moving the new inner classes to top-level package-private 
classes, assuming that wouldn't force relaxing visibility on {{S3AFileSystem}} 
internals too much.
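
To illustrate points 1 and 2 in a generic, self-contained form (hypothetical 
names only, not the actual {{S3AFileSystem}} classes):

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Sketch of the suggested pattern: one factory method hides the two-step
// construction (a page-producing iterator wrapped by a flattening iterator),
// and the constructor performs the first "listing" fetch itself, so call
// sites never have to bootstrap the wrapped iterator manually.
final class FlatteningIterator<T> implements Iterator<T> {
  private final Iterator<List<T>> pages; // plays the ObjectListingIterator role
  private Iterator<T> current;           // plays the FileStatusListingIterator role

  private FlatteningIterator(Iterator<List<T>> pages) {
    this.pages = pages;
    // The "first listing call" happens here, inside the constructor.
    this.current = pages.hasNext()
        ? pages.next().iterator()
        : Collections.<T>emptyIterator();
  }

  // The create...() helper encapsulating the 2-step construction.
  static <T> FlatteningIterator<T> create(Iterator<List<T>> pages) {
    return new FlatteningIterator<>(pages);
  }

  @Override
  public boolean hasNext() {
    // Skip empty pages until an element or the end of all pages is reached.
    while (!current.hasNext() && pages.hasNext()) {
      current = pages.next().iterator();
    }
    return current.hasNext();
  }

  @Override
  public T next() {
    if (!hasNext()) {
      throw new NoSuchElementException();
    }
    return current.next();
  }
}
```

Call sites would then only ever see {{FlatteningIterator.create(...)}}, keeping 
the coordination between the two iterators an implementation detail.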

{code}
   * This implementation is optimized for S3, which can do a bulk listing
   * off all entries under a path in one single operation. Thus there is
   * no need to recursively walk the directory tree.
{code}

This comment on {{listLocatedStatus}} confused me, because this method does not 
recursively traverse the sub-tree.  As I understand it, this patch does not 
reduce the number of S3 calls during a {{listLocatedStatus}} operation, 
assuming that the caller fully exhausts the returned iterator.  However, it's 
still a good change, because if the caller breaks out of the iteration early, 
then they don't pay the full cost of an eager {{listStatus}} fetch (multiple S3 
calls) that the base class {{FileSystem#listLocatedStatus}} would do.

Do I understand correctly, or did I miss something recursive about this 
operation?

A few nitpicks:

{code}
   * @return the fully qualified path including URI schema and bucket name.
{code}

Should be "URI scheme"?

{code}
   * Make this protected method public so that {@link S3AGlobber can access it}.
...
   * Override superclass to use the new {@code S3AGlobber}.
{code}

Since the globber stuff is going to be done in a different patch, I suggest 
omitting these changes from this patch.

{code}
   * This essentially a nested and wrapped set of iterators, with some
{code}

Should be "is essentially"?

{code}
builder.append("size=").append(summary.getSize()).append(" ");
return builder.toString();
{code}

The last {{append}} of a trailing space looks unnecessary.

{code}
  result = new ArrayList<>(1);
{code}

I agree with Aaron's earlier comment about returning an array directly here and 
only allocating an {{ArrayList}} on the is-directory code path.



> S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the 
> pseudo-tree of directories
> 
>
> Key: HADOOP-13208
> URL: https://issues.apache.org/jira/browse/HADOOP-13208
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13208-branch-2-001.patch, 
> HADOOP-13208-branch-2-007.patch, HADOOP-13208-branch-2-008.patch, 
> HADOOP-13208-branch-2-009.patch, HADOOP-13208-branch-2-010.patch, 
> HADOOP-13208-branch-2-011.patch, HADOOP-13208-branch-2-012.patch, 
> HADOOP-13208-branch-2-017.patch, HADOOP-13208-branch-2-018.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> A major cost in split calculation against object stores turns out be listing 
> the directory tree itself. That's because against S3, it takes 

[jira] [Commented] (HADOOP-13437) KMS should reload whitelist and default key ACLs when hot-reloading

2016-08-15 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421885#comment-15421885
 ] 

Arun Suresh commented on HADOOP-13437:
--

So, the original intent was based on the assumption that defaults and 
whitelists should not be changed once KMS is started. I agree it should have 
been documented. (On startup, all KeyOpType defaults and whitelists would be 
specified, and no new defaults and whitelists would be added, which is the 
reason for the {{containsKey}}.)
The {{kms-acls.xml}} hot reloading was meant to be used in the event keys and 
associated users/groups were added/removed, not for modifying the defaults and 
whitelists.
I agree hot reloading of these would make things more flexible though. I guess 
we should either:
# move defaults and whitelists to {{kms-site.xml}} and thereby ensure these are 
unambiguously NOT hot reloadable.
# as per this JIRA, remove the restriction and allow everything to be hot 
reloadable.

Thoughts?
 


> KMS should reload whitelist and default key ACLs when hot-reloading
> ---
>
> Key: HADOOP-13437
> URL: https://issues.apache.org/jira/browse/HADOOP-13437
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13437.01.patch, HADOOP-13437.02.patch, 
> HADOOP-13437.03.patch, HADOOP-13437.04.patch, HADOOP-13437.05.patch
>
>
> When hot-reloading, {{KMSACLs#setKeyACLs}} ignores whitelist and default key 
> entries if they're present in memory.
> We should reload them; hot-reload and cold-start should not have any 
> difference in behavior.
> Credit to [~dilaver] for finding this.






[jira] [Commented] (HADOOP-13437) KMS should reload whitelist and default key ACLs when hot-reloading

2016-08-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421932#comment-15421932
 ] 

Andrew Wang commented on HADOOP-13437:
--

Thanks for commenting Arun. I like #2 as well, +1 for the most recent patch 
also. Overall I don't see a downside to making everything configurable, we can 
advise users about the recommended way to use the ACLs via documentation.

> KMS should reload whitelist and default key ACLs when hot-reloading
> ---
>
> Key: HADOOP-13437
> URL: https://issues.apache.org/jira/browse/HADOOP-13437
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13437.01.patch, HADOOP-13437.02.patch, 
> HADOOP-13437.03.patch, HADOOP-13437.04.patch, HADOOP-13437.05.patch
>
>
> When hot-reloading, {{KMSACLs#setKeyACLs}} ignores whitelist and default key 
> entries if they're present in memory.
> We should reload them; hot-reload and cold-start should not have any 
> difference in behavior.
> Credit to [~dilaver] for finding this.






[jira] [Commented] (HADOOP-13437) KMS should reload whitelist and default key ACLs when hot-reloading

2016-08-15 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421941#comment-15421941
 ] 

Arun Suresh commented on HADOOP-13437:
--

Agreed. We can always revisit this later if there is a strong case to keep 
this behavior.
+1 from me as well.

> KMS should reload whitelist and default key ACLs when hot-reloading
> ---
>
> Key: HADOOP-13437
> URL: https://issues.apache.org/jira/browse/HADOOP-13437
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13437.01.patch, HADOOP-13437.02.patch, 
> HADOOP-13437.03.patch, HADOOP-13437.04.patch, HADOOP-13437.05.patch
>
>
> When hot-reloading, {{KMSACLs#setKeyACLs}} ignores whitelist and default key 
> entries if they're present in memory.
> We should reload them; hot-reload and cold-start should not have any 
> difference in behavior.
> Credit to [~dilaver] for finding this.






[jira] [Commented] (HADOOP-13494) ReconfigurableBase can log sensitive information

2016-08-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421949#comment-15421949
 ] 

Andrew Wang commented on HADOOP-13494:
--

Thanks for the rev Sean, a few more comments:

* Normally you want to pass in the Configuration to the constructor, since 
making a Configuration is pretty heavy-weight (all the parsing and deprecation 
logic), and it also makes unit testing easier.
* The tricky bit I was alluding to was that when we have both {{oldConf}} and 
{{newConf}}, they can have different redaction patterns configured. I think the 
right behavior here is to respect the redaction settings in the selfsame 
config. This also relates to passing in the Configuration to the constructor.
* I think "password" might also be too broad of a pattern, since it catches 
okay keys like "hadoop.security.credstore.java-keystore-provider.password-file".
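
As a concrete sketch of what such key-based redaction could look like 
(illustrative names only; the class name, constructor argument, and pattern 
source are assumptions, not the patch's actual {{ConfigRedactor}} API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical sketch of config-driven redaction, not the actual patch code.
final class Redactor {
  private static final String REDACTED = "<redacted>";
  private final List<Pattern> sensitivePatterns = new ArrayList<>();

  // The comma-separated regex list would come from the same Configuration
  // being printed, so oldConf and newConf each honor their own settings.
  Redactor(String commaSeparatedRegexes) {
    for (String regex : commaSeparatedRegexes.split(",")) {
      sensitivePatterns.add(Pattern.compile(regex.trim()));
    }
  }

  // Redact the value if the *key* matches any sensitive pattern. Note that a
  // broad pattern like "password" would also catch benign keys such as a
  // credential-store password-file path, which is the concern raised above.
  String redact(String key, String value) {
    for (Pattern p : sensitivePatterns) {
      if (p.matcher(key).find()) {
        return REDACTED;
      }
    }
    return value;
  }
}
```

Making the pattern list itself a config key lets users cover their own 
sensitive properties, at the cost of needing one redactor per Configuration 
instance.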

> ReconfigurableBase can log sensitive information
> 
>
> Key: HADOOP-13494
> URL: https://issues.apache.org/jira/browse/HADOOP-13494
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13494.001.patch, HADOOP-13494.002.patch, 
> HADOOP-13494.003.patch
>
>
> ReconfigurableBase will log old and new configuration values, which may cause 
> sensitive parameters (most notably cloud storage keys, though there may be 
> other instances) to get included in the logs. 
> Given the currently small list of reconfigurable properties, an argument 
> could be made for simply not logging the property values at all, but this is 
> not the only instance where potentially sensitive configuration gets written 
> somewhere else in plaintext. I think a generic mechanism for redacting 
> sensitive information for textual display will be useful to some of the web 
> UIs too.






[jira] [Commented] (HADOOP-13446) S3Guard: Support running isolated unit tests separate from AWS integration tests.

2016-08-15 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421607#comment-15421607
 ] 

Chris Nauroth commented on HADOOP-13446:


[~ste...@apache.org], I just scanned everything that is Patch Available now.  
I'd suggest that HADOOP-13208 needs to get committed ahead of this one, because 
it has some significant test code changes.  Aside from that, I don't think any 
of the others must be committed first.  Everything else looks like it would be 
trivial to rebase after committing HADOOP-13446.  Is there anything else that 
you think must be committed ahead of this one?

I handled most of this patch by scripting {{git mv}} and {{sed}} calls.  I'd 
even volunteer to do rebasing of others' patches myself using the same 
scripting if that helps.

> S3Guard: Support running isolated unit tests separate from AWS integration 
> tests.
> -
>
> Key: HADOOP-13446
> URL: https://issues.apache.org/jira/browse/HADOOP-13446
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13446-HADOOP-13345.001.patch, 
> HADOOP-13446-HADOOP-13345.002.patch, HADOOP-13446-HADOOP-13345.003.patch
>
>
> Currently, the hadoop-aws module only runs Surefire if AWS credentials have 
> been configured.  This implies that all tests must run integrated with the 
> AWS back-end.  It also means that no tests run as part of ASF pre-commit.  
> This issue proposes for the hadoop-aws module to support running isolated 
> unit tests without integrating with AWS.  This will benefit S3Guard, because 
> we expect the need for isolated mock-based testing to simulate eventual 
> consistency behavior.  It also benefits hadoop-aws in general by allowing 
> pre-commit to do something more valuable.






[jira] [Commented] (HADOOP-13494) ReconfigurableBase can log sensitive information

2016-08-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421644#comment-15421644
 ] 

Andrew Wang commented on HADOOP-13494:
--

Thanks for working on this Sean, this is a pretty concerning bug; the fix 
looks good overall. A few review comments:

ConfigRedactor:
* Class javadoc's first sentence should end with a period. Also needs a 
{{<p>}} tag if you want to line-break the paragraph.
* How do you feel about making the list of regexes themselves configurable? 
Users can put whatever keys they want into their Configuration (which might 
also be sensitive), so ideally redaction also handles this case. It makes the 
logic in ReconfigurableBase a little more complicated though, since we'll need 
per-Configuration redactors.

I also did a quick grep for "password" in DFSConfigKeys and turned up a few, we 
should consider redacting those as well.

> ReconfigurableBase can log sensitive information
> 
>
> Key: HADOOP-13494
> URL: https://issues.apache.org/jira/browse/HADOOP-13494
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13494.001.patch, HADOOP-13494.002.patch
>
>
> ReconfigurableBase will log old and new configuration values, which may cause 
> sensitive parameters (most notably cloud storage keys, though there may be 
> other instances) to get included in the logs. 
> Given the currently small list of reconfigurable properties, an argument 
> could be made for simply not logging the property values at all, but this is 
> not the only instance where potentially sensitive configuration gets written 
> somewhere else in plaintext. I think a generic mechanism for redacting 
> sensitive information for textual display will be useful to some of the web 
> UIs too.






[jira] [Commented] (HADOOP-13333) testConf.xml ls comparators in wrong order

2016-08-15 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421733#comment-15421733
 ] 

Vrushali C commented on HADOOP-13333:
-

Thank you [~varun_saxena] and [~jzhuge]!

> testConf.xml ls comparators in wrong order
> --
>
> Key: HADOOP-13333
> URL: https://issues.apache.org/jira/browse/HADOOP-13333
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: Vrushali C
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-13333.01.patch
>
>
> HADOOP-13079 updated file 
> {{hadoop-common-project/hadoop-common/src/test/resources/testConf.xml}} 
> incorrectly by inserting a new comparator between 2 comparators for {{option 
> -h}}:
> {code:xml}
> <comparator>
>   <type>RegexpComparator</type>
>   <expected-output>^\s*-h\s+Formats the sizes of files in a 
> human-readable fashion( )*</expected-output>
> </comparator>
> <comparator>
>   <type>RegexpComparator</type>
>   <expected-output>^\s*-q\s+Print \? instead of non-printable 
> characters\.( )*</expected-output>
> </comparator>
> <comparator>
>   <type>RegexpComparator</type>
>   <expected-output>^\s*rather than a number of bytes\.( )*</expected-output>
> </comparator>
> {code}






[jira] [Updated] (HADOOP-13437) KMS should reload whitelist and default key ACLs when hot-reloading

2016-08-15 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13437:
---
Attachment: HADOOP-13437.05.patch

Attaching patch 5 to update the log message, and added lines in the test to 
verify {{ALL}} is not parsed for whitelist/default. (Log message says invalid 
key_op for xxx_acl, since I think that's more accurate.)

> KMS should reload whitelist and default key ACLs when hot-reloading
> ---
>
> Key: HADOOP-13437
> URL: https://issues.apache.org/jira/browse/HADOOP-13437
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13437.01.patch, HADOOP-13437.02.patch, 
> HADOOP-13437.03.patch, HADOOP-13437.04.patch, HADOOP-13437.05.patch
>
>
> When hot-reloading, {{KMSACLs#setKeyACLs}} ignores whitelist and default key 
> entries if they're present in memory.
> We should reload them; hot-reload and cold-start should not have any 
> difference in behavior.
> Credit to [~dilaver] for finding this.






[jira] [Commented] (HADOOP-13333) testConf.xml ls comparators in wrong order

2016-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421756#comment-15421756
 ] 

Hudson commented on HADOOP-13333:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10275 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10275/])
HADOOP-13333. testConf.xml ls comparators in wrong order (Vrushali C via 
(varunsaxena: rev d714030b5d4124f307c09d716d72a9f5a4a25995)
* (edit) hadoop-common-project/hadoop-common/src/test/resources/testConf.xml


> testConf.xml ls comparators in wrong order
> --
>
> Key: HADOOP-13333
> URL: https://issues.apache.org/jira/browse/HADOOP-13333
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: Vrushali C
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-13333.01.patch
>
>
> HADOOP-13079 updated file 
> {{hadoop-common-project/hadoop-common/src/test/resources/testConf.xml}} 
> incorrectly by inserting a new comparator between 2 comparators for {{option 
> -h}}:
> {code:xml}
> <comparator>
>   <type>RegexpComparator</type>
>   <expected-output>^\s*-h\s+Formats the sizes of files in a 
> human-readable fashion( )*</expected-output>
> </comparator>
> <comparator>
>   <type>RegexpComparator</type>
>   <expected-output>^\s*-q\s+Print \? instead of non-printable 
> characters\.( )*</expected-output>
> </comparator>
> <comparator>
>   <type>RegexpComparator</type>
>   <expected-output>^\s*rather than a number of bytes\.( )*</expected-output>
> </comparator>
> {code}






[jira] [Commented] (HADOOP-13437) KMS should reload whitelist and default key ACLs when hot-reloading

2016-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421785#comment-15421785
 ] 

Hadoop QA commented on HADOOP-13437:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-common-project/hadoop-kms: The patch 
generated 0 new + 7 unchanged - 9 fixed = 7 total (was 16) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
9s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823768/HADOOP-13437.05.patch 
|
| JIRA Issue | HADOOP-13437 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d843892d6bdf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 03dea65 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10252/testReport/ |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10252/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> KMS should reload whitelist and default key ACLs when hot-reloading
> ---
>
> Key: HADOOP-13437
> URL: https://issues.apache.org/jira/browse/HADOOP-13437
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13437.01.patch, HADOOP-13437.02.patch, 
> HADOOP-13437.03.patch, 

[jira] [Commented] (HADOOP-13446) S3Guard: Support running isolated unit tests separate from AWS integration tests.

2016-08-15 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421814#comment-15421814
 ] 

Aaron Fabbri commented on HADOOP-13446:
---

+1 getting this stuff merged and rebasing the feature branch.  Shout if I can 
help.

> S3Guard: Support running isolated unit tests separate from AWS integration 
> tests.
> -
>
> Key: HADOOP-13446
> URL: https://issues.apache.org/jira/browse/HADOOP-13446
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13446-HADOOP-13345.001.patch, 
> HADOOP-13446-HADOOP-13345.002.patch, HADOOP-13446-HADOOP-13345.003.patch
>
>
> Currently, the hadoop-aws module only runs Surefire if AWS credentials have 
> been configured.  This implies that all tests must run integrated with the 
> AWS back-end.  It also means that no tests run as part of ASF pre-commit.  
> This issue proposes for the hadoop-aws module to support running isolated 
> unit tests without integrating with AWS.  This will benefit S3Guard, because 
> we expect the need for isolated mock-based testing to simulate eventual 
> consistency behavior.  It also benefits hadoop-aws in general by allowing 
> pre-commit to do something more valuable.






[jira] [Updated] (HADOOP-13494) ReconfigurableBase can log sensitive information

2016-08-15 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13494:
---
Attachment: HADOOP-13494.003.patch

Thanks for the review.

Fixed Javadoc and a couple of other checkstyle issues. Also made the list 
configurable, simplified some of the defaults to improve readability without 
practically expanding what would get matched, and added 'password' to the list 
(to catch the ssl.*password properties you saw in DFSConfigKeys.java).

{quote}It makes the logic in ReconfigurableBase a little more complicated 
though, since we'll need per-Configuration redactors.{quote}

I'm not entirely sure I caught your meaning here. I had the list just get 
parsed from the property, with commas to separate expressions, and then used 
the list as before. Did you have something else in mind?
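
For illustration, a minimal sketch of the comma-separated approach described above — parse a list of regular expressions from a single property value and redact any configuration value whose key matches one of them. The class name, property format, and replacement string are hypothetical stand-ins, not the patch's actual code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class ConfRedactor {
    private final List<Pattern> patterns = new ArrayList<>();

    // The property value is a comma-separated list of regexes,
    // e.g. "secret$, password$, fs\\.s3a\\.access\\.key".
    // Anchoring "password" to the end of the key avoids matching
    // unrelated properties that merely contain the word.
    public ConfRedactor(String commaSeparatedRegexes) {
        for (String re : commaSeparatedRegexes.split(",")) {
            patterns.add(Pattern.compile(re.trim(), Pattern.CASE_INSENSITIVE));
        }
    }

    // Return the value unchanged unless the key matches a sensitive pattern.
    public String redact(String key, String value) {
        for (Pattern p : patterns) {
            if (p.matcher(key).find()) {
                return "<redacted>";
            }
        }
        return value;
    }
}
```

The same redactor instance can then be used for both the old and the new value before either is logged.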

> ReconfigurableBase can log sensitive information
> 
>
> Key: HADOOP-13494
> URL: https://issues.apache.org/jira/browse/HADOOP-13494
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13494.001.patch, HADOOP-13494.002.patch, 
> HADOOP-13494.003.patch
>
>
> ReconfigurableBase will log old and new configuration values, which may cause 
> sensitive parameters (most notably cloud storage keys, though there may be 
> other instances) to get included in the logs. 
> Given the currently small list of reconfigurable properties, an argument 
> could be made for simply not logging the property values at all, but this is 
> not the only instance where potentially sensitive configuration gets written 
> somewhere else in plaintext. I think a generic mechanism for redacting 
> sensitive information for textual display will be useful to some of the web 
> UIs too.






[jira] [Updated] (HADOOP-13491) fix several warnings from findbugs

2016-08-15 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

uncleGen updated HADOOP-13491:
--
Attachment: (was: HADOOP-13491-HADOOP-12756.001.patch)

> fix several warnings from findbugs
> --
>
> Key: HADOOP-13491
> URL: https://issues.apache.org/jira/browse/HADOOP-13491
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13491-HADOOP-12756.001.patch
>
>
> {code:title=Bad practice Warnings|borderStyle=solid}
> Code  Warning
> RRorg.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores 
> result of java.io.InputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long)
> Called method java.io.InputStream.skip(long)
> At AliyunOSSInputStream.java:[line 235]
> RR
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() 
> ignores result of java.io.FileInputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject()
> Called method java.io.FileInputStream.skip(long)
> At AliyunOSSOutputStream.java:[line 177]
> RVExceptional return value of java.io.File.delete() ignored in 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Called method java.io.File.delete()
> At AliyunOSSOutputStream.java:[line 116]
> {code}
> {code:title=Multithreaded correctness Warnings|borderStyle=solid}
> Code  Warning
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked 
> 90% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining
> Synchronized 90% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Synchronized access at AliyunOSSInputStream.java:[line 106]
> Synchronized access at AliyunOSSInputStream.java:[line 168]
> Synchronized access at AliyunOSSInputStream.java:[line 189]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 190]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of 
> time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position
> Synchronized 66% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 232]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 235]
> Unsynchronized access at AliyunOSSInputStream.java:[line 236]
> Unsynchronized access at AliyunOSSInputStream.java:[line 245]
> Synchronized access at AliyunOSSInputStream.java:[line 222]
> Synchronized access at AliyunOSSInputStream.java:[line 105]
> Synchronized access at AliyunOSSInputStream.java:[line 167]
> Synchronized access at AliyunOSSInputStream.java:[line 169]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 114]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 259]
> Synchronized access at AliyunOSSInputStream.java:[line 266]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream; locked 
> 85% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream
> Synchronized 85% of the time
> Unsynchronized 
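
The SR_NOT_CHECKED findings quoted above come down to ignoring the return value of java.io.InputStream.skip(long), which is allowed to skip fewer bytes than requested. A minimal sketch of the usual remedy — loop until the requested count has been skipped or EOF is reached (names are illustrative, not the actual patch):

```java
import java.io.IOException;
import java.io.InputStream;

public class FullySkip {
    // skip(n) may skip fewer than n bytes, or even 0, without hitting EOF.
    // Loop, falling back to read() to distinguish "0 skipped" from true EOF.
    // Returns the number of bytes actually skipped.
    public static long fullySkip(InputStream in, long n) throws IOException {
        long remaining = n;
        while (remaining > 0) {
            long skipped = in.skip(remaining);
            if (skipped > 0) {
                remaining -= skipped;
            } else if (in.read() >= 0) {
                remaining--;          // consumed one byte the slow way
            } else {
                break;                // true EOF
            }
        }
        return n - remaining;
    }
}
```

Checking (or looping on) the return value this way is what clears the findbugs warning, rather than suppressing it.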

[jira] [Updated] (HADOOP-13437) KMS should reload whitelist and default key ACLs when hot-reloading

2016-08-15 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13437:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2. Thanks much, Andrew and Arun!

> KMS should reload whitelist and default key ACLs when hot-reloading
> ---
>
> Key: HADOOP-13437
> URL: https://issues.apache.org/jira/browse/HADOOP-13437
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13437.01.patch, HADOOP-13437.02.patch, 
> HADOOP-13437.03.patch, HADOOP-13437.04.patch, HADOOP-13437.05.patch
>
>
> When hot-reloading, {{KMSACLs#setKeyACLs}} ignores whitelist and default key 
> entries if they're present in memory.
> We should reload them; hot-reload and cold-start should not have any 
> difference in behavior.
> Credit to [~dilaver] for finding this.
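
The behavior change described above can be sketched as follows: on hot-reload, rebuild the ACL maps from the new configuration unconditionally instead of keeping stale in-memory entries. KeyAclStore and the key prefix are illustrative stand-ins for the KMSACLs internals, not the committed patch:

```java
import java.util.HashMap;
import java.util.Map;

public class KeyAclStore {
    private volatile Map<String, String> defaultKeyAcls = new HashMap<>();

    // Rebuild from the freshly read configuration every time, so that a
    // hot-reload and a cold start yield identical state. The buggy pattern
    // was to skip entries already present in memory.
    public void reloadKeyAcls(Map<String, String> newConf) {
        Map<String, String> fresh = new HashMap<>();
        for (Map.Entry<String, String> e : newConf.entrySet()) {
            if (e.getKey().startsWith("default.key.acl.")) {
                fresh.put(e.getKey(), e.getValue());
            }
        }
        defaultKeyAcls = fresh;  // swap the whole map; stale entries drop out
    }

    public String get(String key) {
        return defaultKeyAcls.get(key);
    }
}
```

Swapping in a freshly built map also means entries removed from the configuration disappear on reload, rather than lingering from the previous load.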






[jira] [Updated] (HADOOP-13491) fix several warnings from findbugs

2016-08-15 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

uncleGen updated HADOOP-13491:
--
Attachment: HADOOP-13491-HADOOP-12756.001.patch

> fix several warnings from findbugs
> --
>
> Key: HADOOP-13491
> URL: https://issues.apache.org/jira/browse/HADOOP-13491
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13491-HADOOP-12756.001.patch
>
>
> {code:title=Bad practice Warnings|borderStyle=solid}
> Code  Warning
> RRorg.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores 
> result of java.io.InputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long)
> Called method java.io.InputStream.skip(long)
> At AliyunOSSInputStream.java:[line 235]
> RR
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() 
> ignores result of java.io.FileInputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject()
> Called method java.io.FileInputStream.skip(long)
> At AliyunOSSOutputStream.java:[line 177]
> RVExceptional return value of java.io.File.delete() ignored in 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Called method java.io.File.delete()
> At AliyunOSSOutputStream.java:[line 116]
> {code}
> {code:title=Multithreaded correctness Warnings|borderStyle=solid}
> Code  Warning
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked 
> 90% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining
> Synchronized 90% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Synchronized access at AliyunOSSInputStream.java:[line 106]
> Synchronized access at AliyunOSSInputStream.java:[line 168]
> Synchronized access at AliyunOSSInputStream.java:[line 189]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 190]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of 
> time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position
> Synchronized 66% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 232]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 235]
> Unsynchronized access at AliyunOSSInputStream.java:[line 236]
> Unsynchronized access at AliyunOSSInputStream.java:[line 245]
> Synchronized access at AliyunOSSInputStream.java:[line 222]
> Synchronized access at AliyunOSSInputStream.java:[line 105]
> Synchronized access at AliyunOSSInputStream.java:[line 167]
> Synchronized access at AliyunOSSInputStream.java:[line 169]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 114]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 259]
> Synchronized access at AliyunOSSInputStream.java:[line 266]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream; locked 
> 85% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream
> Synchronized 85% of the time
> Unsynchronized access at 

[jira] [Updated] (HADOOP-13491) fix several warnings from findbugs

2016-08-15 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

uncleGen updated HADOOP-13491:
--
Attachment: HADOOP-13491-HADOOP-12756.002.patch

> fix several warnings from findbugs
> --
>
> Key: HADOOP-13491
> URL: https://issues.apache.org/jira/browse/HADOOP-13491
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13491-HADOOP-12756.001.patch, 
> HADOOP-13491-HADOOP-12756.002.patch
>
>
> {code:title=Bad practice Warnings|borderStyle=solid}
> Code  Warning
> RRorg.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores 
> result of java.io.InputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long)
> Called method java.io.InputStream.skip(long)
> At AliyunOSSInputStream.java:[line 235]
> RR
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() 
> ignores result of java.io.FileInputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject()
> Called method java.io.FileInputStream.skip(long)
> At AliyunOSSOutputStream.java:[line 177]
> RVExceptional return value of java.io.File.delete() ignored in 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Called method java.io.File.delete()
> At AliyunOSSOutputStream.java:[line 116]
> {code}
> {code:title=Multithreaded correctness Warnings|borderStyle=solid}
> Code  Warning
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked 
> 90% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining
> Synchronized 90% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Synchronized access at AliyunOSSInputStream.java:[line 106]
> Synchronized access at AliyunOSSInputStream.java:[line 168]
> Synchronized access at AliyunOSSInputStream.java:[line 189]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 190]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of 
> time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position
> Synchronized 66% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 232]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 235]
> Unsynchronized access at AliyunOSSInputStream.java:[line 236]
> Unsynchronized access at AliyunOSSInputStream.java:[line 245]
> Synchronized access at AliyunOSSInputStream.java:[line 222]
> Synchronized access at AliyunOSSInputStream.java:[line 105]
> Synchronized access at AliyunOSSInputStream.java:[line 167]
> Synchronized access at AliyunOSSInputStream.java:[line 169]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 114]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 259]
> Synchronized access at AliyunOSSInputStream.java:[line 266]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream; locked 
> 85% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.wrappedStream
> Synchronized 85% 

[jira] [Updated] (HADOOP-13494) ReconfigurableBase can log sensitive information

2016-08-15 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13494:
---
Attachment: HADOOP-13494.004.patch

Aahh I understand now. Passing in the configuration object now, and anchoring 
password to the end of the key, which should be a much safer heuristic. Also in this 
patch, separate redactors for the old and new config. I originally thought it 
safer to go with the new config only, because none of the sensitive properties 
are currently reconfigurable. I don't really see it being a common case where 
you change your mind to stop redacting certain configs. If you add a property 
to redact, the old value still gets logged. I see your argument too though, so 
no strong opinions here. Thoughts?

> ReconfigurableBase can log sensitive information
> 
>
> Key: HADOOP-13494
> URL: https://issues.apache.org/jira/browse/HADOOP-13494
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13494.001.patch, HADOOP-13494.002.patch, 
> HADOOP-13494.003.patch, HADOOP-13494.004.patch
>
>
> ReconfigurableBase will log old and new configuration values, which may cause 
> sensitive parameters (most notably cloud storage keys, though there may be 
> other instances) to get included in the logs. 
> Given the currently small list of reconfigurable properties, an argument 
> could be made for simply not logging the property values at all, but this is 
> not the only instance where potentially sensitive configuration gets written 
> somewhere else in plaintext. I think a generic mechanism for redacting 
> sensitive information for textual display will be useful to some of the web 
> UIs too.






[jira] [Work started] (HADOOP-13482) Provide hadoop-aliyun oss configuration documents

2016-08-15 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13482 started by uncleGen.
-
> Provide hadoop-aliyun oss configuration documents
> -
>
> Key: HADOOP-13482
> URL: https://issues.apache.org/jira/browse/HADOOP-13482
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
>Priority: Minor
> Fix For: HADOOP-12756
>
>







[jira] [Work started] (HADOOP-13481) User end documents for Aliyun OSS FileSystem

2016-08-15 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13481 started by uncleGen.
-
> User end documents for Aliyun OSS FileSystem
> 
>
> Key: HADOOP-13481
> URL: https://issues.apache.org/jira/browse/HADOOP-13481
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
>Priority: Minor
> Fix For: HADOOP-12756
>
>







[jira] [Resolved] (HADOOP-13482) Provide hadoop-aliyun oss configuration documents

2016-08-15 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

uncleGen resolved HADOOP-13482.
---
Resolution: Duplicate

> Provide hadoop-aliyun oss configuration documents
> -
>
> Key: HADOOP-13482
> URL: https://issues.apache.org/jira/browse/HADOOP-13482
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
>Priority: Minor
> Fix For: HADOOP-12756
>
>







[jira] [Commented] (HADOOP-13491) fix several warnings from findbugs

2016-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422024#comment-15422024
 ] 

Hadoop QA commented on HADOOP-13491:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
36s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-tools/hadoop-aliyun in HADOOP-12756 has 8 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-tools/hadoop-aliyun generated 0 new + 0 
unchanged - 8 fixed = 0 total (was 8) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
10s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823802/HADOOP-13491-HADOOP-12756.002.patch
 |
| JIRA Issue | HADOOP-13491 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 238f98d22757 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12756 / 8346f922 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10254/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aliyun-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10254/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10254/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> fix several warnings from findbugs
> --
>
> Key: HADOOP-13491
> URL: 

[jira] [Commented] (HADOOP-13437) KMS should reload whitelist and default key ACLs when hot-reloading

2016-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422025#comment-15422025
 ] 

Hudson commented on HADOOP-13437:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10277 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10277/])
HADOOP-13437. KMS should reload whitelist and default key ACLs when (xiao: rev 
9daa9979a1f92fb3230361c10ddfcc1633795c0e)
* (edit) 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMSACLs.java
* (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java


> KMS should reload whitelist and default key ACLs when hot-reloading
> ---
>
> Key: HADOOP-13437
> URL: https://issues.apache.org/jira/browse/HADOOP-13437
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13437.01.patch, HADOOP-13437.02.patch, 
> HADOOP-13437.03.patch, HADOOP-13437.04.patch, HADOOP-13437.05.patch
>
>
> When hot-reloading, {{KMSACLs#setKeyACLs}} ignores whitelist and default key 
> entries if they're present in memory.
> We should reload them, hot-reload and cold-start should not have any 
> difference in behavior.
> Credit to [~dilaver] for finding this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13475) Adding Append Blob support for WASB

2016-08-15 Thread Dushyanth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422050#comment-15422050
 ] 

Dushyanth commented on HADOOP-13475:


[~Raulmsm] Thank you very much for the change.

I see that there are a lot of diffs in the patch, probably introduced by a 
different formatter being used. Kindly clean those up.

Inside the retrieveAppendStream(..) method it is probably a good idea to move 
the block blob handling into an else statement, following the same pattern as 
other similar methods in the class.

Considering that AppendBlobs, unlike BlockBlobs, would be writing data onto the 
actual blobs, should the output stream support the flush() API, i.e. should it 
implement Syncable?

I don't see any implementation for the interface CloudBlockAppendWrapper, nor 
is the interface extended in the patch. Am I missing something?
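To illustrate the Syncable question above: a minimal, hypothetical sketch of an append-blob output stream that buffers writes and persists them on flush()/hsync(). The Syncable interface is declared locally here as a stand-in for org.apache.hadoop.fs.Syncable so the sketch compiles alone; the class and its in-memory "blob" are illustrative, not the patch's actual implementation.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Local stand-in for org.apache.hadoop.fs.Syncable so this sketch is self-contained.
interface Syncable {
    void hflush() throws IOException;
    void hsync() throws IOException;
}

// Hypothetical append-blob stream: buffers writes, appends them to the blob on flush.
public class AppendBlobOutputStream extends OutputStream implements Syncable {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private final OutputStream blob; // stands in for the append-blob service client

    public AppendBlobOutputStream(OutputStream blob) {
        this.blob = blob;
    }

    @Override
    public void write(int b) {
        buffer.write(b);
    }

    @Override
    public void flush() throws IOException {
        // AppendBlobs persist each append directly, so flushing the buffer
        // makes the data durable on the service side.
        buffer.writeTo(blob);
        buffer.reset();
        blob.flush();
    }

    @Override
    public void hflush() throws IOException {
        flush();
    }

    @Override
    public void hsync() throws IOException {
        flush();
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream blob = new ByteArrayOutputStream();
        AppendBlobOutputStream out = new AppendBlobOutputStream(blob);
        out.write('h');
        out.write('i');
        out.hsync();
        System.out.println(blob.toString());
    }
}
```

Since every append is persisted by the service, hflush() and hsync() can share one code path here; a real implementation would also need to handle close() and partial-failure retries.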


> Adding Append Blob support for WASB
> ---
>
> Key: HADOOP-13475
> URL: https://issues.apache.org/jira/browse/HADOOP-13475
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Raul da Silva Martins
>Assignee: Raul da Silva Martins
>Priority: Critical
> Attachments: 0001-Added-Support-for-Azure-AppendBlobs.patch
>
>
> Currently the WASB implementation of the HDFS interface does not support the 
> utilization of Azure AppendBlobs underneath. As owners of a large scale 
> service who intend to start writing to Append blobs, we need this support in 
> order to be able to keep using our HDI capabilities.
> This JIRA is added to implement Azure AppendBlob support to WASB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13494) ReconfigurableBase can log sensitive information

2016-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422054#comment-15422054
 ] 

Hadoop QA commented on HADOOP-13494:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 115 unchanged - 1 fixed = 116 total (was 116) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 57s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823806/HADOOP-13494.004.patch
 |
| JIRA Issue | HADOOP-13494 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 35011672dd59 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9daa997 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10255/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10255/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10255/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10255/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message 

[jira] [Commented] (HADOOP-13497) fix wrong command in CredentialProviderAPI.md

2016-08-15 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422098#comment-15422098
 ] 

Yuanbo Liu commented on HADOOP-13497:
-

[~iwasakims] I'm tagging you in this loop and hope to get your thoughts, if you 
don't mind. Thanks in advance.

> fix wrong command in CredentialProviderAPI.md
> -
>
> Key: HADOOP-13497
> URL: https://issues.apache.org/jira/browse/HADOOP-13497
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>Priority: Trivial
> Attachments: HADOOP-13497.001.patch
>
>
> In CredentialProviderAPI.md line 122
> {quote}
> Example: `hadoop credential create ssl.server.keystore.password 
> jceks://file/tmp/test.jceks`
> {quote}
> should be
> {quote}
> Example: `hadoop credential create ssl.server.keystore.password -provider 
> jceks://file/tmp/test.jceks`
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13491) fix several warnings from findbugs

2016-08-15 Thread shimingfei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15422124#comment-15422124
 ] 

shimingfei edited comment on HADOOP-13491 at 8/16/16 4:26 AM:
--

Got it!
But deleteOnExit is only triggered when the JVM exits, and that can cause huge 
disk occupation for long-running jobs.


was (Author: shimingfei):
Got it!
But the deleteOnExit will be only triggered when JVM exists, and it will causes 
huge disk occupation for long run jobs
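The concern above is that deleteOnExit hooks only run at JVM termination, so buffer files pile up for the lifetime of a long-running process. A minimal sketch of the usual remedy, deleting each temporary part file as soon as it has been consumed and keeping deleteOnExit only as a fallback (uploadAndCleanUp and upload are illustrative names, not the patch's API):

```java
import java.io.File;
import java.io.IOException;

public class TempFileCleanup {
    // Consume the buffered part, then reclaim disk space immediately rather
    // than waiting for JVM exit (deleteOnExit only runs at normal termination).
    static void uploadAndCleanUp(File part) throws IOException {
        try {
            upload(part);
        } finally {
            if (!part.delete()) {
                // Eager delete failed; fall back to deleteOnExit as a safety net.
                part.deleteOnExit();
            }
        }
    }

    static void upload(File part) {
        // Placeholder: a real implementation would stream the file to the store.
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("oss-part-", ".tmp");
        uploadAndCleanUp(tmp);
        System.out.println(tmp.exists());
    }
}
```

With this pattern, disk usage stays bounded by the parts currently in flight instead of growing with every part ever written.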

> fix several warnings from findbugs
> --
>
> Key: HADOOP-13491
> URL: https://issues.apache.org/jira/browse/HADOOP-13491
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13491-HADOOP-12756.001.patch, 
> HADOOP-13491-HADOOP-12756.002.patch
>
>
> {code:title=Bad practice Warnings|borderStyle=solid}
> Code  Warning
> RRorg.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores 
> result of java.io.InputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long)
> Called method java.io.InputStream.skip(long)
> At AliyunOSSInputStream.java:[line 235]
> RR
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() 
> ignores result of java.io.FileInputStream.skip(long)
> Bug type SR_NOT_CHECKED (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject()
> Called method java.io.FileInputStream.skip(long)
> At AliyunOSSOutputStream.java:[line 177]
> RVExceptional return value of java.io.File.delete() ignored in 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Called method java.io.File.delete()
> At AliyunOSSOutputStream.java:[line 116]
> {code}
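The SR_NOT_CHECKED warnings above come from calling java.io.InputStream.skip(long) without checking its return value: skip() may skip fewer bytes than requested. A hedged sketch of the usual fix, looping until the requested count is reached (the method name fullySkip is illustrative, not from the patch):

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class SkipFully {
    // Skip exactly n bytes, or throw EOFException if the stream ends first.
    static void fullySkip(InputStream in, long n) throws IOException {
        long remaining = n;
        while (remaining > 0) {
            long skipped = in.skip(remaining);
            if (skipped > 0) {
                remaining -= skipped;
            } else if (in.read() >= 0) {
                // skip() returned 0 but the stream is not at EOF; consume one byte.
                remaining--;
            } else {
                throw new EOFException(
                    "stream ended with " + remaining + " bytes left to skip");
            }
        }
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[]{1, 2, 3, 4, 5});
        fullySkip(in, 3);
        System.out.println(in.read()); // the fourth byte
    }
}
```

The read() fallback matters because skip() is allowed to return 0 even mid-stream, so looping on skip() alone could spin forever.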
> {code:title=Multithreaded correctness Warnings|borderStyle=solid}
> Code  Warning
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked 
> 90% of time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining
> Synchronized 90% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Synchronized access at AliyunOSSInputStream.java:[line 106]
> Synchronized access at AliyunOSSInputStream.java:[line 168]
> Synchronized access at AliyunOSSInputStream.java:[line 189]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 190]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> ISInconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of 
> time
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position
> Synchronized 66% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 232]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 235]
> Unsynchronized access at AliyunOSSInputStream.java:[line 236]
> Unsynchronized access at AliyunOSSInputStream.java:[line 245]
> Synchronized access at AliyunOSSInputStream.java:[line 222]
> Synchronized access at AliyunOSSInputStream.java:[line 105]
> Synchronized access at AliyunOSSInputStream.java:[line 167]
> Synchronized access at AliyunOSSInputStream.java:[line 169]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 114]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 130]
> Synchronized access at AliyunOSSInputStream.java:[line 259]
> Synchronized access at AliyunOSSInputStream.java:[line 266]
> ISInconsistent