[jira] [Commented] (HADOOP-12633) Extend Erasure Code to support POWER Chip acceleration

2016-03-07 Thread tongxiaojun (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184507#comment-15184507
 ] 

tongxiaojun commented on HADOOP-12633:
--

RedHadoop will merge into CRH.

> Extend Erasure Code to support POWER Chip acceleration
> --
>
> Key: HADOOP-12633
> URL: https://issues.apache.org/jira/browse/HADOOP-12633
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: wqijun
>Assignee: wqijun
> Attachments: hadoopec-ACC.patch
>
>
> Erasure coding is a very important feature in the new HDFS version. This JIRA 
> will focus on extending EC to support multiple types of EC acceleration via a 
> C library and other hardware methods, such as GPU or FPGA. Compared with 
> HADOOP-11887, this JIRA focuses more on leveraging POWER chip capability to 
> accelerate EC calculation. 
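The idea of pluggable acceleration backends can be sketched as below. This is a minimal, hypothetical illustration, assuming a factory that falls back to pure Java when no native library is loaded; none of these class or method names are the actual Hadoop EC API.

```java
// Hypothetical sketch of pluggable erasure-coder acceleration: a factory
// returns a hardware-accelerated encoder when a native backend is
// available and falls back to pure Java otherwise. All names here are
// illustrative; this is not the actual Hadoop EC API.
class ErasureCoderSketch {

    interface RawErasureEncoder {
        String name();
    }

    // Portable pure-Java Reed-Solomon encoder (fallback).
    static class JavaRSEncoder implements RawErasureEncoder {
        public String name() { return "java-rs"; }
    }

    // Stand-in for an encoder backed by a native/accelerated library.
    static class AcceleratedRSEncoder implements RawErasureEncoder {
        public String name() { return "native-accelerated-rs"; }
    }

    // Select the accelerated coder only when the native library loaded.
    static RawErasureEncoder createEncoder(boolean nativeAvailable) {
        return nativeAvailable ? new AcceleratedRSEncoder() : new JavaRSEncoder();
    }
}
```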



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184490#comment-15184490
 ] 

Hudson commented on HADOOP-12860:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9442 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9442/])
HADOOP-12860. Expand section "Data Encryption on HTTP" in SecureMode (aajisaka: 
rev f86850b544dcb34ee3c9336fad584309e886dbed)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md
* hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md


> Expand section "Data Encryption on HTTP" in SecureMode documentation
> 
>
> Key: HADOOP-12860
> URL: https://issues.apache.org/jira/browse/HADOOP-12860
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: documentation
> Fix For: 2.8.0
>
> Attachments: HADOOP-12860.001.patch, HADOOP-12860.002.patch, 
> HADOOP-12860.003.patch, HADOOP-12860.004.patch, HADOOP-12860.005.patch
>
>
> Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
> expanded to cover the configuration needed to enable SSL for the web UIs of 
> the HDFS/YARN/MapReduce daemons.
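The kind of configuration the expanded section covers can be sketched as below. The property names ({{dfs.http.policy}}, {{dfs.namenode.https-address}}) are real HDFS settings; the values are only examples, not recommendations from this issue.

```xml
<!-- hdfs-site.xml: enable HTTPS for the HDFS web UIs (example values) -->
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
<property>
  <name>dfs.namenode.https-address</name>
  <value>0.0.0.0:50470</value>
</property>
```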



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184478#comment-15184478
 ] 

Hadoop QA commented on HADOOP-12862:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 8s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 32 unchanged - 2 fixed = 32 total (was 34) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 58s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 8s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 44s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791922/HADOOP-12862.004.patch
 |
| JIRA Issue | HADOOP-12862 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  

[jira] [Commented] (HADOOP-12899) External distribution stitching scripts do not work correctly on Windows.

2016-03-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184455#comment-15184455
 ] 

Chris Nauroth commented on HADOOP-12899:


There are no tests, because this is a change in the build process only.  As I 
mentioned earlier, I did manual testing on both Windows and Linux to confirm 
that bundling Snappy works correctly.

> External distribution stitching scripts do not work correctly on Windows.
> -
>
> Key: HADOOP-12899
> URL: https://issues.apache.org/jira/browse/HADOOP-12899
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HADOOP-12899.001.patch
>
>
> In HADOOP-12850, we pulled the dist-layout-stitching and dist-tar-stitching 
> scripts out of hadoop-dist/pom.xml and into external files.  It appears this 
> change is not working correctly on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-07 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12860:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, and branch-2.8. Thanks [~jojochuang] for the 
contribution!

> Expand section "Data Encryption on HTTP" in SecureMode documentation
> 
>
> Key: HADOOP-12860
> URL: https://issues.apache.org/jira/browse/HADOOP-12860
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: documentation
> Fix For: 2.8.0
>
> Attachments: HADOOP-12860.001.patch, HADOOP-12860.002.patch, 
> HADOOP-12860.003.patch, HADOOP-12860.004.patch, HADOOP-12860.005.patch
>
>
> Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
> expanded to cover the configuration needed to enable SSL for the web UIs of 
> the HDFS/YARN/MapReduce daemons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-07 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184435#comment-15184435
 ] 

Akira AJISAKA commented on HADOOP-12860:


+1, checking this in.

> Expand section "Data Encryption on HTTP" in SecureMode documentation
> 
>
> Key: HADOOP-12860
> URL: https://issues.apache.org/jira/browse/HADOOP-12860
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: documentation
> Attachments: HADOOP-12860.001.patch, HADOOP-12860.002.patch, 
> HADOOP-12860.003.patch, HADOOP-12860.004.patch, HADOOP-12860.005.patch
>
>
> Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
> expanded to cover the configuration needed to enable SSL for the web UIs of 
> the HDFS/YARN/MapReduce daemons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184431#comment-15184431
 ] 

Hadoop QA commented on HADOOP-12860:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 55s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 43s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791916/HADOOP-12860.005.patch
 |
| JIRA Issue | HADOOP-12860 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux a19407c9a486 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c2140d0 |
| modules | C:  hadoop-common-project/hadoop-common   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site  U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8817/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Expand section "Data Encryption on HTTP" in SecureMode documentation
> 
>
> Key: HADOOP-12860
> URL: https://issues.apache.org/jira/browse/HADOOP-12860
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: documentation
> Attachments: HADOOP-12860.001.patch, HADOOP-12860.002.patch, 
> HADOOP-12860.003.patch, HADOOP-12860.004.patch, HADOOP-12860.005.patch
>
>
> Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
> expanded to cover the configuration needed to enable SSL for the web UIs of 
> the HDFS/YARN/MapReduce daemons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12789) log classpath of ApplicationClassLoader at INFO level

2016-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184425#comment-15184425
 ] 

Hudson commented on HADOOP-12789:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9441 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9441/])
HADOOP-12789. log classpath of ApplicationClassLoader at INFO level. (mingma: 
rev 49eedc7ff02ea61764f416f0e2ddf81370aec5fb)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java


> log classpath of ApplicationClassLoader at INFO level
> -
>
> Key: HADOOP-12789
> URL: https://issues.apache.org/jira/browse/HADOOP-12789
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.6.3
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Fix For: 2.8.0, 2.7.3, 2.6.5
>
> Attachments: HADOOP-12789.01.patch
>
>
> Currently {{ApplicationClassLoader}} does not log the classpath at the INFO 
> level although the system classes are logged at that level. Knowing exactly 
> what classpath {{ApplicationClassLoader}} has is a critical piece of 
> information for troubleshooting. We should log it at the INFO level.
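The requested change can be illustrated with a minimal sketch: log the classpath at INFO when the loader is constructed, so it shows up in default logs. The class and method names here are hypothetical stand-ins, not Hadoop's actual {{ApplicationClassLoader}} code.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Arrays;
import java.util.logging.Logger;

// Illustrative stand-in for the change the issue asks for: log the
// classloader's classpath at INFO (not DEBUG) so it appears in default
// logs. Class and method names here are hypothetical, not Hadoop's.
class ClasspathLoggingLoader extends URLClassLoader {

    private static final Logger LOG =
        Logger.getLogger(ClasspathLoggingLoader.class.getName());

    ClasspathLoggingLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
        // Emitted once at construction, at a level captured by default.
        LOG.info("classpath: " + formatClasspath(urls));
    }

    // Exposed separately so the log line is easy to build and inspect.
    static String formatClasspath(URL[] urls) {
        return Arrays.toString(urls);
    }
}
```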



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12633) Extend Erasure Code to support POWER Chip acceleration

2016-03-07 Thread wqijun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wqijun updated HADOOP-12633:

Attachment: hadoopec-ACC.patch

> Extend Erasure Code to support POWER Chip acceleration
> --
>
> Key: HADOOP-12633
> URL: https://issues.apache.org/jira/browse/HADOOP-12633
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: wqijun
>Assignee: wqijun
> Attachments: hadoopec-ACC.patch
>
>
> Erasure coding is a very important feature in the new HDFS version. This JIRA 
> will focus on extending EC to support multiple types of EC acceleration via a 
> C library and other hardware methods, such as GPU or FPGA. Compared with 
> HADOOP-11887, this JIRA focuses more on leveraging POWER chip capability to 
> accelerate EC calculation. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12789) log classpath of ApplicationClassLoader at INFO level

2016-03-07 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HADOOP-12789:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.6.5
   2.7.3
   2.8.0
   Status: Resolved  (was: Patch Available)

+1. Committed to trunk, branch-2, branch-2.8, branch-2.7 and branch-2.6. Thanks 
[~sjlee0] for the contribution.

> log classpath of ApplicationClassLoader at INFO level
> -
>
> Key: HADOOP-12789
> URL: https://issues.apache.org/jira/browse/HADOOP-12789
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.6.3
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Fix For: 2.8.0, 2.7.3, 2.6.5
>
> Attachments: HADOOP-12789.01.patch
>
>
> Currently {{ApplicationClassLoader}} does not log the classpath at the INFO 
> level although the system classes are logged at that level. Knowing exactly 
> what classpath {{ApplicationClassLoader}} has is a critical piece of 
> information for troubleshooting. We should log it at the INFO level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-03-07 Thread Shawn Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184386#comment-15184386
 ] 

Shawn Guo commented on HADOOP-12756:


Hi Guys

Just searched the Hadoop JIRA list and found this JIRA task about Aliyun OSS & 
Hadoop integration.
I have a similar subproject in the incubation stage, with similar functionality 
to this patch.
@shimingfei, could we collaborate on this? Do you have an email address where 
we could discuss the details?

Thanks


> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: 0001-OSS-filesystem-integration-with-Hadoop.patch, HCFS 
> User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12862:
-
Attachment: HADOOP-12862.004.patch

Fixed trivial line width code style issue.

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch, 
> HADOOP-12862.003.patch, HADOOP-12862.004.patch
>
>
> In a secure environment, SSL is used to encrypt LDAP requests for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For background: the Hadoop NameNode, as an LDAP client, talks to an LDAP 
> server to resolve a user's group mapping. In the case of LDAP over SSL, a 
> typical scenario is one-way authentication (the client verifies that the 
> server's certificate is genuine) by storing the server's certificate in the 
> client's truststore.
> A rarer scenario is two-way authentication: in addition to the client storing 
> a truststore to verify the server, the server also verifies that the client's 
> certificate is genuine, and the client stores its own certificate in its 
> keystore.
> However, the current implementation of LDAP over SSL does not seem correct, in 
> that it only configures a keystore but no truststore (so the LDAP server can 
> verify Hadoop's certificate, but Hadoop may not be able to verify the LDAP 
> server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and those should be used to configure 
> the system properties 
> {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}.
> I am a security layman so my words may be imprecise, but I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html
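The proposal above can be sketched as follows. The {{javax.net.ssl.*}} system properties and the JNDI {{Context}} keys are the standard JSSE/JNDI ones; the class name, method names, and the idea of a dedicated truststore setter are illustrative assumptions, not the actual patch.

```java
import java.util.Hashtable;
import javax.naming.Context;

// Sketch of the proposed fix: configure a truststore (one-way TLS) via
// the standard JSSE system properties before creating the LDAP context.
// The javax.net.ssl.* property names are the real JSSE ones; everything
// else (class name, method names) is illustrative.
class LdapsTruststoreSketch {

    // Hypothetical pair of settings the issue asks Hadoop to expose.
    static void configureTruststore(String path, String password) {
        System.setProperty("javax.net.ssl.trustStore", path);
        System.setProperty("javax.net.ssl.trustStorePassword", password);
    }

    // JNDI environment for an LDAP-over-SSL connection.
    static Hashtable<String, String> ldapsEnv(String url) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url); // e.g. ldaps://ldap.example.com:636
        env.put(Context.SECURITY_PROTOCOL, "ssl");
        return env;
    }
}
```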



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12895) SSLFactory#createSSLSocketFactory exception message is wrong

2016-03-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12895:
-
Attachment: HADOOP-12895.003.patch

[~andrew.wang] thanks for the suggestion.
Yes indeed this will make it more clear. Attaching the updated patch. Please 
review again, thanks!

> SSLFactory#createSSLSocketFactory exception message is wrong
> 
>
> Key: HADOOP-12895
> URL: https://issues.apache.org/jira/browse/HADOOP-12895
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
> Attachments: HADOOP-12895.001.patch, HADOOP-12895.002.patch, 
> HADOOP-12895.003.patch
>
>
> If in SERVER mode, the following code should throw an exception indicating 
> that the factory is in SERVER mode, not CLIENT mode. Otherwise, it can be 
> confusing.
> {code:title=SSLSocketFactory.java}
> public SSLSocketFactory createSSLSocketFactory()
>     throws GeneralSecurityException, IOException {
>   if (mode != Mode.CLIENT) {
>     throw new IllegalStateException("Factory is in CLIENT mode");
>   }
>   return context.getSocketFactory();
> }
> {code}
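One way the fix can be sketched: report the factory's actual mode instead of claiming it "is in CLIENT mode". This standalone class mirrors the quoted snippet but is an illustrative assumption, not the actual Hadoop SSLFactory or its committed patch.

```java
import java.io.IOException;
import java.security.GeneralSecurityException;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;

// Standalone sketch of the suggested fix: the guard rejects factories
// that are not in CLIENT mode, so the message should report the
// factory's actual mode rather than claim it "is in CLIENT mode".
// Illustrative only, not the actual Hadoop SSLFactory.
class SslFactorySketch {

    enum Mode { CLIENT, SERVER }

    private final Mode mode;
    private final SSLContext context;

    SslFactorySketch(Mode mode, SSLContext context) {
        this.mode = mode;
        this.context = context;
    }

    public SSLSocketFactory createSSLSocketFactory()
            throws GeneralSecurityException, IOException {
        if (mode != Mode.CLIENT) {
            // Name the actual mode so the error is not confusing.
            throw new IllegalStateException(
                "Factory is not in CLIENT mode; current mode: " + mode);
        }
        return context.getSocketFactory();
    }
}
```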



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12860:
-
Attachment: HADOOP-12860.005.patch

Thanks [~ajisakaa], good catch!
I attached an updated patch to fix these two issues. Please check again!

> Expand section "Data Encryption on HTTP" in SecureMode documentation
> 
>
> Key: HADOOP-12860
> URL: https://issues.apache.org/jira/browse/HADOOP-12860
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: documentation
> Attachments: HADOOP-12860.001.patch, HADOOP-12860.002.patch, 
> HADOOP-12860.003.patch, HADOOP-12860.004.patch, HADOOP-12860.005.patch
>
>
> Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
> expanded to cover the configuration needed to enable SSL for the web UIs of 
> the HDFS/YARN/MapReduce daemons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-03-07 Thread Ling Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ling Zhou updated HADOOP-12756:
---
Attachment: 0001-OSS-filesystem-integration-with-Hadoop.patch

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: 0001-OSS-filesystem-integration-with-Hadoop.patch, HCFS 
> User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-03-07 Thread Ling Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ling Zhou updated HADOOP-12756:
---
Attachment: (was: 0001-OSS-filesystem-integration-with-Hadoop.patch)

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HCFS User manual.md, OSS integration.pdf, OSS 
> integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-07 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184187#comment-15184187
 ] 

Akira AJISAKA commented on HADOOP-12860:


Thanks [~jojochuang] for updating the patch! Mostly looks good to me.
{code}
-| `dfs.datanode.https.address` | `0.0.0.0:50470` | |
+| `dfs.datanode.https.address` | `0.0.0.0:50470` | HTTPS web UI address for the Data NameNode.
{code}
* The default port is 50475.
* "Data NameNode" should be "DataNode".

I'm +1 if these are addressed.

> Expand section "Data Encryption on HTTP" in SecureMode documentation
> 
>
> Key: HADOOP-12860
> URL: https://issues.apache.org/jira/browse/HADOOP-12860
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: documentation
> Attachments: HADOOP-12860.001.patch, HADOOP-12860.002.patch, 
> HADOOP-12860.003.patch, HADOOP-12860.004.patch
>
>
> Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
> expanded to cover the configuration needed to enable SSL for the web UIs of 
> the HDFS/YARN/MapReduce daemons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12626) Intel ISA-L libraries should be added to the Dockerfile

2016-03-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184172#comment-15184172
 ] 

Kai Zheng commented on HADOOP-12626:


Update: ISA-L has been accepted into Debian unstable 
(https://packages.debian.org/sid/libisal2), and you can pull debs from the 
standard locations: https://packages.debian.org/sid/amd64/libisal2/download

> Intel ISA-L libraries should be added to the Dockerfile
> ---
>
> Key: HADOOP-12626
> URL: https://issues.apache.org/jira/browse/HADOOP-12626
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Kai Zheng
>Priority: Blocker
> Attachments: HADOOP-12626-v1.patch
>
>
> HADOOP-11887 added a compile and runtime dependence on the Intel ISA-L 
> library but didn't add it to the Dockerfile so that it could be part of the 
> Docker-based build environment (start-build-env.sh).  This needs to be fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184168#comment-15184168
 ] 

Hadoop QA commented on HADOOP-12892:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 3s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
9s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 53s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 37s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 197m 25s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.hdfs.TestSafeModeWithStripedFile |
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.hdfs.TestLeaseRecovery2 |
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.TestSafeModeWithStripedFile |
|   | hadoop.fs.shell.find.TestName |
|   | hadoop.fs.shell.find.TestIname |

[jira] [Commented] (HADOOP-12899) External distribution stitching scripts do not work correctly on Windows.

2016-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184127#comment-15184127
 ] 

Hadoop QA commented on HADOOP-12899:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 3s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
8s {color} | {color:green} The applied patch generated 0 new + 97 unchanged - 1 
fixed = 97 total (was 98) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-dist in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-dist in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 5s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791874/HADOOP-12899.001.patch
 |
| JIRA Issue | HADOOP-12899 |
| Optional Tests |  asflicense  shellcheck  shelldocs  compile  javac  javadoc  
mvninstall  mvnsite  unit  xml  |
| uname | Linux 3b8f515db3b7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 391da36 |

[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2016-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184115#comment-15184115
 ] 

Hadoop QA commented on HADOOP-11820:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 44s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 0s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 8m 49s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791870/1.patch |
| JIRA Issue | HADOOP-11820 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 1efbf39adc16 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 391da36 |
| modules | C:  U:  |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8813/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: 1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12886) Exclude weak ciphers in SSLFactory through ssl-server.xml

2016-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184103#comment-15184103
 ] 

Hadoop QA commented on HADOOP-12886:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 31s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 1s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m 1s {color} 
| {color:red} root-jdk1.8.0_74 with JDK v1.8.0_74 generated 1 new + 737 
unchanged - 1 fixed = 738 total (was 738) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 1 
new + 9 unchanged - 0 fixed = 10 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 18s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 1s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 14s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_95 Failed junit tests | hadoop.net.TestClusterTopology |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791384/HADOOP-12886.001.patch
 |

[jira] [Updated] (HADOOP-12899) External distribution stitching scripts do not work correctly on Windows.

2016-03-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12899:
---
Description: In HADOOP-12850, we pulled the dist-layout-stitching and 
dist-tar-stitching scripts out of hadoop-dist/pom.xml and into external files.  
It appears this change is not working correctly on Windows.  (was: In 
HADOOP-12850, we pulled the dist-layout-stitching script out of 
hadoop-dist/pom.xml and into an external script.  It appears this change is not 
working correctly on Windows.)

> External distribution stitching scripts do not work correctly on Windows.
> -
>
> Key: HADOOP-12899
> URL: https://issues.apache.org/jira/browse/HADOOP-12899
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HADOOP-12899.001.patch
>
>
> In HADOOP-12850, we pulled the dist-layout-stitching and dist-tar-stitching 
> scripts out of hadoop-dist/pom.xml and into external files.  It appears this 
> change is not working correctly on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12899) External distribution stitching scripts do not work correctly on Windows.

2016-03-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12899:
---
Summary: External distribution stitching scripts do not work correctly on 
Windows.  (was: External dist-layout-stitching script does not work correctly 
on Windows.)

> External distribution stitching scripts do not work correctly on Windows.
> -
>
> Key: HADOOP-12899
> URL: https://issues.apache.org/jira/browse/HADOOP-12899
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HADOOP-12899.001.patch
>
>
> In HADOOP-12850, we pulled the dist-layout-stitching script out of 
> hadoop-dist/pom.xml and into an external script.  It appears this change is 
> not working correctly on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12899) External dist-layout-stitching script does not work correctly on Windows.

2016-03-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12899:
---
Status: Patch Available  (was: Open)

> External dist-layout-stitching script does not work correctly on Windows.
> -
>
> Key: HADOOP-12899
> URL: https://issues.apache.org/jira/browse/HADOOP-12899
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HADOOP-12899.001.patch
>
>
> In HADOOP-12850, we pulled the dist-layout-stitching script out of 
> hadoop-dist/pom.xml and into an external script.  It appears this change is 
> not working correctly on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12899) External dist-layout-stitching script does not work correctly on Windows.

2016-03-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184081#comment-15184081
 ] 

Chris Nauroth edited comment on HADOOP-12899 at 3/8/16 12:24 AM:
-

I'm attaching patch v001.  This goes back to the strategy of invoking the 
interpreter directly.  I tested this successfully on Windows and Linux.

I also changed the bang lines within the scripts.  Technically, that's not 
really necessary for the fix, but I didn't want people reading it to mistakenly 
think that it's routing through env during the build.


was (Author: cnauroth):
I'm attaching patch v001.  This goes back to the strategy of invoking the 
interpreter directly.  I tested this successfully on Windows and Linux.
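The approach described — naming the interpreter explicitly instead of depending on the script's bang line — can be sketched as follows. This is an illustration only (class and script names are hypothetical, not the contents of the attached patch): on Windows the OS does not honor shebang lines, so the portable move is to pass the script as an argument to the interpreter.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Sketch: launch a stitching script by naming the interpreter explicitly,
// so the OS never has to interpret the script's bang line. This is what
// makes the invocation portable to Windows, where shebangs are not honored.
public class ScriptRunner {

    // Build the argv: interpreter first, then the script path.
    public static List<String> command(String interpreter, String script) {
        return Arrays.asList(interpreter, script);
    }

    // Run the script and wait for it; never relies on the executable bit
    // or the shebang line of the script itself.
    public static int run(String interpreter, String script)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(command(interpreter, script));
        pb.inheritIO();
        return pb.start().waitFor();
    }
}
```

In a Maven build the equivalent would be configuring the exec plugin to run, e.g., `bash dist-layout-stitching` rather than executing the script file directly.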

> External dist-layout-stitching script does not work correctly on Windows.
> -
>
> Key: HADOOP-12899
> URL: https://issues.apache.org/jira/browse/HADOOP-12899
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HADOOP-12899.001.patch
>
>
> In HADOOP-12850, we pulled the dist-layout-stitching script out of 
> hadoop-dist/pom.xml and into an external script.  It appears this change is 
> not working correctly on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12899) External dist-layout-stitching script does not work correctly on Windows.

2016-03-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12899:
---
Attachment: HADOOP-12899.001.patch

I'm attaching patch v001.  This goes back to the strategy of invoking the 
interpreter directly.  I tested this successfully on Windows and Linux.

> External dist-layout-stitching script does not work correctly on Windows.
> -
>
> Key: HADOOP-12899
> URL: https://issues.apache.org/jira/browse/HADOOP-12899
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
> Attachments: HADOOP-12899.001.patch
>
>
> In HADOOP-12850, we pulled the dist-layout-stitching script out of 
> hadoop-dist/pom.xml and into an external script.  It appears this change is 
> not working correctly on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-03-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184079#comment-15184079
 ] 

Kai Zheng commented on HADOOP-12579:


Note HADOOP-12819 is ready for review. Once it's in, I think the remaining 
test migration work, plus the removal of the old engine, can be done here in a 
patch of reasonable size. 

> Deprecate and remove WriteableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
> Attachments: HADOOP-12579-v1.patch
>
>
> The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that it can lead to security 
> vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The current implementation has migrated from {{WriteableRPCEngine}} to 
> {{ProtobufRPCEngine}} now. This jira proposes to deprecate 
> {{WriteableRPCEngine}} in branch-2 and to remove it in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2016-03-07 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HADOOP-11820 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789842/Y1.patch |
| JIRA Issue | HADOOP-11820 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8714/console |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

)

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: 1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2016-03-07 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: (was: Y1.patch)

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: 1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2016-03-07 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: (was: YARN-3368.patch)

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: 1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2016-03-07 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: 1.patch

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: 1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12886) Exclude weak ciphers in SSLFactory through ssl-server.xml

2016-03-07 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184035#comment-15184035
 ] 

Zhe Zhang commented on HADOOP-12886:


Thanks Wei-Chiu. Patch LGTM overall. I just triggered Jenkins. A few minor comments:
# The empty-line change in {{init}} doesn't seem necessary.
# "LOG.debug("Disable cipher suite {}.", cipherName);" => "Disabling"?
# Can we have a unit test?

> Exclude weak ciphers in SSLFactory through ssl-server.xml
> -
>
> Key: HADOOP-12886
> URL: https://issues.apache.org/jira/browse/HADOOP-12886
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: Netty, datanode, security
> Attachments: HADOOP-12886.001.patch
>
>
> HADOOP-12668 added support to exclude weak ciphers in HttpServer2, which is 
> good for name nodes. But data node web UI is based on Netty, which uses 
> SSLFactory and does not read ssl-server.xml to exclude the ciphers.
> We should also add the same support for Netty for consistency.
> I will attach a full patch later.
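The consistency being asked for can be sketched as below: filter the enabled suite list against an exclude list read from configuration (e.g. ssl-server.xml), the way HttpServer2 does after HADOOP-12668. Class and method names here are illustrative, not the patch's actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: drop excluded (weak) cipher suites from the enabled set before
// handing the result to an SSLEngine. The exclude list would come from
// configuration such as ssl-server.xml.
public class CipherSuiteFilter {

    public static String[] excludeSuites(String[] enabled, List<String> excluded) {
        List<String> kept = new ArrayList<>();
        for (String suite : enabled) {
            if (!excluded.contains(suite)) {
                kept.add(suite);  // keep only suites not on the exclude list
            }
        }
        return kept.toArray(new String[0]);
    }
}
```

The resulting array would typically be applied with {{sslEngine.setEnabledCipherSuites(...)}} when SSLFactory creates the engine backing the Netty pipeline.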



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12903) IPC Server should allow suppressing exception logging by type, not log 'server too busy' messages

2016-03-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-12903:
---
Description: 
HADOOP-10597 added support for RPC congestion control by sending retriable 
'server too busy' exceptions to clients. 

However every backoff results in a log message. We've seen these log messages 
slow down the NameNode.
{code}
2016-03-07 15:02:23,272 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client 127.0.0.1 threw exception 
[org.apache.hadoop.ipc.RetriableException: Server is too busy.]
{code}

We already have a metric that tracks the number of backoff events. This log 
message adds nothing useful.

The IPC Server should also allow services to skip logging certain exception 
types altogether.

  was:
HADOOP-10597 added support for RPC congestion control by sending retriable 
'server too busy' exceptions to clients. 

However every backoff results in a log message. We've seen these log messages 
slow down the NameNode.
{code}
2016-03-07 15:02:23,272 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client 127.0.0.1 threw exception 
[org.apache.hadoop.ipc.RetriableException: Server is too busy.]
{code}

We already have a metric that tracks the number of backoff events. This log 
message adds nothing useful.


> IPC Server should allow suppressing exception logging by type, not log 
> 'server too busy' messages
> -
>
> Key: HADOOP-12903
> URL: https://issues.apache.org/jira/browse/HADOOP-12903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.2
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-12903.01.patch
>
>
> HADOOP-10597 added support for RPC congestion control by sending retriable 
> 'server too busy' exceptions to clients. 
> However every backoff results in a log message. We've seen these log messages 
> slow down the NameNode.
> {code}
> 2016-03-07 15:02:23,272 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 
> for port 8020: readAndProcess from client 127.0.0.1 threw exception 
> [org.apache.hadoop.ipc.RetriableException: Server is too busy.]
> {code}
> We already have a metric that tracks the number of backoff events. This log 
> message adds nothing useful.
> The IPC Server should also allow services to skip logging certain exception 
> types altogether.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12903) IPC Server should allow suppressing exception logging by type, not log 'server too busy' messages

2016-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184031#comment-15184031
 ] 

Hadoop QA commented on HADOOP-12903:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HADOOP-12903 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791863/HADOOP-12903.01.patch 
|
| JIRA Issue | HADOOP-12903 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8811/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> IPC Server should allow suppressing exception logging by type, not log 
> 'server too busy' messages
> -
>
> Key: HADOOP-12903
> URL: https://issues.apache.org/jira/browse/HADOOP-12903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.2
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-12903.01.patch
>
>
> HADOOP-10597 added support for RPC congestion control by sending retriable 
> 'server too busy' exceptions to clients. 
> However every backoff results in a log message. We've seen these log messages 
> slow down the NameNode.
> {code}
> 2016-03-07 15:02:23,272 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 
> for port 8020: readAndProcess from client 127.0.0.1 threw exception 
> [org.apache.hadoop.ipc.RetriableException: Server is too busy.]
> {code}
> We already have a metric that tracks the number of backoff events. This log 
> message adds nothing useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12903) IPC Server should allow suppressing exception logging by type, not log 'server too busy' messages

2016-03-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-12903:
---
Attachment: HADOOP-12903.01.patch

v01 patch adds {{Server#ExceptionsHandler}} support to avoid logging exceptions 
by class. No exceptions are added to the suppress list yet, but services such as 
the NameNode could add entries in the future.

Additionally, 'Server too busy' errors are skipped entirely.
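
The mechanism described above could look roughly like the sketch below: a set of exception class names that the server consults before emitting the per-call log line. The class and method names here are illustrative, not the actual Hadoop patch.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of an exceptions handler with a suppress list: callers register
// exception classes whose occurrences should not be logged per-call,
// and the server checks the list before writing the INFO line.
public class ExceptionsHandlerSketch {
  private final Set<String> suppressed = new HashSet<>();

  // Register exception classes to suppress from per-call logging.
  public void addSuppressedLoggingExceptions(Class<?>... exceptions) {
    for (Class<?> e : exceptions) {
      suppressed.add(e.getName());
    }
  }

  // The server would consult this before logging a thrown exception.
  public boolean isSuppressedLog(Class<?> t) {
    return suppressed.contains(t.getName());
  }

  public static void main(String[] args) {
    ExceptionsHandlerSketch h = new ExceptionsHandlerSketch();
    h.addSuppressedLoggingExceptions(IllegalStateException.class);
    System.out.println(h.isSuppressedLog(IllegalStateException.class)); // true
    System.out.println(h.isSuppressedLog(RuntimeException.class));      // false
  }
}
```

Metrics (like the existing backoff counter) would remain the visible signal for suppressed exception types.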

> IPC Server should allow suppressing exception logging by type, not log 
> 'server too busy' messages
> -
>
> Key: HADOOP-12903
> URL: https://issues.apache.org/jira/browse/HADOOP-12903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.2
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-12903.01.patch
>
>
> HADOOP-10597 added support for RPC congestion control by sending retriable 
> 'server too busy' exceptions to clients. 
> However every backoff results in a log message. We've seen these log messages 
> slow down the NameNode.
> {code}
> 2016-03-07 15:02:23,272 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 
> for port 8020: readAndProcess from client 127.0.0.1 threw exception 
> [org.apache.hadoop.ipc.RetriableException: Server is too busy.]
> {code}
> We already have a metric that tracks the number of backoff events. This log 
> message adds nothing useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12903) IPC Server should allow suppressing exception logging by type, not log 'server too busy' messages

2016-03-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-12903:
---
Status: Patch Available  (was: Open)

> IPC Server should allow suppressing exception logging by type, not log 
> 'server too busy' messages
> -
>
> Key: HADOOP-12903
> URL: https://issues.apache.org/jira/browse/HADOOP-12903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.2
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-12903.01.patch
>
>
> HADOOP-10597 added support for RPC congestion control by sending retriable 
> 'server too busy' exceptions to clients. 
> However every backoff results in a log message. We've seen these log messages 
> slow down the NameNode.
> {code}
> 2016-03-07 15:02:23,272 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 
> for port 8020: readAndProcess from client 127.0.0.1 threw exception 
> [org.apache.hadoop.ipc.RetriableException: Server is too busy.]
> {code}
> We already have a metric that tracks the number of backoff events. This log 
> message adds nothing useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12886) Exclude weak ciphers in SSLFactory through ssl-server.xml

2016-03-07 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-12886:
---
Status: Patch Available  (was: Open)

> Exclude weak ciphers in SSLFactory through ssl-server.xml
> -
>
> Key: HADOOP-12886
> URL: https://issues.apache.org/jira/browse/HADOOP-12886
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: Netty, datanode, security
> Attachments: HADOOP-12886.001.patch
>
>
> HADOOP-12668 added support to exclude weak ciphers in HttpServer2, which is 
> good for name nodes. But data node web UI is based on Netty, which uses 
> SSLFactory and does not read ssl-server.xml to exclude the ciphers.
> We should also add the same support for Netty for consistency.
> I will attach a full patch later.
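
For context, HADOOP-12668 drives the HttpServer2 exclusion through ssl-server.xml; the Netty-based datanode web server would read the same property for consistency. A sketch of the relevant fragment (the cipher suite names below are examples, not a recommended list):

```xml
<!-- ssl-server.xml: cipher suites listed here are excluded from the
     SSL handshake; the values shown are illustrative examples. -->
<property>
  <name>ssl.server.exclude.cipher.list</name>
  <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,
  SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
  SSL_RSA_WITH_DES_CBC_SHA,
  SSL_DHE_RSA_WITH_DES_CBC_SHA</value>
</property>
```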



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12903) IPC Server should allow suppressing exception logging by type, not log 'server too busy' messages

2016-03-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-12903:
---
Summary: IPC Server should allow suppressing exception logging by type, not 
log 'server too busy' messages  (was: Add IPC Server support for suppressing 
exceptions by type, suppress 'server too busy' messages)

> IPC Server should allow suppressing exception logging by type, not log 
> 'server too busy' messages
> -
>
> Key: HADOOP-12903
> URL: https://issues.apache.org/jira/browse/HADOOP-12903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.2
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> HADOOP-10597 added support for RPC congestion control by sending retriable 
> 'server too busy' exceptions to clients. 
> However every backoff results in a log message. We've seen these log messages 
> slow down the NameNode.
> {code}
> 2016-03-07 15:02:23,272 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 
> for port 8020: readAndProcess from client 127.0.0.1 threw exception 
> [org.apache.hadoop.ipc.RetriableException: Server is too busy.]
> {code}
> We already have a metric that tracks the number of backoff events. This log 
> message adds nothing useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12903) Add IPC Server support for suppressing exceptions by type, suppress 'server too busy' messages

2016-03-07 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12903:
--

 Summary: Add IPC Server support for suppressing exceptions by 
type, suppress 'server too busy' messages
 Key: HADOOP-12903
 URL: https://issues.apache.org/jira/browse/HADOOP-12903
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.7.2
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


HADOOP-10597 added support for RPC congestion control by sending retriable 
'server too busy' exceptions to clients. 

However every backoff results in a log message. We've seen these log messages 
slow down the NameNode.
{code}
2016-03-07 15:02:23,272 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client 127.0.0.1 threw exception 
[org.apache.hadoop.ipc.RetriableException: Server is too busy.]
{code}

We already have a metric that tracks the number of backoff events. This log 
message adds nothing useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12789) log classpath of ApplicationClassLoader at INFO level

2016-03-07 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183930#comment-15183930
 ] 

Sangjin Lee commented on HADOOP-12789:
--

Any takers? This is a pretty trivial change.

> log classpath of ApplicationClassLoader at INFO level
> -
>
> Key: HADOOP-12789
> URL: https://issues.apache.org/jira/browse/HADOOP-12789
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.6.3
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Attachments: HADOOP-12789.01.patch
>
>
> Currently {{ApplicationClassLoader}} does not log the classpath at the INFO 
> level although the system classes are logged at that level. Knowing exactly 
> what classpath {{ApplicationClassLoader}} has is a critical piece of 
> information for troubleshooting. We should log it at the INFO level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12895) SSLFactory#createSSLSocketFactory exception message is wrong

2016-03-07 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183878#comment-15183878
 ] 

Andrew Wang commented on HADOOP-12895:
--

Hi Wei-chiu, good spot here. Do you think it's better to make the message say 
"Factory is not in &lt;mode&gt; mode" or "Factory is not in &lt;mode&gt; mode. Actual mode is 
&lt;actual mode&gt;." so it matches up with the logic of the if check? We may never add another 
Mode to the enum, but it seems like good practice.
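
A minimal sketch of that suggested message fix, with the factory's actual mode included in the exception text (the class, enum, and method names here are illustrative stand-ins for SSLFactory, not its real API):

```java
// Sketch: the exception states which mode is required and which mode
// the factory is actually in, matching the logic of the if check.
public class SslModeCheckSketch {
  enum Mode { CLIENT, SERVER }

  private final Mode mode;

  SslModeCheckSketch(Mode mode) { this.mode = mode; }

  void checkClientMode() {
    if (mode != Mode.CLIENT) {
      throw new IllegalStateException(
          "Factory is not in CLIENT mode. Actual mode is " + mode);
    }
  }

  public static void main(String[] args) {
    try {
      new SslModeCheckSketch(Mode.SERVER).checkClientMode();
    } catch (IllegalStateException e) {
      // prints: Factory is not in CLIENT mode. Actual mode is SERVER
      System.out.println(e.getMessage());
    }
  }
}
```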

> SSLFactory#createSSLSocketFactory exception message is wrong
> 
>
> Key: HADOOP-12895
> URL: https://issues.apache.org/jira/browse/HADOOP-12895
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
> Attachments: HADOOP-12895.001.patch, HADOOP-12895.002.patch
>
>
> If in SERVER mode, the following code should throw an exception indicating 
> the Factory is in SERVER mode, not in CLIENT mode. Otherwise, it could be 
> confusing.
> {code:title=SSLSocketFactory.java}
> public SSLSocketFactory createSSLSocketFactory()
> throws GeneralSecurityException, IOException {
> if (mode != Mode.CLIENT) {
>   throw new IllegalStateException("Factory is in CLIENT mode");
> }
> return context.getSocketFactory();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12902) JavaDocs for SignerSecretProvider are out-of-date in AuthenticationFilter

2016-03-07 Thread Robert Kanter (JIRA)
Robert Kanter created HADOOP-12902:
--

 Summary: JavaDocs for SignerSecretProvider are out-of-date in 
AuthenticationFilter
 Key: HADOOP-12902
 URL: https://issues.apache.org/jira/browse/HADOOP-12902
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Robert Kanter


The Javadocs in {{AuthenticationFilter}} say:
{noformat}
 * Out of the box it provides 3 signer secret provider implementations:
 * "string", "random", and "zookeeper"
{noformat}
However, the "string" implementation is no longer available because 
HADOOP-11748 moved it to be a test-only artifact.  This also doesn't mention 
anything about the file-backed secret provider ({{FileSignerSecretProvider}}).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11792) Remove all of the CHANGES.txt files

2016-03-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183829#comment-15183829
 ] 

Allen Wittenauer commented on HADOOP-11792:
---

FYI: HADOOP-12892

> Remove all of the CHANGES.txt files
> ---
>
> Key: HADOOP-11792
> URL: https://issues.apache.org/jira/browse/HADOOP-11792
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Andrew Wang
> Fix For: 2.8.0
>
>
> With the commit of HADOOP-11731, the CHANGES.txt files are now EOLed.  We 
> should remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12901) Add warning log when KMSClientProvider cannot create a connection to the KMS server

2016-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183813#comment-15183813
 ] 

Hudson commented on HADOOP-12901:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9439 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9439/])
HADOOP-12901. Add warning log when KMSClientProvider cannot create a (wang: rev 
391da36d93358038c50c15d91543f6c765fa0471)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java


> Add warning log when KMSClientProvider cannot create a connection to the KMS 
> server
> ---
>
> Key: HADOOP-12901
> URL: https://issues.apache.org/jira/browse/HADOOP-12901
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12901.01.patch
>
>
> Currently, when the client fails to connect, one can only see logs at debug 
> level with a vague {{SocketTimeoutException}}. (Luckily, in the env where I saw 
> this, HADOOP-10015 was not present, so the last WARN log was printed.)
> {noformat}
> 2015-12-17 12:28:01,410 DEBUG 
> org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:hdfs 
> (auth:SIMPLE) 
> from:org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:477)
>  
> 2015-12-17 12:28:01,469 WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hdfs (auth:SIMPLE) 
> cause:java.net.SocketTimeoutException: connect timed out 
> {noformat}
> The issue was then fixed by opening up the KMS server port.
> This jira proposes adding specific logging in this case, to help users 
> easily identify the problem.
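
The shape of the fix could be sketched as below: when the connection attempt fails, surface a WARN naming the KMS endpoint before propagating the error, rather than leaving only the DEBUG-level trace. The connector abstraction, method names, and message text are illustrative assumptions, not the committed patch.

```java
public class KmsConnectSketch {
  // Hypothetical stand-in for KMSClientProvider's connection setup.
  interface Connector {
    void connect() throws java.io.IOException;
  }

  // On failure, emit a WARN that names the KMS endpoint, then report
  // the failure to the caller instead of swallowing it at DEBUG level.
  static String tryConnect(String kmsUrl, Connector c) {
    try {
      c.connect();
      return null; // success: nothing to warn about
    } catch (java.io.IOException e) {
      String warn = "WARN: failed to connect to KMS at " + kmsUrl
          + ": " + e.getMessage();
      System.err.println(warn);
      return warn;
    }
  }

  public static void main(String[] args) {
    tryConnect("https://kms.example.com:9600/kms",
        () -> { throw new java.io.IOException("connect timed out"); });
  }
}
```

With a message like this, a blocked KMS port shows up immediately in the client log instead of as a bare {{SocketTimeoutException}}.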



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12892) fix/rewrite create-release

2016-03-07 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12892:
--
Attachment: HADOOP-12892.00.patch

-00:
* dev-support/create-release.sh renamed to dev-support/bin/create-release
* add docker support
* native is now optional
* add more native build options to match docker image
* replace changes and release notes handling to use yetus
* add support for gpg signing
* add some files to .gitignore
* detect which jvm is in use in the pom and use that version in docker mode 
(this fixes a bug with mvn site:stage when using certain versions of JDK8?)
* set a location for log output and artifacts output to make the screen have a 
bit more sanity when running interactively
* pull lots of shell code out of poms
* properly parameterize the shell code rather than be dependent upon maven 
replacement
* massive shell code cleanup
* upgrade yetus to 0.2.0 so that rdm works as expected
* properly set defaults for various native lib vars

TODO:
* custom maven cache, esp important when building on jenkins since there is a 
race condition otherwise (which likely means our builds are very suspect)
* set custom hadoop version
* test on windows
* 

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12892.00.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12892) fix/rewrite create-release

2016-03-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183820#comment-15183820
 ] 

Allen Wittenauer edited comment on HADOOP-12892 at 3/7/16 10:07 PM:


-00:
* dev-support/create-release.sh renamed to dev-support/bin/create-release
* add docker support
* native is now optional
* add more native build options to match docker image
* replace changes and release notes handling to use yetus
* add support for gpg signing
* add some files to .gitignore
* detect which jvm is in use in the pom and use that version in docker mode 
(this fixes a bug with mvn site:stage when using certain versions of JDK8?)
* set a location for log output and artifacts output to make the screen have a 
bit more sanity when running interactively
* pull lots of shell code out of poms
* properly parameterize the shell code rather than be dependent upon maven 
replacement
* massive shell code cleanup
* upgrade yetus to 0.2.0 so that rdm works as expected
* properly set defaults for various native lib vars

TODO:
* custom maven cache, esp important when building on jenkins since there is a 
race condition otherwise (which likely means our builds are very suspect)
* set custom hadoop version
* test bundling
* test on windows


was (Author: aw):
-00:
* dev-support/create-release.sh renamed to dev-support/bin/create-release
* add docker support
* native is now optional
* add more native build options to match docker image
* replace changes and release notes handling to use yetus
* add support for gpg signing
* add some files to .gitignore
* detect which jvm is in use in the pom and use that version in docker mode 
(this fixes a bug with mvn site:stage when using certain versions of JDK8?)
* set a location for log output and artifacts output to make the screen have a 
bit more sanity when running interactively
* pull lots of shell code out of poms
* properly parameterize the shell code rather than be dependent upon maven 
replacement
* massive shell code cleanup
* upgrade yetus to 0.2.0 so that rdm works as expected
* properly set defaults for various native lib vars

TODO:
* custom maven cache, esp important when building on jenkins since there is a 
race condition otherwise (which likely means our builds are very suspect)
* set custom hadoop version
* test on windows
* 

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12892.00.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12892) fix/rewrite create-release

2016-03-07 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12892:
--
Status: Patch Available  (was: Open)

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12892.00.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12901) Add warning log when KMSClientProvider cannot create a connection to the KMS server

2016-03-07 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183814#comment-15183814
 ] 

Xiao Chen commented on HADOOP-12901:


Thanks Andrew for the prompt review and commit. :)

> Add warning log when KMSClientProvider cannot create a connection to the KMS 
> server
> ---
>
> Key: HADOOP-12901
> URL: https://issues.apache.org/jira/browse/HADOOP-12901
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12901.01.patch
>
>
> Currently, when the client fails to connect, one can only see logs at debug 
> level with a vague {{SocketTimeoutException}}. (Luckily, in the env where I saw 
> this, HADOOP-10015 was not present, so the last WARN log was printed.)
> {noformat}
> 2015-12-17 12:28:01,410 DEBUG 
> org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:hdfs 
> (auth:SIMPLE) 
> from:org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:477)
>  
> 2015-12-17 12:28:01,469 WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hdfs (auth:SIMPLE) 
> cause:java.net.SocketTimeoutException: connect timed out 
> {noformat}
> The issue was then fixed by opening up the KMS server port.
> This jira proposes adding specific logging in this case, to help users 
> easily identify the problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12901) Add warning log when KMSClientProvider cannot create a connection to the KMS server

2016-03-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12901:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk, branch-2, branch-2.8. Thanks Xiao for the find and fix!

> Add warning log when KMSClientProvider cannot create a connection to the KMS 
> server
> ---
>
> Key: HADOOP-12901
> URL: https://issues.apache.org/jira/browse/HADOOP-12901
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12901.01.patch
>
>
> Currently, when the client fails to connect, one can only see logs at debug 
> level with a vague {{SocketTimeoutException}}. (Luckily, in the env where I saw 
> this, HADOOP-10015 was not present, so the last WARN log was printed.)
> {noformat}
> 2015-12-17 12:28:01,410 DEBUG 
> org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:hdfs 
> (auth:SIMPLE) 
> from:org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:477)
>  
> 2015-12-17 12:28:01,469 WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hdfs (auth:SIMPLE) 
> cause:java.net.SocketTimeoutException: connect timed out 
> {noformat}
> The issue was then fixed by opening up the KMS server port.
> This jira proposes adding specific logging in this case, to help users 
> easily identify the problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12901) Add warning log when KMSClientProvider cannot create a connection to the KMS server

2016-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183778#comment-15183778
 ] 

Hadoop QA commented on HADOOP-12901:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 42s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 8s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 45s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791815/HADOOP-12901.01.patch 
|
| JIRA Issue | HADOOP-12901 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux db885f2b7b9f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12855) Add option to disable JVMPauseMonitor across services

2016-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183738#comment-15183738
 ] 

Hadoop QA commented on HADOOP-12855:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 35s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 38s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 55s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791808/HADOOP-12855-003.patch
 |
| JIRA Issue | HADOOP-12855 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d80e33693080 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12901) Add warning log when KMSClientProvider cannot create a connection to the KMS server

2016-03-07 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183690#comment-15183690
 ] 

Andrew Wang commented on HADOOP-12901:
--

+1 LGTM pending Jenkins, thanks Xiao!

> Add warning log when KMSClientProvider cannot create a connection to the KMS 
> server
> ---
>
> Key: HADOOP-12901
> URL: https://issues.apache.org/jira/browse/HADOOP-12901
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HADOOP-12901.01.patch
>
>
> Currently, when a connection fails, one can only see logs at debug level with a 
> vague {{SocketTimeoutException}}. (Luckily, in the env where I saw this, 
> HADOOP-10015 was not present, so the last WARN log was printed.)
> {noformat}
> 2015-12-17 12:28:01,410 DEBUG 
> org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:hdfs 
> (auth:SIMPLE) 
> from:org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:477)
>  
> 2015-12-17 12:28:01,469 WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hdfs (auth:SIMPLE) 
> cause:java.net.SocketTimeoutException: connect timed out 
> {noformat}
> The issue was then fixed by opening up the KMS server port.
> This jira proposes adding specific logging in this case, to help users 
> easily identify the problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12901) Add warning log when KMSClientProvider cannot create a connection to the KMS server

2016-03-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12901:
---
Status: Patch Available  (was: Open)

> Add warning log when KMSClientProvider cannot create a connection to the KMS 
> server
> ---
>
> Key: HADOOP-12901
> URL: https://issues.apache.org/jira/browse/HADOOP-12901
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HADOOP-12901.01.patch
>
>
> Currently, when a connection fails, one can only see logs at debug level with a 
> vague {{SocketTimeoutException}}. (Luckily, in the env where I saw this, 
> HADOOP-10015 was not present, so the last WARN log was printed.)
> {noformat}
> 2015-12-17 12:28:01,410 DEBUG 
> org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:hdfs 
> (auth:SIMPLE) 
> from:org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:477)
>  
> 2015-12-17 12:28:01,469 WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hdfs (auth:SIMPLE) 
> cause:java.net.SocketTimeoutException: connect timed out 
> {noformat}
> The issue was then fixed by opening up the KMS server port.
> This jira proposes adding specific logging in this case, to help users 
> easily identify the problem.





[jira] [Updated] (HADOOP-12901) Add warning log when KMSClientProvider cannot create a connection to the KMS server

2016-03-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12901:
---
Attachment: HADOOP-12901.01.patch

Patch 1 adds a {{WARN}} log on this. No stacktrace is printed in the log here, 
because IMO just the host information should be enough.

I verified the logging by programmatically injecting a 
{{SocketTimeoutException}} in the {{doAs}} block, and running a {{TestKMS}} 
test case.
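As a rough illustration of the kind of WARN message the patch describes, the sketch below builds a host-and-port-only failure message with no stack trace. The class, method, and message text here are hypothetical stand-ins, not the actual patch code:

```java
import java.net.SocketTimeoutException;

public class KmsConnectWarnSketch {
    // Builds the warning emitted when a connection to the KMS cannot be
    // created; only host/port information is included, no stack trace,
    // since the host alone should identify the problem.
    static String connectionFailureMessage(String host, int port, Exception cause) {
        return "Failed to connect to KMS at " + host + ":" + port
                + " : " + cause.getMessage();
    }

    public static void main(String[] args) {
        // In the real client this would sit in the catch block around the
        // connection attempt and be passed to LOG.warn(...).
        System.out.println(connectionFailureMessage("kms.example.com", 16000,
                new SocketTimeoutException("connect timed out")));
    }
}
```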

> Add warning log when KMSClientProvider cannot create a connection to the KMS 
> server
> ---
>
> Key: HADOOP-12901
> URL: https://issues.apache.org/jira/browse/HADOOP-12901
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HADOOP-12901.01.patch
>
>
> Currently, when a connection fails, one can only see logs at debug level with a 
> vague {{SocketTimeoutException}}. (Luckily, in the env where I saw this, 
> HADOOP-10015 was not present, so the last WARN log was printed.)
> {noformat}
> 2015-12-17 12:28:01,410 DEBUG 
> org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:hdfs 
> (auth:SIMPLE) 
> from:org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:477)
>  
> 2015-12-17 12:28:01,469 WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:hdfs (auth:SIMPLE) 
> cause:java.net.SocketTimeoutException: connect timed out 
> {noformat}
> The issue was then fixed by opening up the KMS server port.
> This jira proposes adding specific logging in this case, to help users 
> easily identify the problem.





[jira] [Commented] (HADOOP-12888) HDFS client requires compromising permission when running under JVM security manager

2016-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183584#comment-15183584
 ] 

Hadoop QA commented on HADOOP-12888:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 41s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 2 
new + 37 unchanged - 0 fixed = 39 total (was 37) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 57s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 17s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 32s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791802/HADOOP-12888-002.patch
 |
| JIRA Issue | HADOOP-12888 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 

[jira] [Created] (HADOOP-12901) Add warning log when KMSClientProvider cannot create a connection to the KMS server

2016-03-07 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-12901:
--

 Summary: Add warning log when KMSClientProvider cannot create a 
connection to the KMS server
 Key: HADOOP-12901
 URL: https://issues.apache.org/jira/browse/HADOOP-12901
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Xiao Chen
Assignee: Xiao Chen
Priority: Minor


Currently, when a connection fails, one can only see logs at debug level with a 
vague {{SocketTimeoutException}}. (Luckily, in the env where I saw this, HADOOP-10015 
was not present, so the last WARN log was printed.)
{noformat}
2015-12-17 12:28:01,410 DEBUG org.apache.hadoop.security.UserGroupInformation: 
PrivilegedAction as:hdfs (auth:SIMPLE) 
from:org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:477)
 
2015-12-17 12:28:01,469 WARN org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:hdfs (auth:SIMPLE) 
cause:java.net.SocketTimeoutException: connect timed out 
{noformat}
The issue was then fixed by opening up the KMS server port.

This jira proposes adding specific logging in this case, to help users 
easily identify the problem.





[jira] [Updated] (HADOOP-12855) Add option to disable JVMPauseMonitor across services

2016-03-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-12855:

Attachment: HADOOP-12855-003.patch

Patch 003:
* Use getAndIncrement and decrementAndGet
* Back out unrelated checkstyle fix: removing an unused import
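The getAndIncrement/decrementAndGet pairing can be sketched as a reference count that starts a shared monitor for the first user and stops it with the last. Class and field names here are illustrative, not the patch's actual code:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedMonitorRefCount {
    private final AtomicInteger refCount = new AtomicInteger();
    boolean monitorRunning;

    void serviceStart() {
        // getAndIncrement returns the previous count, so only the first
        // service to start (previous count 0) launches the monitor.
        if (refCount.getAndIncrement() == 0) {
            monitorRunning = true; // stands in for pauseMonitor.start()
        }
    }

    void serviceStop() {
        // decrementAndGet returns the new count, so only the last service
        // to stop (new count 0) shuts the monitor down.
        if (refCount.decrementAndGet() == 0) {
            monitorRunning = false; // stands in for pauseMonitor.stop()
        }
    }

    public static void main(String[] args) {
        SharedMonitorRefCount monitor = new SharedMonitorRefCount();
        monitor.serviceStart();  // first service: monitor starts
        monitor.serviceStart();  // second service: reuses the running monitor
        monitor.serviceStop();   // monitor keeps running, count is now 1
        monitor.serviceStop();   // last service: monitor stops
        System.out.println("monitorRunning=" + monitor.monitorRunning);
    }
}
```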

> Add option to disable JVMPauseMonitor across services
> -
>
> Key: HADOOP-12855
> URL: https://issues.apache.org/jira/browse/HADOOP-12855
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, test
>Affects Versions: 2.8.0
> Environment: JVMs with miniHDFS and miniYarn clusters
>Reporter: Steve Loughran
>Assignee: John Zhuge
> Attachments: HADOOP-12855-001.patch, HADOOP-12855-002.patch, 
> HADOOP-12855-003.patch
>
>
> Now that the YARN and HDFS services automatically start a JVM pause monitor, 
> if you start up the mini HDFS and YARN clusters with a history server, you are 
> spinning off 5+ threads, all looking for JVM pauses and all printing things out 
> when one happens.
> We do not need these monitors in minicluster testing; they merely add load 
> and noise to tests.
> Rather than retrofit new options everywhere, how about having a 
> "jvm.pause.monitor.enabled" flag (default true), which, when set, starts off 
> the monitor thread.
> That way, the existing code is unchanged and there is always a JVM pause monitor 
> for the various services; it just isn't spinning up threads.
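The proposed flag could gate monitor startup roughly as follows. The key name and default come from the proposal above; the Configuration lookup is simplified to a plain Map for this sketch:

```java
import java.util.HashMap;
import java.util.Map;

public class PauseMonitorFlag {
    // Hypothetical stand-in for a Hadoop Configuration lookup of the
    // flag proposed in this issue.
    static final String KEY = "jvm.pause.monitor.enabled";

    static boolean shouldStartMonitor(Map<String, String> conf) {
        // Default true preserves current behavior; minicluster tests can
        // set the flag to false so no monitor threads are spun up.
        return Boolean.parseBoolean(conf.getOrDefault(KEY, "true"));
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(shouldStartMonitor(conf)); // default: true
        conf.put(KEY, "false");
        System.out.println(shouldStartMonitor(conf)); // disabled: false
    }
}
```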





[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-03-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183542#comment-15183542
 ] 

Chris Nauroth commented on HADOOP-12892:


bq. I'll throw some comments in the code so that others aren't confused by it.

Great idea.  Much appreciated.

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> create-release needs some major surgery.





[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-03-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183540#comment-15183540
 ] 

Allen Wittenauer commented on HADOOP-12892:
---

OK, that kind of makes sense now.  I missed that Windows wasn't setting 
LIB_DIR.  I'll throw some comments in the code so that others aren't confused 
by it.  Thanks!

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> create-release needs some major surgery.





[jira] [Commented] (HADOOP-12899) External dist-layout-stitching script does not work correctly on Windows.

2016-03-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183530#comment-15183530
 ] 

Allen Wittenauer commented on HADOOP-12899:
---

It would be good to look up bash from the environment, but honestly it's an 
extreme edge case.  So if it needs to be hard-coded, then so be it.  My big 
goal was mainly to pull large chunks of shell out of the pom files so that they 
could get some love from shellcheck and to make them easier to debug.  I'm in 
the process of doing the same thing with HADOOP-12892. 

> External dist-layout-stitching script does not work correctly on Windows.
> -
>
> Key: HADOOP-12899
> URL: https://issues.apache.org/jira/browse/HADOOP-12899
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
>
> In HADOOP-12850, we pulled the dist-layout-stitching script out of 
> hadoop-dist/pom.xml and into an external script.  It appears this change is 
> not working correctly on Windows.





[jira] [Commented] (HADOOP-12899) External dist-layout-stitching script does not work correctly on Windows.

2016-03-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183508#comment-15183508
 ] 

Chris Nauroth commented on HADOOP-12899:


On Windows, it can't rely on the bang path or even really a .sh extension.  
Before HADOOP-12850, this worked by having exec-maven-plugin invoke bash 
directly in the {{executable}} attribute.  I did a quick hack change locally to 
confirm that switching back to direct invocation of bash would work.

[~aw], I assume you want to retain the behavior of invoking through 
/usr/bin/env so that it can look up bash from the user environment, right?  Can 
you confirm?  If we want to keep that, then I think it puts me on the path of 
setting up a separate native-win profile for the special case in 
hadoop-dist/pom.xml.

> External dist-layout-stitching script does not work correctly on Windows.
> -
>
> Key: HADOOP-12899
> URL: https://issues.apache.org/jira/browse/HADOOP-12899
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
>
> In HADOOP-12850, we pulled the dist-layout-stitching script out of 
> hadoop-dist/pom.xml and into an external script.  It appears this change is 
> not working correctly on Windows.





[jira] [Created] (HADOOP-12900) distcp -delete should show a counter of files deleted

2016-03-07 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-12900:
---

 Summary: distcp -delete should show a counter of files deleted
 Key: HADOOP-12900
 URL: https://issues.apache.org/jira/browse/HADOOP-12900
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools/distcp
Affects Versions: 2.6.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


Command {{hadoop distcp -delete -update src dst}} does not show any counter 
information about the files deleted.

Probably needs a new counter section "DistCp Copy Committer Counters" since the 
CopyCommitter deletes the missing files. 





[jira] [Commented] (HADOOP-12811) Change kms server port number which conflicts with HMaster port number

2016-03-07 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183487#comment-15183487
 ] 

Xiao Chen commented on HADOOP-12811:


Thanks Yufeng for creating this. I've linked HDFS-9427 which will change HDFS 
default ports.

I'd like to work on this, but I'm thinking of waiting for HDFS-9427 to be agreed 
upon and committed, to avoid conflicts. Then I plan to change the default kms port 
to be in the same range (e.g. 9170 if HDFS-9427 goes with 9070).

The only problem I can think of with this change is backwards compatibility. Added a 
label to this jira.

> Change kms server port number which conflicts with HMaster port number
> --
>
> Key: HADOOP-12811
> URL: https://issues.apache.org/jira/browse/HADOOP-12811
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.1, 2.7.0, 2.7.1, 2.7.2, 2.6.2, 2.6.3
>Reporter: Yufeng Jiang
>  Labels: incompatible, patch
>
> HBase's HMaster port number conflicts with the Hadoop KMS port number; both 
> use 16000.
> There might be use cases where users need KMS and HBase present on the same 
> cluster: HBase can encrypt its HFiles, but users might need KMS to encrypt 
> other HDFS directories.
> Users would have to manually override the default port of either application 
> on their cluster. It would be nice to have different default ports so KMS and 
> HBase could naturally coexist.





[jira] [Assigned] (HADOOP-12811) Change kms server port number which conflicts with HMaster port number

2016-03-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reassigned HADOOP-12811:
--

Assignee: Xiao Chen

> Change kms server port number which conflicts with HMaster port number
> --
>
> Key: HADOOP-12811
> URL: https://issues.apache.org/jira/browse/HADOOP-12811
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.1, 2.7.0, 2.7.1, 2.7.2, 2.6.2, 2.6.3
>Reporter: Yufeng Jiang
>Assignee: Xiao Chen
>  Labels: incompatible, patch
>
> HBase's HMaster port number conflicts with the Hadoop KMS port number; both 
> use 16000.
> There might be use cases where users need KMS and HBase present on the same 
> cluster: HBase can encrypt its HFiles, but users might need KMS to encrypt 
> other HDFS directories.
> Users would have to manually override the default port of either application 
> on their cluster. It would be nice to have different default ports so KMS and 
> HBase could naturally coexist.





[jira] [Commented] (HADOOP-12811) Change kms server port number which conflicts with HMaster port number

2016-03-07 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183486#comment-15183486
 ] 

Jonathan Hsieh commented on HADOOP-12811:
-

I'd suggest moving this into the 9xxx range, where the rest of the hadoop 
services are being moved by HDFS-9427.

> Change kms server port number which conflicts with HMaster port number
> --
>
> Key: HADOOP-12811
> URL: https://issues.apache.org/jira/browse/HADOOP-12811
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.1, 2.7.0, 2.7.1, 2.7.2, 2.6.2, 2.6.3
>Reporter: Yufeng Jiang
>  Labels: incompatible, patch
>
> HBase's HMaster port number conflicts with the Hadoop KMS port number; both 
> use 16000.
> There might be use cases where users need KMS and HBase present on the same 
> cluster: HBase can encrypt its HFiles, but users might need KMS to encrypt 
> other HDFS directories.
> Users would have to manually override the default port of either application 
> on their cluster. It would be nice to have different default ports so KMS and 
> HBase could naturally coexist.





[jira] [Updated] (HADOOP-12811) Change kms server port number which conflicts with HMaster port number

2016-03-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12811:
---
Labels: incompatible patch  (was: patch)

> Change kms server port number which conflicts with HMaster port number
> --
>
> Key: HADOOP-12811
> URL: https://issues.apache.org/jira/browse/HADOOP-12811
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.1, 2.7.0, 2.7.1, 2.7.2, 2.6.2, 2.6.3
>Reporter: Yufeng Jiang
>  Labels: incompatible, patch
>
> HBase's HMaster port number conflicts with the Hadoop KMS port number; both 
> use 16000.
> There might be use cases where users need KMS and HBase present on the same 
> cluster: HBase can encrypt its HFiles, but users might need KMS to encrypt 
> other HDFS directories.
> Users would have to manually override the default port of either application 
> on their cluster. It would be nice to have different default ports so KMS and 
> HBase could naturally coexist.





[jira] [Commented] (HADOOP-12899) External dist-layout-stitching script does not work correctly on Windows.

2016-03-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183456#comment-15183456
 ] 

Allen Wittenauer commented on HADOOP-12899:
---

Wow, weird. I wonder if the problem is using /usr/bin/env as the bang path, 
given that other exec-maven-plugin bits work?

> External dist-layout-stitching script does not work correctly on Windows.
> -
>
> Key: HADOOP-12899
> URL: https://issues.apache.org/jira/browse/HADOOP-12899
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
>
> In HADOOP-12850, we pulled the dist-layout-stitching script out of 
> hadoop-dist/pom.xml and into an external script.  It appears this change is 
> not working correctly on Windows.





[jira] [Updated] (HADOOP-12888) HDFS client requires compromising permission when running under JVM security manager

2016-03-07 Thread Costin Leau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Costin Leau updated HADOOP-12888:
-
Attachment: HADOOP-12888-002.patch

Fix checkstyle errors.

> HDFS client requires compromising permission when running under JVM security 
> manager
> 
>
> Key: HADOOP-12888
> URL: https://issues.apache.org/jira/browse/HADOOP-12888
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: Linux
>Reporter: Costin Leau
>Assignee: Costin Leau
> Attachments: HADOOP-12888-001.patch, HADOOP-12888-002.patch
>
>
> The HDFS _client_ requires dangerous permissions, in particular _execute_ on _all 
> files_, despite only trying to connect to an HDFS cluster.
> A full list (for both Hadoop 1 and 2) is available here along with the place 
> in code where they occur.
> While it is understandable for some permissions to be used, requiring 
> {{FilePermission <> execute}} to simply initialize a class field 
> [Shell|https://github.com/apache/hadoop/blob/0fa54d45b1cf8a29f089f64d24f35bd221b4803f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java#L728]
>  which in the end is not used (since it's just a client) simply *compromises* 
> the entire security system.
> To make matters worse, the code is executed to initialize a field, so if 
> the permission is not granted, the VM fails with {{InitializationError}}, 
> which is unrecoverable.
> Ironically enough, on Windows this problem does not appear since the code 
> simply bypasses it and initializes the field with a fall back value 
> ({{false}}).
> A quick fix would be to simply take into account that the JVM 
> {{SecurityManager}} might be active and the permission not granted or that 
> the external process fails and use a fall back value.
> A proper, long-term fix would be to minimize the use of permissions for the 
> hdfs client, since they are simply not required. A client should be as light as 
> possible and not have the server's requirements leaked onto it.
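The quick fix described above can be sketched as initializing the static field through a helper that catches SecurityException instead of letting class initialization fail. The class, field, and probe here are illustrative stand-ins, not Hadoop's actual Shell code:

```java
public class GuardedStaticInit {
    // Initializing via a helper that catches SecurityException avoids an
    // unrecoverable class-initialization failure when a JVM SecurityManager
    // denies the probe; the field falls back to false, mirroring what the
    // Windows code path already does.
    public static final boolean NATIVE_PROBE_SUPPORTED = probeQuietly();

    static boolean probeQuietly() {
        try {
            SecurityManager sm = System.getSecurityManager();
            if (sm != null) {
                // Stand-in for the exec-based capability probe that needs
                // FilePermission "execute"; throws SecurityException when
                // the permission is not granted.
                sm.checkExec("setsid");
            }
            return true;
        } catch (SecurityException e) {
            return false; // fall back instead of throwing from <clinit>
        }
    }

    public static void main(String[] args) {
        // With no SecurityManager installed the probe succeeds; under a
        // restrictive manager the field would simply be false.
        System.out.println("supported=" + NATIVE_PROBE_SUPPORTED);
    }
}
```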





[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-03-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183433#comment-15183433
 ] 

Chris Nauroth commented on HADOOP-12892:


[~aw], yes, this is Windows-specific logic.  Windows wants DLLs to be in bin 
instead of the standard lib directory structure.  On Windows, the native build 
artifacts get placed into a bin sub-directory as part of the hadoop-common 
build.  The hadoop-common pom.xml also sets bundle.snappy.in.bin to true, so 
that later the distro knows snappy.dll needs to go there.

bq. This seems to imply that on Windows (which appears to be the only place 
this is used), the libraries have been copied to two different directories...

I don't think so, because there are {{-d}} checks to trigger the copies only if 
the directories exist.  On Windows, {{LIB_DIR}} will not exist and {{BIN_DIR}} 
will exist, so it will only do the copy for {{BIN_DIR}}.  On non-Windows, 
{{LIB_DIR}} will exist and {{BIN_DIR}} will not exist, so it will only do the 
copy for {{LIB_DIR}}.

BTW, it looks like lifting out the dist-layout-stitching script to an external 
file on trunk isn't working for Windows.  I filed HADOOP-12899, and I'll take a 
closer look.


> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> create-release needs some major surgery.





[jira] [Commented] (HADOOP-12899) External dist-layout-stitching script does not work correctly on Windows.

2016-03-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183431#comment-15183431
 ] 

Chris Nauroth commented on HADOOP-12899:


{code}
mvn clean install -Pdist -Dtar -Dbundle.snappy -Dsnappy.lib=C:\snappy\lib 
-DskipTests -Dmaven.javadoc.skip=true
{code}

{code}
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.3.1:exec 
(dist) on project hadoop-dist: Command execution failed. Cannot run program 
"C:\hdc\hadoop-dist\..\dev-support\bin\dist-layout-stitching" (in directory 
"C:\hdc\hadoop-dist\target"): CreateProcess error=193, %1 is not a valid Win32 
application -> [Help 1]
{code}


> External dist-layout-stitching script does not work correctly on Windows.
> -
>
> Key: HADOOP-12899
> URL: https://issues.apache.org/jira/browse/HADOOP-12899
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
>
> In HADOOP-12850, we pulled the dist-layout-stitching script out of 
> hadoop-dist/pom.xml and into an external script.  It appears this change is 
> not working correctly on Windows.





[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183435#comment-15183435
 ] 

Hadoop QA commented on HADOOP-12862:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 7s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 50s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 4 
new + 34 unchanged - 0 fixed = 38 total (was 34) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 54s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 14s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 42s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791781/HADOOP-12862.003.patch
 |
| JIRA Issue | HADOOP-12862 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  

[jira] [Commented] (HADOOP-12850) pull shell code out of hadoop-dist

2016-03-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183430#comment-15183430
 ] 

Chris Nauroth commented on HADOOP-12850:


I filed HADOOP-12899 for a follow-up required for Windows.

> pull shell code out of hadoop-dist
> --
>
> Key: HADOOP-12850
> URL: https://issues.apache.org/jira/browse/HADOOP-12850
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0
>
> Attachments: HADOOP-12850.00.patch
>
>
> Let's pull the shell code out of the hadoop-dist pom.xml





[jira] [Commented] (HADOOP-12898) Fix misc doc issues in DistCp

2016-03-07 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183426#comment-15183426
 ] 

John Zhuge commented on HADOOP-12898:
-

Will check for other issues in DistCp doc.

> Fix misc doc issues in DistCp
> -
>
> Key: HADOOP-12898
> URL: https://issues.apache.org/jira/browse/HADOOP-12898
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, tools/distcp
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>
> Typo in http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html
> | The deletion is done by FS Shell. So the trash will be used, if it is 
> enable.
> Should be "enabled".
> Maybe the whole sentence can be rewritten as:
> | If HDFS Trash is enabled, the files will be moved to the trash folder.





[jira] [Created] (HADOOP-12899) External dist-layout-stitching script does not work correctly on Windows.

2016-03-07 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-12899:
--

 Summary: External dist-layout-stitching script does not work 
correctly on Windows.
 Key: HADOOP-12899
 URL: https://issues.apache.org/jira/browse/HADOOP-12899
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
 Environment: Windows
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Blocker


In HADOOP-12850, we pulled the dist-layout-stitching script out of 
hadoop-dist/pom.xml and into an external script.  It appears this change is not 
working correctly on Windows.





[jira] [Updated] (HADOOP-12898) Fix misc doc issues in DistCp

2016-03-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-12898:

Summary: Fix misc doc issues in DistCp  (was: Typo in DistCp.html)

> Fix misc doc issues in DistCp
> -
>
> Key: HADOOP-12898
> URL: https://issues.apache.org/jira/browse/HADOOP-12898
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, tools/distcp
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>
> Typo in http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html
> | The deletion is done by FS Shell. So the trash will be used, if it is 
> enable.
> Should be "enabled".
> Maybe the whole sentence can be rewritten as:
> | If HDFS Trash is enabled, the files will be moved to the trash folder.





[jira] [Moved] (HADOOP-12898) Typo in DistCp.html

2016-03-07 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge moved HDFS-9915 to HADOOP-12898:
---

Affects Version/s: (was: 2.7.2)
   2.7.2
  Component/s: (was: distcp)
   (was: documentation)
   tools/distcp
   documentation
   Issue Type: Bug  (was: Improvement)
  Key: HADOOP-12898  (was: HDFS-9915)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Typo in DistCp.html
> ---
>
> Key: HADOOP-12898
> URL: https://issues.apache.org/jira/browse/HADOOP-12898
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, tools/distcp
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>
> Typo in http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html
> | The deletion is done by FS Shell. So the trash will be used, if it is 
> enable.
> Should be "enabled".
> Maybe the whole sentence can be rewritten as:
> | If HDFS Trash is enabled, the files will be moved to the trash folder.





[jira] [Commented] (HADOOP-12885) An operation of atomic folder rename crashed in Wasb FileSystem

2016-03-07 Thread Gaurav Kanade (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183368#comment-15183368
 ] 

Gaurav Kanade commented on HADOOP-12885:


[~liushaohui] I remember you reported a similar JIRA, 
https://issues.apache.org/jira/browse/HADOOP-12884, which was closed as a 
duplicate and resolved. I understand the symptom here is slightly different, 
but did you retry with the versions that contain the fix for that issue, and 
were you able to reproduce the problem there? If not, there is a possibility 
that this issue is already resolved in the latest versions. Just something to 
verify.

[~onpduo], [~pravinmittal] for context

> An operation of atomic folder rename crashed in Wasb FileSystem
> -
>
> Key: HADOOP-12885
> URL: https://issues.apache.org/jira/browse/HADOOP-12885
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Liu Shaohui
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: hbase.16773.log
>
>
> An operation of atomic folder rename crashed in Wasb FileSystem
> {code}
> org.apache.hadoop.fs.azure.AzureException: Source blob 
> hbase/azurtst-xiaomi/data/default/YCSBTest/5f882f5492c90b4c03a26561a2ee0a96/.regioninfo
>  does not exist.
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.rename(AzureNativeFileSystemStore.java:2405)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.execute(NativeAzureFileSystem.java:413)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1997)
> {code}
> The problem is that there are duplicated files in the RenamePending.json. 
> {code}
> "5f882f5492c90b4c03a26561a2ee0a96",   
>   
> "5f882f5492c90b4c03a26561a2ee0a96\/.regioninfo",  
>   
>   
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/8a2c08db432d447d9e0ed5266940b25e",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/9425c621073e41df9430e88f0ef61c01",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/f9fc55a94fa34efbb2d26be77c76187c",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/.regioninfo",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/.tmp", 
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C",
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/8a2c08db432d447d9e0ed5266940b25e",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/9425c621073e41df9430e88f0ef61c01",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/f9fc55a94fa34efbb2d26be77c76187c",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/recovered.edits", 
> {code}
> Maybe there is a bug in the listing of all the files in the folder in Wasb. 
> Any suggestions?
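Until the root cause in the folder listing is found, the duplicated entries are mechanically easy to guard against. A minimal sketch of de-duplicating the parsed file list while preserving order; the class and method names are illustrative, not taken from the Azure module:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class DedupRenamePending {
    // Deduplicate the file list parsed from a -RenamePending.json,
    // keeping the first occurrence of each entry in its original order.
    static List<String> dedup(List<String> fileStrings) {
        return new ArrayList<>(new LinkedHashSet<>(fileStrings));
    }

    public static void main(String[] args) {
        List<String> files = List.of(
            "region/.regioninfo",
            "region/C/8a2c08db",
            "region/.regioninfo",   // duplicated entry, as seen in the report
            "region/C");
        System.out.println(dedup(files));
    }
}
```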





[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-03-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183354#comment-15183354
 ] 

Allen Wittenauer commented on HADOOP-12892:
---

{code}
if [[ "${bundle.snappy.in.bin}" == "true" ]]; then
  if [[ "${bundle.snappy}" == "true" ]]; then
cd "${snappy.lib}"
$$TAR *snappy* | (cd $${TARGET_BIN_DIR}/; $$UNTAR)
if [[ $? -ne 0 ]]; then
  echo "Bundling snappy bin files failed"
  exit 1
fi
  fi
fi
{code}

Ping [~cnauroth].  I have a question about the logic here...

Why does this cd to snappy.lib and copy the contents into bin, when all of 
those bits should already have been copied into the lib directory as part of 
bundle.snappy? This seems to imply that on Windows (which appears to be the 
only place this is used), the libraries end up copied to two different 
directories...

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> create-release needs some major surgery.





[jira] [Updated] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12862:
-
Attachment: HADOOP-12862.003.patch

Rev03: docs update + refactoring.

* Updated core-default.xml to add passwords for keystore and
truststore.
* Updated GroupsMapping.md to add truststore specific content.
* Refactored the LdapGroupsMapping code a bit, so that keystore/truststore
info is only read if SSL is enabled.

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch, 
> HADOOP-12862.003.patch
>
>
> In a secure environment, SSL is used to encrypt LDAP requests for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For background: the Hadoop name node, as an LDAP client, talks to an LDAP 
> server to resolve the group mapping of a user. In the case of LDAP over SSL, 
> the typical scenario is one-way authentication (the client verifies that the 
> server's certificate is real) by storing the server's certificate in the 
> client's truststore.
> A rarer scenario is two-way authentication: in addition to the truststore 
> the client uses to verify the server, the server also verifies that the 
> client's certificate is real, so the client stores its own certificate in 
> its keystore.
> However, the current implementation of LDAP over SSL does not seem correct, 
> in that it only configures a keystore and no truststore (so the LDAP server 
> can verify Hadoop's certificate, but Hadoop may not be able to verify the 
> LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore and password for the LDAP server, and those should be used to set 
> the system properties 
> {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}.
> I am a security layman, so my wording may be imprecise, but I hope this 
> makes sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html
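For reference, the JSSE route suggested in the description boils down to setting two system properties before the LDAP context is created. A minimal sketch; the helper name and example path are illustrative, and a real patch would read these values from the new configuration keys rather than hard-coding them:

```java
public class LdapSslTruststoreSketch {
    // JSSE reads these system properties when it builds the default
    // SSLContext, so setting them is enough for one-way authentication.
    static void configureTruststore(String truststorePath, String password) {
        System.setProperty("javax.net.ssl.trustStore", truststorePath);
        System.setProperty("javax.net.ssl.trustStorePassword", password);
    }

    public static void main(String[] args) {
        // Hypothetical path; in practice this would come from configuration.
        configureTruststore("/etc/hadoop/ldap-truststore.jks", "changeit");
        System.out.println(System.getProperty("javax.net.ssl.trustStore"));
    }
}
```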





[jira] [Commented] (HADOOP-12803) RunJar should allow overriding the manifest Main-Class via a cli parameter.

2016-03-07 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183215#comment-15183215
 ] 

Jason Lowe commented on HADOOP-12803:
-

Approach looks OK to me.  Comments on the patch:

- If we're adding logging to a class, then we should use SLF4J instead of 
commons logging, per the discussion on the dev mailing lists. It allows us to 
avoid the explicit debug-enabled check without a significant performance 
impact.
- Treating system properties the same way we treat the environment variable 
seems somewhat unrelated to this change, and it adds new challenges, such as 
what to do when both are specified but conflict with each other.


> RunJar should allow overriding the manifest Main-Class via a cli parameter.
> ---
>
> Key: HADOOP-12803
> URL: https://issues.apache.org/jira/browse/HADOOP-12803
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-12803.001.patch, HADOOP-12803.002.patch
>
>
> Currently there is no way to override the main class in the manifest even 
> though main class can be passed as a parameter.
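The requested precedence (a CLI-supplied class name overriding the jar manifest's Main-Class) can be sketched as follows; the resolver name and the idea of a dedicated flag are illustrative, not taken from the attached patches:

```java
import java.util.jar.Attributes;
import java.util.jar.Manifest;

public class MainClassResolver {
    // A CLI-supplied main class (if any) wins; otherwise fall back to the
    // jar manifest's Main-Class attribute.
    static String resolve(String cliMainClass, Manifest manifest) {
        if (cliMainClass != null) {
            return cliMainClass;  // explicit override from the command line
        }
        Attributes attrs = manifest.getMainAttributes();
        return attrs.getValue(Attributes.Name.MAIN_CLASS);
    }

    public static void main(String[] args) {
        Manifest m = new Manifest();
        m.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");
        m.getMainAttributes().put(Attributes.Name.MAIN_CLASS,
                                  "com.example.FromManifest");
        System.out.println(resolve(null, m));
        System.out.println(resolve("com.example.Override", m));
    }
}
```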





[jira] [Commented] (HADOOP-12692) Maven's DependencyConvergence rule failed

2016-03-07 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183143#comment-15183143
 ] 

Wei-Chiu Chuang commented on HADOOP-12692:
--

Unassigning myself, as I've not been able to find a solution and have been 
occupied with other higher-priority issues.

> Maven's DependencyConvergence rule failed
> -
>
> Key: HADOOP-12692
> URL: https://issues.apache.org/jira/browse/HADOOP-12692
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
> Attachments: HADOOP-12692.001.patch
>
>
> I am seeing a Maven warning in Jenkins:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/761/console
> This nightly job failed because a Maven rule failed:
> {noformat}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability the error(s) are [
> Dependency convergence error for org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT 
> paths to dependency are:
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
> +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
> +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-auth:3.0.0-20160107.005725-7960
> ]
> {noformat}
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce (depcheck) on 
> project hadoop-hdfs-httpfs: Some Enforcer rules have failed. Look above for 
> specific messages explaining why the rule failed. -> [Help 1]
> {noformat}
> Looks like httpfs depends on two versions of hadoop-auth: 3.0.0-SNAPSHOT and 
> a timestamp-based one.
> I think this can be fixed by updating one of the pom.xml files, but I am not 
> exactly sure how to do it. We need a Maven expert here.
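One common way to make the DependencyConvergence rule pass is to pin the artifact's version in the parent's dependencyManagement, so every module resolves the same hadoop-auth. This is a sketch only; as noted above, which pom.xml to touch (and whether pinning is the right fix here) still needs a Maven expert:

```xml
<!-- Sketch: force a single hadoop-auth version for all modules via the
     parent pom's dependencyManagement. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-auth</artifactId>
      <version>${project.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```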





[jira] [Updated] (HADOOP-12692) Maven's DependencyConvergence rule failed

2016-03-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12692:
-
Assignee: (was: Wei-Chiu Chuang)

> Maven's DependencyConvergence rule failed
> -
>
> Key: HADOOP-12692
> URL: https://issues.apache.org/jira/browse/HADOOP-12692
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
> Attachments: HADOOP-12692.001.patch
>
>
> I am seeing a Maven warning in Jenkins:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/761/console
> This nightly job failed because a Maven rule failed:
> {noformat}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability the error(s) are [
> Dependency convergence error for org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT 
> paths to dependency are:
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
> +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
> +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-auth:3.0.0-20160107.005725-7960
> ]
> {noformat}
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce (depcheck) on 
> project hadoop-hdfs-httpfs: Some Enforcer rules have failed. Look above for 
> specific messages explaining why the rule failed. -> [Help 1]
> {noformat}
> Looks like httpfs depends on two versions of hadoop-auth: 3.0.0-SNAPSHOT and 
> a timestamp-based one.
> I think this can be fixed by updating one of the pom.xml files, but I am not 
> exactly sure how to do it. We need a Maven expert here.





[jira] [Created] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2016-03-07 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12897:
---

 Summary: KerberosAuthenticator.authenticate to include URL on IO 
failures
 Key: HADOOP-12897
 URL: https://issues.apache.org/jira/browse/HADOOP-12897
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.8.0
Reporter: Steve Loughran


If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
get a stack trace, but without the URL it is trying to talk to.

That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
handler, which can't be called here as it's not in the {{hadoop-auth}} module.
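A sketch of the kind of wrapping being asked for, without depending on hadoop-common's {{NetUtils}}; the method name and message format here are illustrative, not the actual fix:

```java
import java.io.IOException;
import java.net.SocketTimeoutException;
import java.net.URL;

public class WrapWithUrl {
    // Re-throw a connection failure with the target URL in the message,
    // preserving the original exception as the cause for the stack trace.
    static IOException wrap(URL url, IOException cause) {
        return new IOException(
            "Error authenticating against " + url + ": " + cause, cause);
    }

    public static void main(String[] args) throws Exception {
        URL url = new URL("http://timelineserver:8188/ws/v1/timeline/");
        IOException wrapped =
            wrap(url, new SocketTimeoutException("Read timed out"));
        System.out.println(wrapped.getMessage());
    }
}
```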





[jira] [Commented] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2016-03-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183099#comment-15183099
 ] 

Steve Loughran commented on HADOOP-12897:
-

Stack
{code}
   java.net.SocketTimeoutException: Read timed out
  at java.net.SocketInputStream.socketRead0(Native Method)
  at java.net.SocketInputStream.read(SocketInputStream.java:152)
  at java.net.SocketInputStream.read(SocketInputStream.java:122)
  at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
  at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
  at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
  at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
  at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
  at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1324)
  at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
  at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:191)
  at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:215)
  at 
org.apache.spark.deploy.history.yarn.rest.SpnegoUrlConnector$$anonfun$1.apply(SpnegoUrlConnector.scala:127)
  at 
org.apache.spark.deploy.history.yarn.rest.SpnegoUrlConnector$$anonfun$1.apply(SpnegoUrlConnector.scala:124)
  at 
org.apache.spark.deploy.history.yarn.rest.PrivilegedFunction.run(PrivilegedFunction.scala:31)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:415)
{code}

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.





[jira] [Updated] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2016-03-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12897:

Priority: Minor  (was: Major)

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.





[jira] [Updated] (HADOOP-12891) S3AFileSystem should configure Multipart Copy threshold and chunk size

2016-03-07 Thread Andrew Olson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Olson updated HADOOP-12891:
--
Description: 
In the AWS S3 Java SDK the defaults for Multipart Copy threshold and chunk size 
are very high [1],

{noformat}
/** Default size threshold for Amazon S3 object after which multi-part copy 
is initiated. */
private static final long DEFAULT_MULTIPART_COPY_THRESHOLD = 5 * GB;

/** Default minimum size of each part for multi-part copy. */
private static final long DEFAULT_MINIMUM_COPY_PART_SIZE = 100 * MB;
{noformat}

In internal testing we have found that a lower but still reasonable threshold 
and chunk size can be extremely beneficial. In our case we set both the 
threshold and size to 25 MB with good results.

Amazon enforces a minimum of 5 MB [2].

For the S3A filesystem, file renames are actually implemented via a remote copy 
request, which is already quite slow compared to a rename on HDFS. This very 
high threshold for utilizing the multipart functionality can make the 
performance considerably worse, particularly for files in the 100MB to 5GB 
range which is fairly typical for mapreduce job outputs.

Two apparent options are:

1) Use the same configuration ({{fs.s3a.multipart.threshold}}, 
{{fs.s3a.multipart.size}}) for both. This seems preferable as the accompanying 
documentation [3] for these configuration properties actually already says that 
they are applicable for either "uploads or copies". We just need to add in the 
missing {{TransferManagerConfiguration#setMultipartCopyThreshold}} [4] and 
{{TransferManagerConfiguration#setMultipartCopyPartSize}} [5] calls at [6] like:

{noformat}
/* Handle copies in the same way as uploads. */
transferConfiguration.setMultipartCopyPartSize(partSize);
transferConfiguration.setMultipartCopyThreshold(multiPartThreshold);
{noformat}

2) Add two new configuration properties so that the copy threshold and part 
size can be independently configured, maybe change the defaults to be lower 
than Amazon's, set into {{TransferManagerConfiguration}} in the same way.

In any case, at a minimum, if neither of the above options is an acceptable 
change, the config documentation should be adjusted to match the code, noting 
that {{fs.s3a.multipart.threshold}} and {{fs.s3a.multipart.size}} apply only 
to uploads of new objects and not to copies (i.e. renaming objects).

[1] 
https://github.com/aws/aws-sdk-java/blob/1.10.58/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.java#L36-L40
[2] http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
[3] 
https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A
[4] 
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyThreshold(long)
[5] 
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyPartSize(long)
[6] 
https://github.com/apache/hadoop/blob/release-2.7.2-RC2/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L286
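To see why the default threshold matters for rename performance, a small back-of-the-envelope helper can be used; this is pure arithmetic with no AWS SDK involved, and the names are illustrative:

```java
public class CopyPartEstimator {
    // How many copy requests an object of the given size produces:
    // below the threshold it is a single-request copy, otherwise it is
    // split into ceil(objectSize / partSize) multipart chunks.
    static long partsFor(long objectSize, long threshold, long partSize) {
        if (objectSize < threshold) {
            return 1;                                   // single-request copy
        }
        return (objectSize + partSize - 1) / partSize;  // ceiling division
    }

    public static void main(String[] args) {
        long MB = 1024L * 1024, GB = 1024L * MB;
        // SDK defaults: a 1 GB rename is one large sequential copy.
        System.out.println(partsFor(1 * GB, 5 * GB, 100 * MB)); // 1
        // The 25 MB threshold/part size from the description: 41 parallel parts.
        System.out.println(partsFor(1 * GB, 25 * MB, 25 * MB)); // 41
    }
}
```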

  was:
In the AWS S3 Java SDK the defaults for Multipart Copy threshold and chunk size 
are very high [1],

{noformat}
/** Default size threshold for Amazon S3 object after which multi-part copy 
is initiated. */
private static final long DEFAULT_MULTIPART_COPY_THRESHOLD = 5 * GB;

/** Default minimum size of each part for multi-part copy. */
private static final long DEFAULT_MINIMUM_COPY_PART_SIZE = 100 * MB;
{noformat}

In internal testing we have found that a lower but still reasonable threshold 
and chunk size can be extremely beneficial. In our case we set both the 
threshold and size to 25 MB with good results.

Amazon enforces a minimum of 5 MB [2].

For the S3A filesystem, file renames are actually implemented via a remote copy 
request, which is already quite slow compared to a rename on HDFS. This very 
high threshold for utilizing the multipart functionality can make the 
performance considerably worse, particularly for files in the 100MB to 5GB 
range which is fairly typical for mapreduce job outputs.

Two apparent options are:

1) Use the same configuration ({{fs.s3a.multipart.threshold}}, 
{{fs.s3a.multipart.size}}) for both. This seems preferable as the accompanying 
documentation [3] for these configuration properties actually already says that 
they are applicable for either "uploads or copies". We just need to add in the 
missing {{TransferManagerConfiguration#setMultipartCopyThreshold}} [4] and 
{{TransferManagerConfiguration#setMultipartCopyPartSize}} [5] calls at [6] like:

{noformat}
/* Handle copies in the same way as uploads. */
transferConfiguration.setMultipartCopyPartSize(partSize);

[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183016#comment-15183016
 ] 

Hadoop QA commented on HADOOP-12666:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 0s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 17s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
1s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: 
hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 50s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 4s 
{color} | {color:red} root: patch generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} xml {color} | {color:red} 0m 2s {color} | 
{color:red} The patch has 1 ill-formed XML file(s). {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: 
hadoop-project hadoop-tools hadoop-tools/hadoop-tools-dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 3s {color} 
| 

[jira] [Commented] (HADOOP-12875) [Azure Data Lake] Support for contract test and unit test cases

2016-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183000#comment-15183000
 ] 

Hadoop QA commented on HADOOP-12875:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 22 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 48s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: hadoop-tools 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: hadoop-tools 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 31m 38s 
{color} | {color:green} hadoop-tools in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 32m 31s 
{color} | {color:green} hadoop-tools in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 29s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791157/Hadoop-12875-001.patch
 |
| JIRA Issue | 

[jira] [Updated] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-03-07 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-12666:
---
Status: Patch Available  (was: In Progress)

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-006.patch, 
> HADOOP-12666-007.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc. to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12875) [Azure Data Lake] Support for contract test and unit test cases

2016-03-07 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-12875:
---
Status: Patch Available  (was: Open)

> [Azure Data Lake] Support for contract test and unit test cases
> ---
>
> Key: HADOOP-12875
> URL: https://issues.apache.org/jira/browse/HADOOP-12875
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: Hadoop-12875-001.patch
>
>
> This JIRA describes contract test and unit test case support for the Azure 
> Data Lake file system.





[jira] [Updated] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-03-07 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-12666:
---
Attachment: HADOOP-12666-007.patch

From the code review comments, there were five major issues to call out; the 
actions we have taken to address them are listed below.

 * *Synchronization during read stream* – Though allowing concurrent access to 
a stream seems an unusual use case, we accepted the comment and added 
synchronized blocks throughout. The synchronized blocks add an average of 7 ms 
of latency per read operation.
 * *Patch size too large & inclusion of live test cases* – We have split the 
patch into multiple JIRAs:
 ** HADOOP-12666 - core updates
 ** HADOOP-12875 - test updates, including a mechanism for live tests
 ** HADOOP-12876 - file metadata cache management
 ** Separate JIRAs for telemetry- and instrumentation-related updates will be 
raised once we agree on and commit HADOOP-12666.
 * *FileStatus cache management* – Raised separate JIRA HADOOP-12876 to cover 
the discussion specific to the cache.
 * *Package namespace (remove dependency on org.apache.hadoop.hdfs.web)* – One 
proposal (https://reviews.apache.org/r/44169/) for removing the dependency is 
to change the access level of the dependent parts of the 
org.apache.hadoop.hdfs.web package to public.
 * *Allow WebHDFS and ADL file systems to coexist (common configuration 
parameters like dfs.webhdfs.oauth2.refresh.token)* – As of today, only ADL is 
compliant with the OAuth2 protocol. Supporting ADL-specific configuration 
requires changes in ASF code, so we would like to handle it as a separate 
change set rather than covering it here.
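
The synchronization trade-off in the first point can be sketched roughly as 
below. This is a hypothetical illustration, not the actual ADL stream code; 
the class and method names are invented for the example.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch (not the real ADL input stream): every read goes
// through a single monitor, so concurrent callers are safe at the cost of
// a small per-call latency (measured above at ~7 ms on average per read).
class SynchronizedReadStream {
    private final InputStream in;

    SynchronizedReadStream(InputStream in) {
        this.in = in;
    }

    // Serialize all readers through the instance monitor, as accepted in review.
    synchronized int read(byte[] buf, int off, int len) throws IOException {
        return in.read(buf, off, len);
    }
}
```

The lock makes interleaved position/read pairs atomic for concurrent callers, 
which is the correctness property the review asked for.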


> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-006.patch, 
> HADOOP-12666-007.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc. to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/





[jira] [Updated] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-03-07 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-12666:
---
Status: In Progress  (was: Patch Available)

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-006.patch, 
> HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc. to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/





[jira] [Commented] (HADOOP-12891) S3AFileSystem should configure Multipart Copy threshold and chunk size

2016-03-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15182799#comment-15182799
 ] 

Steve Loughran commented on HADOOP-12891:
-

One thing to consider is block size for bulk operations.

If, at some point in the future, AWS were to provide a way to determine the 
block sizes, then to make the best use of it you'd want "reasonably" sized 
partitions, where "reasonable" includes the setup cost of the work. Of course, 
since there's no locality cost, small partitions could perhaps be merged to 
create the illusion of bigger blocks; it'd only be a hint to the amount of 
parallelism that can be applied to S3 reads.
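
The merging idea can be sketched as below, under the stated assumption that 
per-object block sizes were ever exposed; the class and method names here are 
invented for illustration only.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: coalesce small "block" sizes into larger logical
// splits. Because S3 has no locality cost, any consecutive grouping is as
// good as any other; the result is only a parallelism hint for readers.
class SplitMerger {
    // Merge consecutive block sizes until each logical split reaches targetSize.
    static List<Long> mergeBlocks(List<Long> blockSizes, long targetSize) {
        List<Long> splits = new ArrayList<>();
        long current = 0;
        for (long b : blockSizes) {
            current += b;
            if (current >= targetSize) {
                splits.add(current);   // split is big enough, start a new one
                current = 0;
            }
        }
        if (current > 0) {
            splits.add(current);       // leftover tail becomes the last split
        }
        return splits;
    }
}
```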

> S3AFileSystem should configure Multipart Copy threshold and chunk size
> --
>
> Key: HADOOP-12891
> URL: https://issues.apache.org/jira/browse/HADOOP-12891
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Andrew Olson
>
> In the AWS S3 Java SDK the defaults for Multipart Copy threshold and chunk 
> size are very high [1],
> {noformat}
> /** Default size threshold for Amazon S3 object after which multi-part 
> copy is initiated. */
> private static final long DEFAULT_MULTIPART_COPY_THRESHOLD = 5 * GB;
> /** Default minimum size of each part for multi-part copy. */
> private static final long DEFAULT_MINIMUM_COPY_PART_SIZE = 100 * MB;
> {noformat}
> In internal testing we have found that a lower but still reasonable threshold 
> and chunk size can be extremely beneficial. In our case we set both the 
> threshold and size to 25 MB with good results.
> Amazon enforces a minimum of 5 MB [2].
> For the S3A filesystem, file renames are actually implemented via a remote 
> copy request, which is already quite slow compared to a rename on HDFS. This 
> very high threshold for utilizing the multipart functionality can make the 
> performance considerably worse, particularly for files in the 100MB to 5GB 
> range which is fairly typical for mapreduce job outputs.
> Two apparent options are:
> 1) Use the same configuration ({{fs.s3a.multipart.threshold}}, 
> {{fs.s3a.multipart.size}}) for both. This seems preferable as the 
> accompanying documentation [3] for these configuration properties actually 
> already says that they are applicable for either "uploads or copies". We just 
> need to add in the missing 
> {{TransferManagerConfiguration#setMultipartCopyThreshold}} [4] and 
> {{TransferManagerConfiguration#setMultipartCopyPartSize}} [5] calls at [6] 
> like:
> {noformat}
> /* Handle copies in the same way as uploads. */
> transferConfiguration.setMultipartCopyPartSize(partSize);
> transferConfiguration.setMultipartCopyThreshold(multiPartThreshold);
> {noformat}
> 2) Add two new configuration properties so that the copy threshold and part 
> size can be independently configured, maybe change the defaults to be lower 
> than Amazon's, set into {{TransferManagerConfiguration}} in the same way.
> [1] 
> https://github.com/aws/aws-sdk-java/blob/1.10.58/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.java#L36-L40
> [2] http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
> [3] 
> https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A
> [4] 
> http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyThreshold(long)
> [5] 
> http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyPartSize(long)
> [6] 
> https://github.com/apache/hadoop/blob/release-2.7.2-RC2/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L286
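
To make the size arithmetic above concrete, here is a small self-contained 
sketch (a hypothetical helper, not S3A or SDK code) of when the multipart copy 
path would trigger and how many parts a copy would split into, comparing the 
SDK's 5 GB default threshold with the 25 MB values from the internal testing 
described in the issue.

```java
// Hypothetical helper illustrating the threshold/part-size arithmetic; the
// constants used below mirror the SDK defaults and the proposed 25 MB values.
class MultipartCopyMath {
    static final long MB = 1024L * 1024L;
    static final long GB = 1024L * MB;

    // A copy uses the multipart path once the object exceeds the threshold.
    static boolean usesMultipart(long objectSize, long threshold) {
        return objectSize > threshold;
    }

    // Number of parts for a multipart copy; the final part may be shorter.
    static long partCount(long objectSize, long partSize) {
        return (objectSize + partSize - 1) / partSize;
    }
}
```

With the SDK default, a 500 MB rename (implemented as a copy) stays 
single-part; with a 25 MB threshold and part size it becomes a 20-part copy 
that can run in parallel, which is where the observed improvement comes from.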





[jira] [Created] (HADOOP-12896) kdiag to add a --DEFAULTREALM option

2016-03-07 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12896:
---

 Summary: kdiag to add a --DEFAULTREALM option 
 Key: HADOOP-12896
 URL: https://issues.apache.org/jira/browse/HADOOP-12896
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Minor
 Fix For: 2.9.0


* kdiag to add a --DEFAULTREALM option to say that not having a default realm 
is an error.
* If this flag is unset, then when dumping the credential cache, if there is 
any entry without a realm *and there is no default realm*, diagnostics should 
fail with an error. Hadoop will fail in this situation; kdiag should detect 
and report it.
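
The proposed rule can be sketched as below; this is an invented illustration 
of the check, not KDiag's actual code, and the class and method names are 
hypothetical.

```java
import java.util.List;

// Hypothetical sketch of the proposed rule: when no default realm is
// configured, any credential-cache principal lacking an explicit "@REALM"
// cannot be resolved, so diagnostics should fail.
class RealmCheck {
    static boolean shouldFail(String defaultRealm, List<String> principals) {
        if (defaultRealm != null && !defaultRealm.isEmpty()) {
            return false; // a default realm resolves realm-less principals
        }
        for (String p : principals) {
            if (!p.contains("@")) {
                return true; // Hadoop would fail on this principal
            }
        }
        return false;
    }
}
```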



