[jira] [Updated] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-04 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12862:
-
Status: Patch Available  (was: Open)

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch
>
>
> In a secure environment, SSL is used to encrypt LDAP requests for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For background: the Hadoop NameNode, as an LDAP client, talks to an LDAP 
> server to resolve the group mapping of a user. In the case of LDAP over SSL, 
> a typical scenario is one-way authentication (the client verifies that the 
> server's certificate is genuine) by storing the server's certificate in the 
> client's truststore.
> A rarer scenario is two-way authentication: in addition to the truststore the 
> client uses to verify the server, the server also verifies that the client's 
> certificate is genuine, and the client stores its own certificate in its 
> keystore.
> However, the current implementation of LDAP over SSL does not seem to be 
> correct, in that it only configures a keystore but no truststore (so the LDAP 
> server can verify Hadoop's certificate, but Hadoop may not be able to verify 
> the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and use those to configure the system 
> properties {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}.
> I am a security layman, so my wording may be imprecise, but I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html
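The proposal above can be sketched roughly as follows. This is only an illustration, not the attached patch: the helper names ({{configureTruststore}}, {{ldapsEnv}}) are made up, while the {{javax.net.ssl.*}} system properties and the JNDI environment keys are the standard JSSE/JNDI names from the Oracle documents linked above.

```java
import java.util.Hashtable;
import javax.naming.Context;

public class LdapsTruststoreSketch {

    /**
     * Point JSSE at the truststore used to verify the LDAP server's
     * certificate (one-way authentication). These are standard JSSE
     * system properties, read when the SSL handshake occurs.
     */
    static void configureTruststore(String truststore, String password) {
        System.setProperty("javax.net.ssl.trustStore", truststore);
        System.setProperty("javax.net.ssl.trustStorePassword", password);
    }

    /** Build the JNDI environment for an LDAP-over-SSL connection. */
    static Hashtable<String, String> ldapsEnv(String url) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url); // e.g. "ldaps://ldap.example.com:636"
        env.put(Context.SECURITY_PROTOCOL, "ssl");
        return env;
    }
}
```

The environment would then be passed to {{new InitialDirContext(env)}}; the point of the issue is that the truststore half of this setup is currently missing, so only the keystore side (client certificate) is configurable.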



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181601#comment-15181601
 ] 

Hudson commented on HADOOP-12717:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9427 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9427/])
HADOOP-12717. NPE when trying to rename a directory in Windows Azure (cnauroth: 
rev c50aad0f854b74ede9668e35db314b0a93be81b2)
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemAtomicRenameDirList.java


> NPE when trying to rename a directory in Windows Azure Storage FileSystem
> -
>
> Key: HADOOP-12717
> URL: https://issues.apache.org/jira/browse/HADOOP-12717
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Robert Yokota
>Assignee: Robert Yokota
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HADOOP-12717.001.patch, HADOOP-12717.004.patch, 
> HADOOP-12717.02.patch, HADOOP-12717.03.patch, diff.txt
>
>
> Encountered an NPE when trying to use the HBase utility ExportSnapshot with 
> Azure as the target.  
> It turns out verifyAndConvertToStandardFormat is returning null when 
> determining the hbaseRoot, and this is being added to the atomicRenameDirs 
> set.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isKeyForDirectorySet(AzureNativeFileSystemStore.java:1059)
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isAtomicRenameKey(AzureNativeFileSystemStore.java:1053)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2098)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1996)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:944)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.exportSnapshot(AbstractSnapshotUtil.java:210)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.run(AbstractSnapshotUtil.java:79)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.SnapshotAzureBlobUtil.main(SnapshotAzureBlobUtil.java:85)
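The failure mode described above can be illustrated with a small self-contained sketch. This is not the committed patch; the class and the stand-in conversion logic are hypothetical, and only the method names {{verifyAndConvertToStandardFormat}}, {{isAtomicRenameKey}}, and the {{atomicRenameDirs}} set come from the report. The idea is simply that a null conversion result must not be admitted into the set, since a later {{startsWith}} against a null element throws the NPE seen in the stack trace.

```java
import java.util.HashSet;
import java.util.Set;

public class AtomicRenameKeySketch {

    private final Set<String> atomicRenameDirs = new HashSet<>();

    /**
     * Stand-in for verifyAndConvertToStandardFormat: returns null for a
     * directory that cannot be converted to a store key (the real method
     * does this for URIs outside the filesystem's own container).
     */
    static String verifyAndConvertToStandardFormat(String rawDir) {
        return rawDir.startsWith("wasb://")
                ? rawDir.substring("wasb://".length())
                : null; // unconvertible -> null, the source of the NPE
    }

    /** Guarded insert: never admit a null key into the set. */
    void addAtomicRenameDir(String rawDir) {
        String key = verifyAndConvertToStandardFormat(rawDir);
        if (key != null) {
            atomicRenameDirs.add(key);
        }
    }

    boolean isAtomicRenameKey(String key) {
        for (String dir : atomicRenameDirs) {
            // key.startsWith(null) would throw NullPointerException,
            // which is why a null must never reach the set.
            if (key.startsWith(dir)) {
                return true;
            }
        }
        return false;
    }
}
```

Without the null guard in {{addAtomicRenameDir}}, an hbase root that fails conversion puts null into {{atomicRenameDirs}}, and every subsequent rename that consults {{isAtomicRenameKey}} blows up as in the trace.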



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-04 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12717:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+1 for patch v004.  I have committed this to trunk, branch-2 and branch-2.8.  
[~rayokota] and [~gouravk], thank you for the patch.

> NPE when trying to rename a directory in Windows Azure Storage FileSystem
> -
>
> Key: HADOOP-12717
> URL: https://issues.apache.org/jira/browse/HADOOP-12717
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Robert Yokota
>Assignee: Robert Yokota
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HADOOP-12717.001.patch, HADOOP-12717.004.patch, 
> HADOOP-12717.02.patch, HADOOP-12717.03.patch, diff.txt
>
>
> Encountered an NPE when trying to use the HBase utility ExportSnapshot with 
> Azure as the target.  
> It turns out verifyAndConvertToStandardFormat is returning null when 
> determining the hbaseRoot, and this is being added to the atomicRenameDirs 
> set.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isKeyForDirectorySet(AzureNativeFileSystemStore.java:1059)
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isAtomicRenameKey(AzureNativeFileSystemStore.java:1053)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2098)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1996)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:944)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.exportSnapshot(AbstractSnapshotUtil.java:210)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.run(AbstractSnapshotUtil.java:79)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.SnapshotAzureBlobUtil.main(SnapshotAzureBlobUtil.java:85)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181584#comment-15181584
 ] 

Hadoop QA commented on HADOOP-12717:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 54s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 49s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 26s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791610/HADOOP-12717.004.patch
 |
| JIRA Issue | HADOOP-12717 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fd3b05b8b14c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2759689 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  

[jira] [Commented] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181583#comment-15181583
 ] 

Hadoop QA commented on HADOOP-12717:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
2s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 17s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 36s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 18s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791610/HADOOP-12717.004.patch
 |
| JIRA Issue | HADOOP-12717 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c5bf47318a2f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2759689 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  

[jira] [Updated] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-04 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12717:
---
Attachment: HADOOP-12717.004.patch

The patch looks good aside from the license violation that was flagged by the 
last pre-commit run.  I am attaching patch v004, which is the same code, with 
the addition of the Apache license to the new test class.

> NPE when trying to rename a directory in Windows Azure Storage FileSystem
> -
>
> Key: HADOOP-12717
> URL: https://issues.apache.org/jira/browse/HADOOP-12717
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Robert Yokota
>Assignee: Robert Yokota
>Priority: Blocker
> Attachments: HADOOP-12717.001.patch, HADOOP-12717.004.patch, 
> HADOOP-12717.02.patch, HADOOP-12717.03.patch, diff.txt
>
>
> Encountered an NPE when trying to use the HBase utility ExportSnapshot with 
> Azure as the target.  
> It turns out verifyAndConvertToStandardFormat is returning null when 
> determining the hbaseRoot, and this is being added to the atomicRenameDirs 
> set.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isKeyForDirectorySet(AzureNativeFileSystemStore.java:1059)
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isAtomicRenameKey(AzureNativeFileSystemStore.java:1053)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2098)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1996)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:944)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.exportSnapshot(AbstractSnapshotUtil.java:210)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.run(AbstractSnapshotUtil.java:79)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.SnapshotAzureBlobUtil.main(SnapshotAzureBlobUtil.java:85)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-03-04 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12893:
--
Description: We need to verify that we're abiding by the legal terms set 
forth by all of our dependencies. We need to  make sure that our LICENSE.txt 
and NOTICE.txt list all the licenses as appropriate for source and binary 
artifacts.  In particular, we statically link several libraries like OpenSSL 
into libhadoop, libhdfs, pipes, and probably others when -Pnative is used but 
have no reference at all to it's licensing terms.  (was: We need to verify that 
we're abiding by the legal terms set forth by all of our dependencies. We need 
to  make sure that our LICENSE.txt and NOTICE.txt list all the licenses as 
appropriate for source and binary artifacts.  In particular, we statically link 
several libraries into libhadoop, libhdfs, pipes, and probably others when 
-Pnative is used.)

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0, 2.7.3, 2.6.5
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> We need to verify that we're abiding by the legal terms set forth by all of 
> our dependencies. We need to  make sure that our LICENSE.txt and NOTICE.txt 
> list all the licenses as appropriate for source and binary artifacts.  In 
> particular, we statically link several libraries like OpenSSL into libhadoop, 
> libhdfs, pipes, and probably others when -Pnative is used but have no 
> reference at all to it's licensing terms.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-03-04 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12893:
--
Description: We need to verify that we're abiding by the legal terms set 
forth by all of our dependencies. We need to  make sure that our LICENSE.txt 
and NOTICE.txt list all the licenses as appropriate for source and binary 
artifacts.  In particular, we statically link several libraries like OpenSSL 
into libhadoop, libhdfs, pipes, and probably others when -Pnative is used but 
have no reference at all to their licensing terms.  (was: We need to verify 
that we're abiding by the legal terms set forth by all of our dependencies. We 
need to  make sure that our LICENSE.txt and NOTICE.txt list all the licenses as 
appropriate for source and binary artifacts.  In particular, we statically link 
several libraries like OpenSSL into libhadoop, libhdfs, pipes, and probably 
others when -Pnative is used but have no reference at all to it's licensing 
terms.)

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0, 2.7.3, 2.6.5
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> We need to verify that we're abiding by the legal terms set forth by all of 
> our dependencies. We need to  make sure that our LICENSE.txt and NOTICE.txt 
> list all the licenses as appropriate for source and binary artifacts.  In 
> particular, we statically link several libraries like OpenSSL into libhadoop, 
> libhdfs, pipes, and probably others when -Pnative is used but have no 
> reference at all to their licensing terms.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-03-04 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12893:
--
Description: We need to verify that we're abiding by the legal terms set 
forth by all of our dependencies. We need to  make sure that our LICENSE.txt 
and NOTICE.txt list all the licenses as appropriate for source and binary 
artifacts.  In particular, we statically link several libraries into libhadoop, 
libhdfs, pipes, and probably others when -Pnative is used.  (was: We need to 
verify that we're abiding by the legal terms set forth by all of our 
dependencies. We make sure that our LICENSE.txt and NOTICE.txt list all 
appropriate licenses as appropriate for source and binary artifacts.  In 
particular, we statically link several libraries into libhadoop, libhdfs, 
pipes, and probably others when -Pnative is used.)

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0, 2.7.3, 2.6.5
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> We need to verify that we're abiding by the legal terms set forth by all of 
> our dependencies. We need to  make sure that our LICENSE.txt and NOTICE.txt 
> list all the licenses as appropriate for source and binary artifacts.  In 
> particular, we statically link several libraries into libhadoop, libhdfs, 
> pipes, and probably others when -Pnative is used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-03-04 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12893:
--
Target Version/s: 2.8.0, 3.0.0, 2.7.3, 2.6.5

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0, 2.7.3, 2.6.5
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> We need to verify that we're abiding by the legal terms set forth by all of 
> our dependencies. We make sure that our LICENSE.txt and NOTICE.txt list all 
> appropriate licenses as appropriate for source and binary artifacts.  In 
> particular, we statically link several libraries into libhadoop, libhdfs, 
> pipes, and probably others when -Pnative is used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-03-04 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181538#comment-15181538
 ] 

Allen Wittenauer commented on HADOOP-12892:
---

I'm removing -Pnative due to HADOOP-12893 and various other issues with library 
incompatibilities. 

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-12892) fix/rewrite create-release

2016-03-04 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12892:
--
Comment: was deleted

(was: I'm removing -Pnative due to HADOOP-12893 and various other issues with 
library incompatibilities. )

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-03-04 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12893:
-

 Summary: Verify LICENSE.txt and NOTICE.txt
 Key: HADOOP-12893
 URL: https://issues.apache.org/jira/browse/HADOOP-12893
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.8.0, 3.0.0, 2.7.3, 2.6.5
Reporter: Allen Wittenauer
Priority: Blocker


We need to verify that we're abiding by the legal terms set forth by all of our 
dependencies. We make sure that our LICENSE.txt and NOTICE.txt list all 
appropriate licenses as appropriate for source and binary artifacts.  In 
particular, we statically link several libraries into libhadoop, libhdfs, 
pipes, and probably others when -Pnative is used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-04 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12717:
---
Priority: Blocker  (was: Critical)

> NPE when trying to rename a directory in Windows Azure Storage FileSystem
> -
>
> Key: HADOOP-12717
> URL: https://issues.apache.org/jira/browse/HADOOP-12717
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Robert Yokota
>Assignee: Robert Yokota
>Priority: Blocker
> Attachments: HADOOP-12717.001.patch, HADOOP-12717.02.patch, 
> HADOOP-12717.03.patch, diff.txt
>
>
> Encountered an NPE when trying to use the HBase utility ExportSnapshot with 
> Azure as the target.  
> It turns out verifyAndConvertToStandardFormat is returning null when 
> determining the hbaseRoot, and this is being added to the atomicRenameDirs 
> set.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isKeyForDirectorySet(AzureNativeFileSystemStore.java:1059)
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isAtomicRenameKey(AzureNativeFileSystemStore.java:1053)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2098)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1996)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:944)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.exportSnapshot(AbstractSnapshotUtil.java:210)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.run(AbstractSnapshotUtil.java:79)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.SnapshotAzureBlobUtil.main(SnapshotAzureBlobUtil.java:85)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181508#comment-15181508
 ] 

Hadoop QA commented on HADOOP-12717:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 14s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 29s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 16s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791600/HADOOP-12717.03.patch 
|
| JIRA Issue | HADOOP-12717 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 136fa363ae95 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2759689 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_74 

[jira] [Updated] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-04 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12717:
---
Attachment: HADOOP-12717.03.patch

> NPE when trying to rename a directory in Windows Azure Storage FileSystem
> -
>
> Key: HADOOP-12717
> URL: https://issues.apache.org/jira/browse/HADOOP-12717
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Robert Yokota
>Assignee: Robert Yokota
>Priority: Critical
> Attachments: HADOOP-12717.001.patch, HADOOP-12717.02.patch, 
> HADOOP-12717.03.patch, diff.txt
>
>
> Encountered an NPE when trying to use the HBase utility ExportSnapshot with 
> Azure as the target.  
> It turns out verifyAndConvertToStandardFormat is returning null when 
> determining the hbaseRoot, and this is being added to the atomicRenameDirs 
> set.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isKeyForDirectorySet(AzureNativeFileSystemStore.java:1059)
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isAtomicRenameKey(AzureNativeFileSystemStore.java:1053)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2098)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1996)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:944)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.exportSnapshot(AbstractSnapshotUtil.java:210)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.run(AbstractSnapshotUtil.java:79)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.SnapshotAzureBlobUtil.main(SnapshotAzureBlobUtil.java:85)
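The failure mode described above (a null key slipping into the atomicRenameDirs set and blowing up later in isKeyForDirectorySet) can be sketched with a minimal stand-alone example. This is a hedged illustration, not the actual Hadoop code: `toStandardKey` is a hypothetical stand-in for `verifyAndConvertToStandardFormat`, and the fix shown (guarding against a null conversion result before adding to the set) is one plausible shape of the patch, assumed rather than taken from the attachments.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of the reported NPE pattern. All names except those
// quoted in the stack trace are hypothetical.
public class AtomicRenameDirs {
    private final Set<String> atomicRenameDirs = new HashSet<>();

    // Hypothetical stand-in for verifyAndConvertToStandardFormat:
    // returns null when the URI is not in the expected scheme,
    // mirroring what the report says happens for the hbase root.
    public static String toStandardKey(String uri) {
        return uri.startsWith("wasb://") ? uri.substring("wasb://".length()) : null;
    }

    public void addAtomicRenameDir(String uri) {
        String key = toStandardKey(uri);
        if (key != null) { // guard: never store a null key in the set
            atomicRenameDirs.add(key);
        }
    }

    public boolean isAtomicRenameKey(String key) {
        // Without the null guard above, a null element here would
        // throw an NPE during the startsWith check, as in the trace.
        for (String dir : atomicRenameDirs) {
            if (key.startsWith(dir)) {
                return true;
            }
        }
        return false;
    }
}
```

With the guard in place, a URI that fails conversion is simply skipped instead of poisoning every later rename.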





[jira] [Commented] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181498#comment-15181498
 ] 

Hadoop QA commented on HADOOP-12717:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 9s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 26s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 17s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 56s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791595/HADOOP-12717.02.patch 
|
| JIRA Issue | HADOOP-12717 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4dbdbf9ff067 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2759689 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  

[jira] [Updated] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-04 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12717:
---
Priority: Critical  (was: Major)

> NPE when trying to rename a directory in Windows Azure Storage FileSystem
> -
>
> Key: HADOOP-12717
> URL: https://issues.apache.org/jira/browse/HADOOP-12717
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Robert Yokota
>Assignee: Robert Yokota
>Priority: Critical
> Attachments: HADOOP-12717.001.patch, HADOOP-12717.02.patch, diff.txt
>
>
> Encountered an NPE when trying to use the HBase utility ExportSnapshot with 
> Azure as the target.  
> It turns out verifyAndConvertToStandardFormat is returning null when 
> determining the hbaseRoot, and this is being added to the atomicRenameDirs 
> set.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isKeyForDirectorySet(AzureNativeFileSystemStore.java:1059)
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isAtomicRenameKey(AzureNativeFileSystemStore.java:1053)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2098)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1996)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:944)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.exportSnapshot(AbstractSnapshotUtil.java:210)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.run(AbstractSnapshotUtil.java:79)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.SnapshotAzureBlobUtil.main(SnapshotAzureBlobUtil.java:85)





[jira] [Commented] (HADOOP-11792) Remove all of the CHANGES.txt files

2016-03-04 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181492#comment-15181492
 ] 

Allen Wittenauer commented on HADOOP-11792:
---

mvn site, due to a bug in doxia's markdown parser, will go into an infinite loop 
on unmatched _'s. Yetus 0.1.0 plus the release note contents worked around it in 
such a way that broke GitHub rendering. Yetus 0.2.0 plus the release note 
contents are now real markdown, so GitHub rendering works. That problem will go 
away as soon as Yetus 0.2.0 is released (the vote closes in a few days) and 
Hadoop is configured to use that version by default. In the meantime, you can 
unpack the Yetus 0.2.0 RC2 in a dir and point YETUS_HOME to it; that will 
trigger the rdm in that version instead.
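The interim workaround above amounts to a couple of shell steps. This is a rough sketch only: the unpack directory, tarball name, and layout are assumptions, not taken from the thread; adjust them to wherever you actually place the RC.

```shell
# Hypothetical paths: unpack the Yetus 0.2.0 RC2 somewhere and point
# YETUS_HOME at it so the Hadoop build scripts use that version.
YETUS_VERSION="0.2.0"
YETUS_HOME="${HOME}/tools/apache-yetus-${YETUS_VERSION}"
mkdir -p "${YETUS_HOME}"
# (unpack the RC2 tarball into ${YETUS_HOME} here, e.g.
#  tar -xzf apache-yetus-${YETUS_VERSION}.tar.gz -C "${YETUS_HOME}" --strip-components=1)
export YETUS_HOME
echo "${YETUS_HOME}"
```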

> Remove all of the CHANGES.txt files
> ---
>
> Key: HADOOP-11792
> URL: https://issues.apache.org/jira/browse/HADOOP-11792
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Andrew Wang
> Fix For: 2.8.0
>
>
> With the commit of HADOOP-11731, the CHANGES.txt files are now EOLed.  We 
> should remove them.





[jira] [Commented] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-04 Thread Gaurav Kanade (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181490#comment-15181490
 ] 

Gaurav Kanade commented on HADOOP-12717:


[~cnauroth] Added a new patch with a test per the given specs

> NPE when trying to rename a directory in Windows Azure Storage FileSystem
> -
>
> Key: HADOOP-12717
> URL: https://issues.apache.org/jira/browse/HADOOP-12717
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Robert Yokota
>Assignee: Robert Yokota
> Attachments: HADOOP-12717.001.patch, HADOOP-12717.02.patch, diff.txt
>
>
> Encountered an NPE when trying to use the HBase utility ExportSnapshot with 
> Azure as the target.  
> It turns out verifyAndConvertToStandardFormat is returning null when 
> determining the hbaseRoot, and this is being added to the atomicRenameDirs 
> set.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isKeyForDirectorySet(AzureNativeFileSystemStore.java:1059)
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isAtomicRenameKey(AzureNativeFileSystemStore.java:1053)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2098)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1996)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:944)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.exportSnapshot(AbstractSnapshotUtil.java:210)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.run(AbstractSnapshotUtil.java:79)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.SnapshotAzureBlobUtil.main(SnapshotAzureBlobUtil.java:85)





[jira] [Updated] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-04 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12717:
---
Attachment: HADOOP-12717.02.patch

> NPE when trying to rename a directory in Windows Azure Storage FileSystem
> -
>
> Key: HADOOP-12717
> URL: https://issues.apache.org/jira/browse/HADOOP-12717
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Robert Yokota
>Assignee: Robert Yokota
> Attachments: HADOOP-12717.001.patch, HADOOP-12717.02.patch, diff.txt
>
>
> Encountered an NPE when trying to use the HBase utility ExportSnapshot with 
> Azure as the target.  
> It turns out verifyAndConvertToStandardFormat is returning null when 
> determining the hbaseRoot, and this is being added to the atomicRenameDirs 
> set.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isKeyForDirectorySet(AzureNativeFileSystemStore.java:1059)
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isAtomicRenameKey(AzureNativeFileSystemStore.java:1053)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2098)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1996)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:944)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.exportSnapshot(AbstractSnapshotUtil.java:210)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.run(AbstractSnapshotUtil.java:79)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.SnapshotAzureBlobUtil.main(SnapshotAzureBlobUtil.java:85)





[jira] [Commented] (HADOOP-12798) Update changelog and release notes (2016-03-04)

2016-03-04 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181379#comment-15181379
 ] 

Andrew Wang commented on HADOOP-12798:
--

LGTM +1, though as I mentioned on another JIRA, my site build isn't working 
locally. Trusting that you verified this on your end.

> Update changelog and release notes (2016-03-04)
> ---
>
> Key: HADOOP-12798
> URL: https://issues.apache.org/jira/browse/HADOOP-12798
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12798.00.patch, HADOOP-12798.01.patch
>
>
> Added and updated changelog and release notes based upon Yetus 0.2.0-SNAPSHOT





[jira] [Created] (HADOOP-12892) fix/rewrite create-release

2016-03-04 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12892:
-

 Summary: fix/rewrite create-release
 Key: HADOOP-12892
 URL: https://issues.apache.org/jira/browse/HADOOP-12892
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer


create-release needs some major surgery.





[jira] [Commented] (HADOOP-12889) Make kdiag something services can use directly on startup

2016-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181337#comment-15181337
 ] 

Hadoop QA commented on HADOOP-12889:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 6s 
{color} | {color:red} root: patch generated 19 new + 228 unchanged - 3 fixed = 
247 total (was 231) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 53s 
{color} | {color:red} hadoop-common-project/hadoop-common generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 55s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 56s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 10s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 15s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| 

[jira] [Commented] (HADOOP-11792) Remove all of the CHANGES.txt files

2016-03-04 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15181302#comment-15181302
 ] 

Andrew Wang commented on HADOOP-11792:
--

bq. I suspect it's going to be a lot more than that. Would you rather I take a 
crack at it?

That'd be great. Right now my machine hangs when calling "mvn site 
-Preleasedocs", which really puts a damper on my testing efforts. I was about to 
write you an email, but this is the last log line with debug logging on:

{noformat}
[DEBUG] Generating 
/home/andrew/dev/hadoop/trunk/hadoop-common-project/hadoop-common/target/site/release/3.0.0-SNAPSHOT/RELEASENOTES.3.0.0-SNAPSHOT.html
{noformat}

> Remove all of the CHANGES.txt files
> ---
>
> Key: HADOOP-11792
> URL: https://issues.apache.org/jira/browse/HADOOP-11792
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Andrew Wang
> Fix For: 2.8.0
>
>
> With the commit of HADOOP-11731, the CHANGES.txt files are now EOLed.  We 
> should remove them.





[jira] [Commented] (HADOOP-12798) Update changelog and release notes (2016-03-04)

2016-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180764#comment-15180764
 ] 

Hadoop QA commented on HADOOP-12798:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 9m 24s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791546/HADOOP-12798.01.patch 
|
| JIRA Issue | HADOOP-12798 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 908a0023184c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8e08861 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8793/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update changelog and release notes (2016-03-04)
> ---
>
> Key: HADOOP-12798
> URL: https://issues.apache.org/jira/browse/HADOOP-12798
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12798.00.patch, HADOOP-12798.01.patch
>
>
> Added and updated changelog and release notes based upon Yetus 0.2.0-SNAPSHOT





[jira] [Updated] (HADOOP-12891) S3AFileSystem should configure Multipart Copy threshold and chunk size

2016-03-04 Thread Andrew Olson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Olson updated HADOOP-12891:
--
Description: 
In the AWS S3 Java SDK the defaults for Multipart Copy threshold and chunk size 
are very high [1],

{noformat}
/** Default size threshold for Amazon S3 object after which multi-part copy 
is initiated. */
private static final long DEFAULT_MULTIPART_COPY_THRESHOLD = 5 * GB;

/** Default minimum size of each part for multi-part copy. */
private static final long DEFAULT_MINIMUM_COPY_PART_SIZE = 100 * MB;
{noformat}

In internal testing we have found that a lower but still reasonable threshold 
and chunk size can be extremely beneficial. In our case we set both the 
threshold and size to 25 MB with good results.

Amazon enforces a minimum of 5 MB [2].

For the S3A filesystem, file renames are actually implemented via a remote copy 
request, which is already quite slow compared to a rename on HDFS. This very 
high threshold for utilizing the multipart functionality can make the 
performance considerably worse, particularly for files in the 100MB to 5GB 
range which is fairly typical for mapreduce job outputs.

Two apparent options are:

1) Use the same configuration ({{fs.s3a.multipart.threshold}}, 
{{fs.s3a.multipart.size}}) for both. This seems preferable as the accompanying 
documentation [3] for these configuration properties actually already says that 
they are applicable for either "uploads or copies". We just need to add in the 
missing {{TransferManagerConfiguration#setMultipartCopyThreshold}} [4] and 
{{TransferManagerConfiguration#setMultipartCopyPartSize}} [5] calls at [6] like:

{noformat}
/* Handle copies in the same way as uploads. */
transferConfiguration.setMultipartCopyPartSize(partSize);
transferConfiguration.setMultipartCopyThreshold(multiPartThreshold);
{noformat}

2) Add two new configuration properties so that the copy threshold and part 
size can be independently configured, maybe change the defaults to be lower 
than Amazon's, set into {{TransferManagerConfiguration}} in the same way.

[1] 
https://github.com/aws/aws-sdk-java/blob/1.10.58/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.java#L36-L40
[2] http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
[3] 
https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A
[4] 
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyThreshold(long)
[5] 
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyPartSize(long)
[6] 
https://github.com/apache/hadoop/blob/release-2.7.2-RC2/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L286

  was:
In the AWS S3 Java SDK the defaults for Multipart Copy threshold and chunk size 
are very high [1],

{noformat}
/** Default size threshold for Amazon S3 object after which multi-part copy is initiated. */
private static final long DEFAULT_MULTIPART_COPY_THRESHOLD = 5 * GB;

/** Default minimum size of each part for multi-part copy. */
private static final long DEFAULT_MINIMUM_COPY_PART_SIZE = 100 * MB;
{noformat}

In internal testing we have found that a lower but still reasonable threshold 
and chunk size can be extremely beneficial. In our case we set both the 
threshold and size to 25 MB with good results.

Amazon enforces a minimum of 5 MB [2].

For the S3A filesystem, file renames are actually implemented via a remote copy 
request, which is already quite slow compared to a rename on HDFS. This very 
high threshold for utilizing the multipart functionality can make the 
performance considerably worse, particularly for files in the 100 MB to 5 GB 
range, which is fairly typical for MapReduce job outputs.

Two apparent options are:

1) Use the same configuration (fs.s3a.multipart.threshold, 
fs.s3a.multipart.size) for both. This seems preferable as the accompanying 
documentation [3] for these configuration properties actually already says that 
they are applicable for either "uploads or copies". We just need to add in the 
missing TransferManagerConfiguration#setMultipartCopyThreshold [4] and 
TransferManagerConfiguration#setMultipartCopyPartSize [5] calls at [6] like:

{noformat}
/* Handle copies in the same way as uploads. */
transferConfiguration.setMultipartCopyPartSize(partSize);
transferConfiguration.setMultipartCopyThreshold(multiPartThreshold);
{noformat}

2) Add two new configuration properties so that the copy threshold and part 
size can be independently configured, maybe change the defaults to be lower 
than Amazon's, set into TransferManagerConfiguration in the same way.

[1] 

[jira] [Created] (HADOOP-12891) S3AFileSystem should configure Multipart Copy threshold and chunk size

2016-03-04 Thread Andrew Olson (JIRA)
Andrew Olson created HADOOP-12891:
-

 Summary: S3AFileSystem should configure Multipart Copy threshold 
and chunk size
 Key: HADOOP-12891
 URL: https://issues.apache.org/jira/browse/HADOOP-12891
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Andrew Olson


In the AWS S3 Java SDK the defaults for Multipart Copy threshold and chunk size 
are very high [1]:

{noformat}
/** Default size threshold for Amazon S3 object after which multi-part copy is initiated. */
private static final long DEFAULT_MULTIPART_COPY_THRESHOLD = 5 * GB;

/** Default minimum size of each part for multi-part copy. */
private static final long DEFAULT_MINIMUM_COPY_PART_SIZE = 100 * MB;
{noformat}

In internal testing we have found that a lower but still reasonable threshold 
and chunk size can be extremely beneficial. In our case we set both the 
threshold and size to 25 MB with good results.

Amazon enforces a minimum of 5 MB [2].

For the S3A filesystem, file renames are actually implemented via a remote copy 
request, which is already quite slow compared to a rename on HDFS. This very 
high threshold for utilizing the multipart functionality can make the 
performance considerably worse, particularly for files in the 100 MB to 5 GB 
range, which is fairly typical for MapReduce job outputs.

Two apparent options are:

1) Use the same configuration (fs.s3a.multipart.threshold, 
fs.s3a.multipart.size) for both. This seems preferable as the accompanying 
documentation [3] for these configuration properties actually already says that 
they are applicable for either "uploads or copies". We just need to add in the 
missing TransferManagerConfiguration#setMultipartCopyThreshold [4] and 
TransferManagerConfiguration#setMultipartCopyPartSize [5] calls at [6] like:

{noformat}
/* Handle copies in the same way as uploads. */
transferConfiguration.setMultipartCopyPartSize(partSize);
transferConfiguration.setMultipartCopyThreshold(multiPartThreshold);
{noformat}

2) Add two new configuration properties so that the copy threshold and part 
size can be independently configured, maybe change the defaults to be lower 
than Amazon's, set into TransferManagerConfiguration in the same way.

[1] 
https://github.com/aws/aws-sdk-java/blob/1.10.58/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.java#L36-L40
[2] http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
[3] 
https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A
[4] 
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyThreshold(long)
[5] 
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManagerConfiguration.html#setMultipartCopyPartSize(long)
[6] 
https://github.com/apache/hadoop/blob/release-2.7.2-RC2/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L286



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11792) Remove all of the CHANGES.txt files

2016-03-04 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180671#comment-15180671
 ] 

Allen Wittenauer commented on HADOOP-11792:
---

bq. I'm going to fix the release script, I think it's basically just adding 
"-Preleasedocs"? Will do some before-and-afters to compare.

I suspect it's going to be a lot more than that.  Would you rather I take a 
crack at it?

bq. Cleaning up the releasedocmaker lint errors will be a labor of love. I did 
some 3.0.0 cleanup earlier this week, but did not tackle the branch-2 releases. 
Ultimately this falls on the 2.8/2.9 RMs, but anyone with JIRA permissions can 
help out with this.

I did a big pass through the 5 years of 3.x issues and all of the 2.x issues 
late last year, with some touch up a month or so ago.  They should be in pretty 
good shape really.

> Remove all of the CHANGES.txt files
> ---
>
> Key: HADOOP-11792
> URL: https://issues.apache.org/jira/browse/HADOOP-11792
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Andrew Wang
> Fix For: 2.8.0
>
>
> With the commit of HADOOP-11731, the CHANGES.txt files are now EOLed.  We 
> should remove them.





[jira] [Updated] (HADOOP-12798) Update changelog and release notes (2016-03-04)

2016-03-04 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12798:
--
Attachment: HADOOP-12798.01.patch

-01:
* rebased, updated

> Update changelog and release notes (2016-03-04)
> ---
>
> Key: HADOOP-12798
> URL: https://issues.apache.org/jira/browse/HADOOP-12798
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12798.00.patch, HADOOP-12798.01.patch
>
>
> Added and updated changelog and release notes based upon Yetus 0.2.0-SNAPSHOT





[jira] [Updated] (HADOOP-12798) Update changelog and release notes (2016-03-04)

2016-03-04 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12798:
--
Summary: Update changelog and release notes (2016-03-04)  (was: Update 
changelog and release notes (2016-02-12))

> Update changelog and release notes (2016-03-04)
> ---
>
> Key: HADOOP-12798
> URL: https://issues.apache.org/jira/browse/HADOOP-12798
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12798.00.patch
>
>
> Added and updated changelog and release notes based upon Yetus 0.2.0-SNAPSHOT





[jira] [Updated] (HADOOP-12889) Make kdiag something services can use directly on startup

2016-03-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12889:

Status: Patch Available  (was: In Progress)

> Make kdiag something services can use directly on startup
> -
>
> Key: HADOOP-12889
> URL: https://issues.apache.org/jira/browse/HADOOP-12889
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12889-001.patch
>
>
> I want the ability to start kdiag as a service launches, without doing 
> anything with side-effects other than usual UGI Init (that is: no keytab 
> login), and hook this up so that services can start it. Then add an option 
> for the YARN and HDFS services to do this on launch (Default: off)





[jira] [Updated] (HADOOP-12889) Make kdiag something services can use directly on startup

2016-03-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12889:

Attachment: HADOOP-12889-001.patch

Patch 001; PoC. 

Sets up a binding, runs KDiag before doing the normal UGI login process, dumps 
the UGI details afterwards.
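
The shape of that hook can be sketched generically. Everything below is hypothetical (not Hadoop's actual KDiag or service-launcher API); it shows only the intended control flow: an opt-in diagnostic probe that runs, and can fail fast, before the normal login step.

```java
public class StartupDiagSketch {
    /** Hypothetical stand-in for a KDiag-style diagnostics run. */
    interface Diagnostics {
        boolean probe();
    }

    /**
     * Run diagnostics (if enabled) before the usual init. Returns false
     * when the probe fails, so the service can abort before any login.
     */
    static boolean startService(boolean diagEnabled, Diagnostics diag) {
        if (diagEnabled && !diag.probe()) {
            return false;  // fail fast before attempting any login
        }
        // ... the usual UGI initialization would follow here ...
        return true;
    }

    public static void main(String[] args) {
        System.out.println(startService(false, () -> false)); // true: diag off
        System.out.println(startService(true, () -> true));   // true: probe ok
        System.out.println(startService(true, () -> false));  // false: probe failed
    }
}
```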

> Make kdiag something services can use directly on startup
> -
>
> Key: HADOOP-12889
> URL: https://issues.apache.org/jira/browse/HADOOP-12889
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12889-001.patch
>
>
> I want the ability to start kdiag as a service launches, without doing 
> anything with side-effects other than usual UGI Init (that is: no keytab 
> login), and hook this up so that services can start it. Then add an option 
> for the YARN and HDFS services to do this on launch (Default: off)





[jira] [Commented] (HADOOP-12738) Create unit test to automatically compare Common related classes and core-default.xml

2016-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180511#comment-15180511
 ] 

Hadoop QA commented on HADOOP-12738:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 35s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 56s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 15s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 45s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791523/HADOOP-12738.002.patch
 |
| JIRA Issue | HADOOP-12738 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 92951d995991 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3e8099a |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  

[jira] [Commented] (HADOOP-12798) Update changelog and release notes (2016-02-12)

2016-03-04 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180480#comment-15180480
 ] 

Allen Wittenauer commented on HADOOP-12798:
---

It needs a rebase and likely an update for the 2.7.x and 2.6.x releases that 
have happened since I ran this last.

> Update changelog and release notes (2016-02-12)
> ---
>
> Key: HADOOP-12798
> URL: https://issues.apache.org/jira/browse/HADOOP-12798
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12798.00.patch
>
>
> Added and updated changelog and release notes based upon Yetus 0.2.0-SNAPSHOT





[jira] [Commented] (HADOOP-11212) NetUtils.wrapException to handle SocketException explicitly

2016-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180474#comment-15180474
 ] 

Hadoop QA commented on HADOOP-11212:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 35s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 38 unchanged - 1 fixed = 38 total (was 39) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 6s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 16s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 54s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791508/HADOOP-11212-002.patch
 |
| JIRA Issue | HADOOP-11212 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a2026e1089b5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 

[jira] [Updated] (HADOOP-12738) Create unit test to automatically compare Common related classes and core-default.xml

2016-03-04 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-12738:

Attachment: HADOOP-12738.002.patch

- Updates based on trunk updates
- Added properties in new class 
org.apache.hadoop.security.CompositeGroupsMapping
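
The core of such a test is a set difference between the keys declared in the configuration classes and the properties present in core-default.xml. Here is a self-contained sketch with both sets inlined for illustration; the real test would gather them via reflection and by parsing the XML.

```java
import java.util.Set;
import java.util.TreeSet;

public class ConfigKeyDiff {
    /** Keys declared in code but absent from the default XML file. */
    static Set<String> missingFrom(Set<String> declared, Set<String> inXml) {
        Set<String> missing = new TreeSet<>(declared);
        missing.removeAll(inXml);
        return missing;
    }

    public static void main(String[] args) {
        Set<String> declared = Set.of("hadoop.tmp.dir", "io.file.buffer.size",
                "hadoop.security.group.mapping");
        Set<String> inXml = Set.of("hadoop.tmp.dir", "io.file.buffer.size");
        // A non-empty difference in either direction should fail the test.
        System.out.println(missingFrom(declared, inXml)); // [hadoop.security.group.mapping]
    }
}
```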

> Create unit test to automatically compare Common related classes and 
> core-default.xml
> -
>
> Key: HADOOP-12738
> URL: https://issues.apache.org/jira/browse/HADOOP-12738
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 2.7.1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-12738.001.patch, HADOOP-12738.002.patch
>
>
> Create a unit test that will automatically compare the fields in the various 
> Common related classes and core-default.xml. It should throw an error if a 
> property is missing in either the class or the file.





[jira] [Updated] (HADOOP-12890) Typo in AbstractService

2016-03-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12890:
-
Labels: newbie  (was: )

> Typo in AbstractService
> ---
>
> Key: HADOOP-12890
> URL: https://issues.apache.org/jira/browse/HADOOP-12890
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mike Drob
>Priority: Trivial
>  Labels: newbie
>
> https://github.com/apache/hadoop/blob/3e8099a45a4cfd4c5c0e3dce4370514cb2c90da9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/service/AbstractService.java#L316
> wil; -> will





[jira] [Commented] (HADOOP-12651) Replace dev-support with wrappers to Yetus

2016-03-04 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180388#comment-15180388
 ] 

Andrew Wang commented on HADOOP-12651:
--

Thanks for the pointer Allen, let's continue the discussion over on 
HADOOP-12798.

> Replace dev-support with wrappers to Yetus
> --
>
> Key: HADOOP-12651
> URL: https://issues.apache.org/jira/browse/HADOOP-12651
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 2.8.0
>
> Attachments: HADOOP-12651.00.patch, HADOOP-12651.01.patch, 
> HADOOP-12651.02.patch, HADOOP-12651.03.patch, HADOOP-12651.04.patch
>
>
> Now that Yetus has had a release, we should rip out the components that make 
> it up from dev-support and replace them with wrappers.  The wrappers should:
> * default to a sane version
> * allow for version overrides via an env var
> * download into patchprocess
> * execute with the given parameters
> Marking this as an incompatible change, since we should also remove the 
> filename extensions and move these into a bin directory for better 
> maintenance towards the future.





[jira] [Commented] (HADOOP-12798) Update changelog and release notes (2016-02-12)

2016-03-04 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180387#comment-15180387
 ] 

Andrew Wang commented on HADOOP-12798:
--

[~aw], anything left to do here besides giving a +1? I can help with the fix 
version cleanup if that's what's required. I'd like to target this for branch-2 
also, given that HADOOP-12651 made it to branch-2.

> Update changelog and release notes (2016-02-12)
> ---
>
> Key: HADOOP-12798
> URL: https://issues.apache.org/jira/browse/HADOOP-12798
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12798.00.patch
>
>
> Added and updated changelog and release notes based upon Yetus 0.2.0-SNAPSHOT





[jira] [Commented] (HADOOP-12885) An operation of atomic fold rename crashed in Wasb FileSystem

2016-03-04 Thread Gaurav Kanade (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180386#comment-15180386
 ] 

Gaurav Kanade commented on HADOOP-12885:


I agree; [~madhuch-ms], since you worked on this specifically in the latest 
instance, does this look like a case that can happen and that we might have 
missed?

Also, is it possible this might be a regression due to the latest fixes?
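
Whatever the root cause in the listing, a defensive mitigation would be to de-duplicate the pending file list, order-preserving, before executing the rename. A minimal sketch (hypothetical helper, not the actual NativeAzureFileSystem code):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class RenamePendingDedup {
    /** Remove duplicate paths while preserving first-seen order. */
    static List<String> dedup(List<String> files) {
        return new ArrayList<>(new LinkedHashSet<>(files));
    }

    public static void main(String[] args) {
        // Mirrors the shape of the report: ".regioninfo" listed twice.
        List<String> files = List.of("r/.regioninfo", "r/C", "r/.regioninfo");
        System.out.println(dedup(files)); // [r/.regioninfo, r/C]
    }
}
```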

> An operation of atomic fold rename crashed in Wasb FileSystem
> -
>
> Key: HADOOP-12885
> URL: https://issues.apache.org/jira/browse/HADOOP-12885
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Liu Shaohui
>Priority: Critical
>
> An operation of atomic fold rename crashed in Wasb FileSystem
> {code}
> org.apache.hadoop.fs.azure.AzureException: Source blob 
> hbase/azurtst-xiaomi/data/default/YCSBTest/5f882f5492c90b4c03a26561a2ee0a96/.regioninfo
>  does not exist.
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.rename(AzureNativeFileSystemStore.java:2405)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.execute(NativeAzureFileSystem.java:413)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1997)
> {code}
> The problem is that there are duplicated files in the RenamePending.json. 
> {code}
> "5f882f5492c90b4c03a26561a2ee0a96",   
>   
> "5f882f5492c90b4c03a26561a2ee0a96\/.regioninfo",  
>   
>   
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/8a2c08db432d447d9e0ed5266940b25e",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/9425c621073e41df9430e88f0ef61c01",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/f9fc55a94fa34efbb2d26be77c76187c",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/.regioninfo",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/.tmp", 
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C",
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/8a2c08db432d447d9e0ed5266940b25e",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/9425c621073e41df9430e88f0ef61c01",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/f9fc55a94fa34efbb2d26be77c76187c",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/recovered.edits", 
> {code}
> Maybe there is a bug in the listing of all the files in the folder in Wasb. 
> Any suggestions?





[jira] [Commented] (HADOOP-11792) Remove all of the CHANGES.txt files

2016-03-04 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180383#comment-15180383
 ] 

Andrew Wang commented on HADOOP-11792:
--

I'm going to fix the release script, I think it's basically just adding 
"-Preleasedocs"? Will do some before-and-afters to compare.

Cleaning up the releasedocmaker lint errors will be a labor of love. I did some 
3.0.0 cleanup earlier this week, but did not tackle the branch-2 releases. 
Ultimately this falls on the 2.8/2.9 RMs, but anyone with JIRA permissions can 
help out with this.

> Remove all of the CHANGES.txt files
> ---
>
> Key: HADOOP-11792
> URL: https://issues.apache.org/jira/browse/HADOOP-11792
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Andrew Wang
> Fix For: 2.8.0
>
>
> With the commit of HADOOP-11731, the CHANGES.txt files are now EOLed.  We 
> should remove them.





[jira] [Commented] (HADOOP-12885) An operation of atomic fold rename crashed in Wasb FileSystem

2016-03-04 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180372#comment-15180372
 ] 

Chris Nauroth commented on HADOOP-12885:


[~liushaohui], interesting...  I think you're right that this one is different. 
 I have not seen cases of duplicates showing up in the rename pending list like 
you showed in the description.  Cc [~gouravk], [~madhuch-ms], [~onpduo], 
[~dchickabasapa] to check if any of them have seen something like this.

> An operation of atomic fold rename crashed in Wasb FileSystem
> -
>
> Key: HADOOP-12885
> URL: https://issues.apache.org/jira/browse/HADOOP-12885
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Liu Shaohui
>Priority: Critical
>
> An operation of atomic fold rename crashed in Wasb FileSystem
> {code}
> org.apache.hadoop.fs.azure.AzureException: Source blob 
> hbase/azurtst-xiaomi/data/default/YCSBTest/5f882f5492c90b4c03a26561a2ee0a96/.regioninfo
>  does not exist.
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.rename(AzureNativeFileSystemStore.java:2405)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.execute(NativeAzureFileSystem.java:413)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1997)
> {code}
> The problem is that there are duplicated files in the RenamePending.json. 
> {code}
> "5f882f5492c90b4c03a26561a2ee0a96",   
>   
> "5f882f5492c90b4c03a26561a2ee0a96\/.regioninfo",  
>   
>   
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/8a2c08db432d447d9e0ed5266940b25e",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/9425c621073e41df9430e88f0ef61c01",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/f9fc55a94fa34efbb2d26be77c76187c",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/.regioninfo",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/.tmp", 
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C",
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/8a2c08db432d447d9e0ed5266940b25e",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/9425c621073e41df9430e88f0ef61c01",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/C\/f9fc55a94fa34efbb2d26be77c76187c",  
>
> "5f882f5492c90b4c03a26561a2ee0a96\/recovered.edits", 
> {code}
> Maybe there is a bug in the listing of all the files in the folder in Wasb. 
> Any suggestions?





[jira] [Created] (HADOOP-12890) Typo in AbstractService

2016-03-04 Thread Mike Drob (JIRA)
Mike Drob created HADOOP-12890:
--

 Summary: Typo in AbstractService
 Key: HADOOP-12890
 URL: https://issues.apache.org/jira/browse/HADOOP-12890
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mike Drob
Priority: Trivial


https://github.com/apache/hadoop/blob/3e8099a45a4cfd4c5c0e3dce4370514cb2c90da9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/service/AbstractService.java#L316

wil; -> will





[jira] [Created] (HADOOP-12889) Make kdiag something services can use directly on startup

2016-03-04 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12889:
---

 Summary: Make kdiag something services can use directly on startup
 Key: HADOOP-12889
 URL: https://issues.apache.org/jira/browse/HADOOP-12889
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran


I want the ability to start kdiag as a service launches, without doing anything 
with side effects beyond the usual UGI init (that is: no keytab login), and to 
hook this up so that services can start it. Then add an option for the YARN and 
HDFS services to do this on launch (default: off).





[jira] [Work started] (HADOOP-12889) Make kdiag something services can use directly on startup

2016-03-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-12889 started by Steve Loughran.
---
> Make kdiag something services can use directly on startup
> -
>
> Key: HADOOP-12889
> URL: https://issues.apache.org/jira/browse/HADOOP-12889
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> I want the ability to start kdiag as a service launches, without doing 
> anything with side effects beyond the usual UGI init (that is: no keytab 
> login), and to hook this up so that services can start it. Then add an option 
> for the YARN and HDFS services to do this on launch (default: off).





[jira] [Updated] (HADOOP-11212) NetUtils.wrapException to handle SocketException explicitly

2016-03-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11212:

Target Version/s: 2.9.0
  Status: Patch Available  (was: Open)

> NetUtils.wrapException to handle SocketException explicitly
> ---
>
> Key: HADOOP-11212
> URL: https://issues.apache.org/jira/browse/HADOOP-11212
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-11212-001.patch, HADOOP-11212-002.patch
>
>
> the {{NetUtils.wrapException()}} method doesn't catch {{SocketException}}, so 
> it is wrapped in a generic IOE; this loses information and stops any extra 
> diagnostics/wiki links from being added.
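The shape of the proposed improvement can be sketched as follows. This is a hypothetical helper, not the actual NetUtils code: the point is that a SocketException stays typed as a SocketException, so callers and log readers keep the more specific type, while other IOEs get the usual generic wrap:

```java
import java.io.IOException;
import java.net.SocketException;

public class WrapDemo {

  /**
   * Add destination details to an exception message while keeping a
   * SocketException typed as SocketException instead of a generic IOE.
   */
  public static IOException wrap(String host, int port, IOException e) {
    String msg = e.getMessage() + "; destination host: " + host + ":" + port;
    if (e instanceof SocketException) {
      SocketException wrapped = new SocketException(msg);
      wrapped.initCause(e);  // keep the original stack trace reachable
      return wrapped;
    }
    return new IOException(msg, e);
  }

  public static void main(String[] args) {
    IOException out = wrap("nn1.example.com", 8020,
        new SocketException("Connection reset"));
    System.out.println(out.getClass().getSimpleName()); // SocketException
  }
}
```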





[jira] [Updated] (HADOOP-11212) NetUtils.wrapException to handle SocketException explicitly

2016-03-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11212:

Attachment: HADOOP-11212-002.patch

# Adds that test.
# Cleans up some of the indentation of multiline arguments that were way off to 
the right of the IDE window.
# Hardens the assert checking of one test to use the {{assertInException}} 
assertion.


> NetUtils.wrapException to handle SocketException explicitly
> ---
>
> Key: HADOOP-11212
> URL: https://issues.apache.org/jira/browse/HADOOP-11212
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-11212-001.patch, HADOOP-11212-002.patch
>
>
> the {{NetUtils.wrapException()}} method doesn't catch {{SocketException}}, so 
> it is wrapped in a generic IOE; this loses information and stops any extra 
> diagnostics/wiki links from being added.





[jira] [Updated] (HADOOP-11212) NetUtils.wrapException to handle SocketException explicitly

2016-03-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11212:

Status: Open  (was: Patch Available)

> NetUtils.wrapException to handle SocketException explicitly
> ---
>
> Key: HADOOP-11212
> URL: https://issues.apache.org/jira/browse/HADOOP-11212
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-11212-001.patch
>
>
> the {{NetUtils.wrapException()}} method doesn't catch {{SocketException}}, so 
> it is wrapped in a generic IOE; this loses information and stops any extra 
> diagnostics/wiki links from being added.





[jira] [Commented] (HADOOP-12888) HDFS client requires compromising permission when running under JVM security manager

2016-03-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180303#comment-15180303
 ] 

Steve Loughran commented on HADOOP-12888:
-

...afraid you'll have to look at the checkstyle issues. Don't worry about the 
test and whitespace complaints.

> HDFS client requires compromising permission when running under JVM security 
> manager
> 
>
> Key: HADOOP-12888
> URL: https://issues.apache.org/jira/browse/HADOOP-12888
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: Linux
>Reporter: Costin Leau
>Assignee: Costin Leau
> Attachments: HADOOP-12888-001.patch
>
>
> HDFS _client_ requires dangerous permission, in particular _execute_ on _all 
> files_ despite only trying to connect to an HDFS cluster.
> A full list (for both Hadoop 1 and 2) is available here along with the place 
> in code where they occur.
> While it is understandable for some permissions to be used, requiring 
> {{FilePermission <> execute}} to simply initialize a class field 
> [Shell|https://github.com/apache/hadoop/blob/0fa54d45b1cf8a29f089f64d24f35bd221b4803f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java#L728]
>  which in the end is not used (since it's just a client) simply *compromises* 
> the entire security system.
> To make matters worse, the code is executed to initialize a field, so if the 
> permission is not granted, the VM fails with an {{InitializationError}}, 
> which is unrecoverable.
> Ironically enough, on Windows this problem does not appear since the code 
> simply bypasses it and initializes the field with a fall back value 
> ({{false}}).
> A quick fix would be to simply take into account that the JVM 
> {{SecurityManager}} might be active and the permission not granted or that 
> the external process fails and use a fall back value.
> A proper, long-term fix would be to minimize the use of permissions in the 
> HDFS client, since they are simply not required. A client should be as light 
> as possible and not have the server's requirements leaked onto it.
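The "quick fix" described in the report can be sketched like this. It is illustrative only (the real Shell field and helper names may differ, and the probe here is a stand-in): the value is computed in a helper that absorbs a SecurityManager denial and returns a safe fallback, so class initialization cannot fail:

```java
public class GuardedInit {

  // Initialized through a helper so that a SecurityManager denial cannot
  // abort class loading with an unrecoverable InitializationError.
  static final boolean BASH_SUPPORTED = checkBashSupported();

  private static boolean checkBashSupported() {
    try {
      return probe();
    } catch (SecurityException e) {
      // Permission not granted: fall back to false, as the Windows
      // code path already does.
      return false;
    }
  }

  // Stand-in for the real probe, which would exec a shell command and
  // therefore needs the "execute" FilePermission.
  private static boolean probe() {
    throw new SecurityException("execute permission not granted");
  }

  public static void main(String[] args) {
    System.out.println(BASH_SUPPORTED); // false: the denial was absorbed
  }
}
```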





[jira] [Commented] (HADOOP-12888) HDFS client requires compromising permission when running under JVM security manager

2016-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180261#comment-15180261
 ] 

Hadoop QA commented on HADOOP-12888:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 43s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 3 
new + 37 unchanged - 0 fixed = 40 total (was 37) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 41s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 3s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 2s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791495/HADOOP-12888-001.patch
 |
| JIRA Issue | HADOOP-12888 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 

[jira] [Commented] (HADOOP-12832) Implement unix-like 'FsShell -touch'

2016-03-04 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180217#comment-15180217
 ] 

John Zhuge commented on HADOOP-12832:
-

Sure [~jira.shegalov].

> Implement unix-like 'FsShell -touch' 
> -
>
> Key: HADOOP-12832
> URL: https://issues.apache.org/jira/browse/HADOOP-12832
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>Assignee: John Zhuge
>
> We needed to touch a bunch of files, as in 
> https://en.wikipedia.org/wiki/Touch_(Unix).
> Because FsShell does not expose FileSystem#setTimes, we had to do it 
> programmatically in the Scalding REPL. It seems like it should not be this 
> complicated.
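As a rough illustration of the requested semantics, here is a plain Java NIO sketch against the local filesystem (not the actual FsShell code; the real command would go through Hadoop's FileSystem#setTimes): create the file if absent, otherwise bump its modification time:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

public class TouchDemo {

  /** Unix-style touch: create the file if absent, else update its mtime. */
  public static void touch(Path p) throws IOException {
    if (Files.notExists(p)) {
      Files.createFile(p);
    } else {
      Files.setLastModifiedTime(p,
          FileTime.fromMillis(System.currentTimeMillis()));
    }
  }

  public static void main(String[] args) throws IOException {
    Path p = Files.createTempDirectory("touch-demo").resolve("f.txt");
    touch(p);  // first call creates the file
    touch(p);  // second call only updates the timestamp
    System.out.println(Files.exists(p)); // true
  }
}
```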





[jira] [Commented] (HADOOP-12888) HDFS client requires compromising permission when running under JVM security manager

2016-03-04 Thread Costin Leau (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180142#comment-15180142
 ] 

Costin Leau commented on HADOOP-12888:
--

Thanks. Looks like my previous comment was lost; I've improved the patch to 
take care of the {{isBashSupported}} field as well (through 
{{checkIsBashSupported}}).

Cheers

> HDFS client requires compromising permission when running under JVM security 
> manager
> 
>
> Key: HADOOP-12888
> URL: https://issues.apache.org/jira/browse/HADOOP-12888
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: Linux
>Reporter: Costin Leau
>Assignee: Costin Leau
> Attachments: HADOOP-12888-001.patch
>
>
> HDFS _client_ requires dangerous permission, in particular _execute_ on _all 
> files_ despite only trying to connect to an HDFS cluster.
> A full list (for both Hadoop 1 and 2) is available here along with the place 
> in code where they occur.
> While it is understandable for some permissions to be used, requiring 
> {{FilePermission <> execute}} to simply initialize a class field 
> [Shell|https://github.com/apache/hadoop/blob/0fa54d45b1cf8a29f089f64d24f35bd221b4803f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java#L728]
>  which in the end is not used (since it's just a client) simply *compromises* 
> the entire security system.
> To make matters worse, the code is executed to initialize a field, so if the 
> permission is not granted, the VM fails with an {{InitializationError}}, 
> which is unrecoverable.
> Ironically enough, on Windows this problem does not appear since the code 
> simply bypasses it and initializes the field with a fall back value 
> ({{false}}).
> A quick fix would be to simply take into account that the JVM 
> {{SecurityManager}} might be active and the permission not granted or that 
> the external process fails and use a fall back value.
> A proper, long-term fix would be to minimize the use of permissions in the 
> HDFS client, since they are simply not required. A client should be as light 
> as possible and not have the server's requirements leaked onto it.





[jira] [Updated] (HADOOP-12888) HDFS client requires compromising permission when running under JVM security manager

2016-03-04 Thread Costin Leau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Costin Leau updated HADOOP-12888:
-
Attachment: HADOOP-12888-001.patch

> HDFS client requires compromising permission when running under JVM security 
> manager
> 
>
> Key: HADOOP-12888
> URL: https://issues.apache.org/jira/browse/HADOOP-12888
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: Linux
>Reporter: Costin Leau
>Assignee: Costin Leau
> Attachments: HADOOP-12888-001.patch
>
>
> HDFS _client_ requires dangerous permission, in particular _execute_ on _all 
> files_ despite only trying to connect to an HDFS cluster.
> A full list (for both Hadoop 1 and 2) is available here along with the place 
> in code where they occur.
> While it is understandable for some permissions to be used, requiring 
> {{FilePermission <> execute}} to simply initialize a class field 
> [Shell|https://github.com/apache/hadoop/blob/0fa54d45b1cf8a29f089f64d24f35bd221b4803f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java#L728]
>  which in the end is not used (since it's just a client) simply *compromises* 
> the entire security system.
> To make matters worse, the code is executed to initialize a field, so if the 
> permission is not granted, the VM fails with an {{InitializationError}}, 
> which is unrecoverable.
> Ironically enough, on Windows this problem does not appear since the code 
> simply bypasses it and initializes the field with a fall back value 
> ({{false}}).
> A quick fix would be to simply take into account that the JVM 
> {{SecurityManager}} might be active and the permission not granted or that 
> the external process fails and use a fall back value.
> A proper, long-term fix would be to minimize the use of permissions in the 
> HDFS client, since they are simply not required. A client should be as light 
> as possible and not have the server's requirements leaked onto it.





[jira] [Updated] (HADOOP-12888) HDFS client requires compromising permission when running under JVM security manager

2016-03-04 Thread Costin Leau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Costin Leau updated HADOOP-12888:
-
Status: Patch Available  (was: Open)

> HDFS client requires compromising permission when running under JVM security 
> manager
> 
>
> Key: HADOOP-12888
> URL: https://issues.apache.org/jira/browse/HADOOP-12888
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: Linux
>Reporter: Costin Leau
>Assignee: Costin Leau
>
> HDFS _client_ requires dangerous permission, in particular _execute_ on _all 
> files_ despite only trying to connect to an HDFS cluster.
> A full list (for both Hadoop 1 and 2) is available here along with the place 
> in code where they occur.
> While it is understandable for some permissions to be used, requiring 
> {{FilePermission <> execute}} to simply initialize a class field 
> [Shell|https://github.com/apache/hadoop/blob/0fa54d45b1cf8a29f089f64d24f35bd221b4803f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java#L728]
>  which in the end is not used (since it's just a client) simply *compromises* 
> the entire security system.
> To make matters worse, the code is executed to initialize a field, so if the 
> permission is not granted, the VM fails with an {{InitializationError}}, 
> which is unrecoverable.
> Ironically enough, on Windows this problem does not appear since the code 
> simply bypasses it and initializes the field with a fall back value 
> ({{false}}).
> A quick fix would be to simply take into account that the JVM 
> {{SecurityManager}} might be active and the permission not granted or that 
> the external process fails and use a fall back value.
> A proper, long-term fix would be to minimize the use of permissions in the 
> HDFS client, since they are simply not required. A client should be as light 
> as possible and not have the server's requirements leaked onto it.





[jira] [Commented] (HADOOP-12101) Add automatic search of default Configuration variables to TestConfigurationFieldsBase

2016-03-04 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180108#comment-15180108
 ] 

Ray Chiang commented on HADOOP-12101:
-

RE: Failed unit test

Test passes in my tree.

> Add automatic search of default Configuration variables to 
> TestConfigurationFieldsBase
> --
>
> Key: HADOOP-12101
> URL: https://issues.apache.org/jira/browse/HADOOP-12101
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: supportability
> Attachments: HADOOP-12101.001.patch, HADOOP-12101.002.patch, 
> HADOOP-12101.003.patch, HADOOP-12101.004.patch, HADOOP-12101.005.patch, 
> HADOOP-12101.006.patch, HADOOP-12101.007.patch, HADOOP-12101.008.patch, 
> HADOOP-12101.009.patch
>
>
> Add functionality given a Configuration variable FOO, to at least check the 
> xml file value against DEFAULT_FOO.
> Without waivers and a mapping for exceptions, this can probably never be a 
> test method that generates actual errors.





[jira] [Updated] (HADOOP-12888) HDFS client requires compromising permission when running under JVM security manager

2016-03-04 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-12888:

Assignee: Costin Leau

> HDFS client requires compromising permission when running under JVM security 
> manager
> 
>
> Key: HADOOP-12888
> URL: https://issues.apache.org/jira/browse/HADOOP-12888
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: Linux
>Reporter: Costin Leau
>Assignee: Costin Leau
>
> HDFS _client_ requires dangerous permission, in particular _execute_ on _all 
> files_ despite only trying to connect to an HDFS cluster.
> A full list (for both Hadoop 1 and 2) is available here along with the place 
> in code where they occur.
> While it is understandable for some permissions to be used, requiring 
> {{FilePermission <> execute}} to simply initialize a class field 
> [Shell|https://github.com/apache/hadoop/blob/0fa54d45b1cf8a29f089f64d24f35bd221b4803f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java#L728]
>  which in the end is not used (since it's just a client) simply *compromises* 
> the entire security system.
> To make matters worse, the code is executed to initialize a field, so if the 
> permission is not granted, the VM fails with an {{InitializationError}}, 
> which is unrecoverable.
> Ironically enough, on Windows this problem does not appear since the code 
> simply bypasses it and initializes the field with a fall back value 
> ({{false}}).
> A quick fix would be to simply take into account that the JVM 
> {{SecurityManager}} might be active and the permission not granted or that 
> the external process fails and use a fall back value.
> A proper, long-term fix would be to minimize the use of permissions in the 
> HDFS client, since they are simply not required. A client should be as light 
> as possible and not have the server's requirements leaked onto it.





[jira] [Commented] (HADOOP-11792) Remove all of the CHANGES.txt files

2016-03-04 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179995#comment-15179995
 ] 

Allen Wittenauer commented on HADOOP-11792:
---

rdm's lint mode also tags multiple versions as broken. Unfortunately, Hadoop 
has a ton of committers and PMC members who really have no idea how to properly 
close things in JIRA. :(

> Remove all of the CHANGES.txt files
> ---
>
> Key: HADOOP-11792
> URL: https://issues.apache.org/jira/browse/HADOOP-11792
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Andrew Wang
> Fix For: 2.8.0
>
>
> With the commit of HADOOP-11731, the CHANGES.txt files are now EOLed.  We 
> should remove them.





[jira] [Commented] (HADOOP-12888) HDFS client requires compromising permission when running under JVM security manager

2016-03-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179949#comment-15179949
 ] 

Steve Loughran commented on HADOOP-12888:
-

Attach the patch to this JIRA, use a name of the form HADOOP-12888-001.patch, 
and hit the "submit patch" button; that will trigger an automatic review. 
Yetus will complain about the lack of tests, but we'll have to go with that. 
Did it work for you in any manual tests?

Moving the issue from HDFS to Hadoop as it's in the common module.

> HDFS client requires compromising permission when running under JVM security 
> manager
> 
>
> Key: HADOOP-12888
> URL: https://issues.apache.org/jira/browse/HADOOP-12888
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: Linux
>Reporter: Costin Leau
>
> HDFS _client_ requires dangerous permission, in particular _execute_ on _all 
> files_ despite only trying to connect to an HDFS cluster.
> A full list (for both Hadoop 1 and 2) is available here along with the place 
> in code where they occur.
> While it is understandable for some permissions to be used, requiring 
> {{FilePermission <> execute}} to simply initialize a class field 
> [Shell|https://github.com/apache/hadoop/blob/0fa54d45b1cf8a29f089f64d24f35bd221b4803f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java#L728]
>  which in the end is not used (since it's just a client) simply *compromises* 
> the entire security system.
> To make matters worse, the code is executed to initialize a field, so if the 
> permission is not granted, the VM fails with an {{InitializationError}}, 
> which is unrecoverable.
> Ironically enough, on Windows this problem does not appear since the code 
> simply bypasses it and initializes the field with a fall back value 
> ({{false}}).
> A quick fix would be to simply take into account that the JVM 
> {{SecurityManager}} might be active and the permission not granted or that 
> the external process fails and use a fall back value.
> A proper, long-term fix would be to minimize the use of permissions in the 
> HDFS client, since they are simply not required. A client should be as light 
> as possible and not have the server's requirements leaked onto it.





[jira] [Moved] (HADOOP-12888) HDFS client requires compromising permission when running under JVM security manager

2016-03-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran moved HDFS-9875 to HADOOP-12888:
---

Affects Version/s: (was: 2.7.2)
   2.7.2
  Component/s: (was: security)
   (was: hdfs-client)
   security
  Key: HADOOP-12888  (was: HDFS-9875)
  Project: Hadoop Common  (was: Hadoop HDFS)

> HDFS client requires compromising permission when running under JVM security 
> manager
> 
>
> Key: HADOOP-12888
> URL: https://issues.apache.org/jira/browse/HADOOP-12888
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: Linux
>Reporter: Costin Leau
>
> HDFS _client_ requires dangerous permission, in particular _execute_ on _all 
> files_ despite only trying to connect to an HDFS cluster.
> A full list (for both Hadoop 1 and 2) is available here along with the place 
> in code where they occur.
> While it is understandable for some permissions to be used, requiring 
> {{FilePermission <> execute}} to simply initialize a class field 
> [Shell|https://github.com/apache/hadoop/blob/0fa54d45b1cf8a29f089f64d24f35bd221b4803f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java#L728]
>  which in the end is not used (since it's just a client) simply *compromises* 
> the entire security system.
> To make matters worse, the code is executed to initialize a field, so if the 
> permission is not granted, the VM fails with an {{InitializationError}}, 
> which is unrecoverable.
> Ironically enough, on Windows this problem does not appear since the code 
> simply bypasses it and initializes the field with a fall back value 
> ({{false}}).
> A quick fix would be to simply take into account that the JVM 
> {{SecurityManager}} might be active and the permission not granted or that 
> the external process fails and use a fall back value.
> A proper, long-term fix would be to minimize the use of permissions in the 
> HDFS client, since they are simply not required. A client should be as light 
> as possible and not have the server's requirements leaked onto it.





[jira] [Commented] (HADOOP-11687) Ignore x-emc-* headers when copying an Amazon S3 object

2016-03-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179928#comment-15179928
 ] 

Steve Loughran commented on HADOOP-11687:
-

replace 
{code}
if (source.getUserMetadata().isEmpty() != Boolean.TRUE) {
  java.util.Map<String, String> smd = source.getUserMetadata();
  for (String key : smd.keySet()) {
    ret.addUserMetadata(key, smd.get(key));
  }
}
{code}

with
{code}
Map<String, String> smd = source.getUserMetadata();
for (String key : smd.keySet()) {
  ret.addUserMetadata(key, smd.get(key));
}
{code}

and rely on the loop body not executing when the key set is empty

> Ignore x-emc-* headers when copying an Amazon S3 object
> ---
>
> Key: HADOOP-11687
> URL: https://issues.apache.org/jira/browse/HADOOP-11687
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Denis Jannot
> Attachments: HADOOP-11687.001.patch
>
>
> The EMC ViPR/ECS object storage platform uses proprietary headers starting 
> with x-emc-* (like Amazon does with x-amz-*).
> Headers starting with x-emc-* should be included in the signature 
> computation, but this is not done by the Amazon S3 Java SDK (it is done by 
> the EMC S3 SDK).
> When s3a copies an object, it copies all the headers; when the object 
> includes x-emc-* headers, this generates a signature mismatch.
> Removing the x-emc-* headers from the copy would allow s3a to be compatible 
> with the EMC ViPR/ECS object storage platform.
> Removing the x-* headers which aren't x-amz-* headers from the copy would 
> allow s3a to be compatible with any object storage platform that uses 
> proprietary headers.
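The filtering proposed in the description can be sketched like this. The helper name is hypothetical and the real fix would live in s3a's copy path; the sketch just shows the rule "keep a key unless it is a proprietary x-* header that is not x-amz-*":

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HeaderFilter {

  /**
   * Keep a user-metadata entry unless it is a proprietary x-* key that
   * is not an x-amz-* key (e.g. the x-emc-* headers).
   */
  public static Map<String, String> filter(Map<String, String> metadata) {
    Map<String, String> kept = new LinkedHashMap<>();
    for (Map.Entry<String, String> e : metadata.entrySet()) {
      String key = e.getKey().toLowerCase();
      if (!key.startsWith("x-") || key.startsWith("x-amz-")) {
        kept.put(e.getKey(), e.getValue());
      }
    }
    return kept;
  }

  public static void main(String[] args) {
    Map<String, String> md = new LinkedHashMap<>();
    md.put("x-amz-meta-owner", "hadoop");
    md.put("x-emc-vpool", "pool1");
    System.out.println(filter(md).keySet()); // [x-amz-meta-owner]
  }
}
```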





[jira] [Commented] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179925#comment-15179925
 ] 

Hadoop QA commented on HADOOP-12860:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 9m 49s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791464/HADOOP-12860.004.patch
 |
| JIRA Issue | HADOOP-12860 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 7650e46f2095 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cbd3132 |
| modules | C:  hadoop-common-project/hadoop-common   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site  U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8788/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Expand section "Data Encryption on HTTP" in SecureMode documentation
> 
>
> Key: HADOOP-12860
> URL: https://issues.apache.org/jira/browse/HADOOP-12860
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: documentation
> Attachments: HADOOP-12860.001.patch, HADOOP-12860.002.patch, 
> HADOOP-12860.003.patch, HADOOP-12860.004.patch
>
>
> Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
> expanded to cover the configurations needed to enable SSL for the web UIs 
> of the HDFS/YARN/MapReduce daemons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-04 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12860:
-
Attachment: HADOOP-12860.004.patch

Rev04: Mention that enabling KMS over SSL and HttpFS over SSL requires 
different configurations.

> Expand section "Data Encryption on HTTP" in SecureMode documentation
> 
>
> Key: HADOOP-12860
> URL: https://issues.apache.org/jira/browse/HADOOP-12860
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: documentation
> Attachments: HADOOP-12860.001.patch, HADOOP-12860.002.patch, 
> HADOOP-12860.003.patch, HADOOP-12860.004.patch
>
>
> Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
> expanded to cover the configurations needed to enable SSL for the web UIs 
> of the HDFS/YARN/MapReduce daemons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12869) CryptoInputStream#read() may return incorrect result

2016-03-04 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-12869:
-
Target Version/s: 3.0.0, 2.7.3, 2.6.5  (was: 3.0.0)

> CryptoInputStream#read() may return incorrect result
> 
>
> Key: HADOOP-12869
> URL: https://issues.apache.org/jira/browse/HADOOP-12869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0, 2.7.0, 3.0.0
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
>Priority: Critical
> Attachments: HADOOP-12869.001.patch, HADOOP-12869.002.patch
>
>
> Here is the comment of {{FilterInputStream#read()}}:
> {noformat}
> /**
>  * Reads the next byte of data from this input stream. The value
>  * byte is returned as an int in the range
>  * 0 to 255. If no byte is available
>  * because the end of the stream has been reached, the value
>  * -1 is returned. This method blocks until input data
>  * is available, the end of the stream is detected, or an exception
>  * is thrown.
>  * 
>  * This method
>  * simply performs in.read() and returns the result.
>  *
>  * @return the next byte of data, or -1 if the end of the
>  * stream is reached.
>  * @exception  IOException  if an I/O error occurs.
>  * @seejava.io.FilterInputStream#in
>  */
> public int read() throws IOException {
> return in.read();
> }
> {noformat}
> Here is the implementation of {{CryptoInputStream#read()}} in Hadoop Common:
> {noformat}
> @Override
> public int read() throws IOException {
>   return (read(oneByteBuf, 0, 1) == -1) ? -1 : (oneByteBuf[0] & 0xff);
> 
> }
> {noformat}
> The return value of {{read(oneByteBuf, 0, 1)}} may be 1, -1, or 0:
> For {{1}}: we should return the content of {{oneByteBuf}}.
> For {{-1}}: we should return {{-1}} to indicate the end of the stream.
> For {{0}}: it means we did not get decrypted data back and it is not the 
> end of the stream, so we should continue decrypting. But the current 
> implementation returns {{0}} from {{read()}}, which incorrectly signals 
> that the decrypted byte is {{0}}.
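
A minimal, hypothetical sketch of the fix described above (not the attached patch; the class name is illustrative): loop while the byte-array read returns 0, so the single-byte read() only ever yields -1 at end of stream or a byte value in the range 0 to 255:

```java
import java.io.IOException;
import java.io.InputStream;

// Sketch: wraps a stream whose read(byte[], int, int) may legally
// return 0 (no decrypted data available yet) and retries until it
// produces a byte or reaches end of stream.
class LoopingOneByteReader {
    private final byte[] oneByteBuf = new byte[1];
    private final InputStream in;

    LoopingOneByteReader(InputStream in) {
        this.in = in;
    }

    int read() throws IOException {
        int n;
        do {
            n = in.read(oneByteBuf, 0, 1);  // 1, 0, or -1
        } while (n == 0);                   // retry: 0 means "no data yet"
        return (n == -1) ? -1 : (oneByteBuf[0] & 0xff);
    }
}
```

With this loop, a caller of read() can no longer mistake "no decrypted data yet" for a decrypted byte of value 0.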



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12869) CryptoInputStream#read() may return incorrect result

2016-03-04 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-12869:
-
Affects Version/s: (was: 2.7.2)
   2.6.0
   2.7.0

> CryptoInputStream#read() may return incorrect result
> 
>
> Key: HADOOP-12869
> URL: https://issues.apache.org/jira/browse/HADOOP-12869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0, 2.7.0, 3.0.0
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
>Priority: Critical
> Attachments: HADOOP-12869.001.patch, HADOOP-12869.002.patch
>
>
> Here is the comment of {{FilterInputStream#read()}}:
> {noformat}
> /**
>  * Reads the next byte of data from this input stream. The value
>  * byte is returned as an int in the range
>  * 0 to 255. If no byte is available
>  * because the end of the stream has been reached, the value
>  * -1 is returned. This method blocks until input data
>  * is available, the end of the stream is detected, or an exception
>  * is thrown.
>  * 
>  * This method
>  * simply performs in.read() and returns the result.
>  *
>  * @return the next byte of data, or -1 if the end of the
>  * stream is reached.
>  * @exception  IOException  if an I/O error occurs.
>  * @seejava.io.FilterInputStream#in
>  */
> public int read() throws IOException {
> return in.read();
> }
> {noformat}
> Here is the implementation of {{CryptoInputStream#read()}} in Hadoop Common:
> {noformat}
> @Override
> public int read() throws IOException {
>   return (read(oneByteBuf, 0, 1) == -1) ? -1 : (oneByteBuf[0] & 0xff);
> 
> }
> {noformat}
> The return value of {{read(oneByteBuf, 0, 1)}} may be 1, -1, or 0:
> For {{1}}: we should return the content of {{oneByteBuf}}.
> For {{-1}}: we should return {{-1}} to indicate the end of the stream.
> For {{0}}: it means we did not get decrypted data back and it is not the 
> end of the stream, so we should continue decrypting. But the current 
> implementation returns {{0}} from {{read()}}, which incorrectly signals 
> that the decrypted byte is {{0}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12869) CryptoInputStream#read() may return incorrect result

2016-03-04 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179849#comment-15179849
 ] 

Sean Busbey commented on HADOOP-12869:
--

Please add a test that shows the issue.

The docs for the [read method we call 
claim|https://docs.oracle.com/javase/7/docs/api/java/io/FilterInputStream.html#read(byte[],%20int,%20int)]:

{code}
public int read(byte[] b,
   int off,
   int len)
 throws IOException

Reads up to len bytes of data from this input stream into an array of bytes. If 
len is not zero, the method blocks until some input is available; otherwise, no 
bytes are read and 0 is returned.
{code}

We pass in a len of 1. If FIS is blocking until "some" input is available, 
shouldn't that mean it has to have >= 1 byte available?

> CryptoInputStream#read() may return incorrect result
> 
>
> Key: HADOOP-12869
> URL: https://issues.apache.org/jira/browse/HADOOP-12869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 3.0.0
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
>Priority: Critical
> Attachments: HADOOP-12869.001.patch, HADOOP-12869.002.patch
>
>
> Here is the comment of {{FilterInputStream#read()}}:
> {noformat}
> /**
>  * Reads the next byte of data from this input stream. The value
>  * byte is returned as an int in the range
>  * 0 to 255. If no byte is available
>  * because the end of the stream has been reached, the value
>  * -1 is returned. This method blocks until input data
>  * is available, the end of the stream is detected, or an exception
>  * is thrown.
>  * 
>  * This method
>  * simply performs in.read() and returns the result.
>  *
>  * @return the next byte of data, or -1 if the end of the
>  * stream is reached.
>  * @exception  IOException  if an I/O error occurs.
>  * @seejava.io.FilterInputStream#in
>  */
> public int read() throws IOException {
> return in.read();
> }
> {noformat}
> Here is the implementation of {{CryptoInputStream#read()}} in Hadoop Common:
> {noformat}
> @Override
> public int read() throws IOException {
>   return (read(oneByteBuf, 0, 1) == -1) ? -1 : (oneByteBuf[0] & 0xff);
> 
> }
> {noformat}
> The return value of {{read(oneByteBuf, 0, 1)}} may be 1, -1, or 0:
> For {{1}}: we should return the content of {{oneByteBuf}}.
> For {{-1}}: we should return {{-1}} to indicate the end of the stream.
> For {{0}}: it means we did not get decrypted data back and it is not the 
> end of the stream, so we should continue decrypting. But the current 
> implementation returns {{0}} from {{read()}}, which incorrectly signals 
> that the decrypted byte is {{0}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12879) Web authentication documentation is contradicting

2016-03-04 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179755#comment-15179755
 ] 

Wei-Chiu Chuang commented on HADOOP-12879:
--

I'm not totally sure how it works. Does one set of parameters take 
precedence over the others? It would be great if someone who understands it 
could write better docs. As far as I know, a CDH cluster uses 
{{hadoop.http.authentication.kerberos.keytab}}.

> Web authentication documentation is contradicting
> -
>
> Key: HADOOP-12879
> URL: https://issues.apache.org/jira/browse/HADOOP-12879
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>  Labels: authentication, documentation, webui
>
> Hi, I noticed recently folks are really focusing on documentation. (Kudos to 
> +[~ajisakaa], +[~brahmareddy], +[~iwasakims], +[~daisuke.kobayashi], 
> +[~jzhuge] and more). 
> I've been following Hadoop documentation to set up web UI authentication, but 
> it seems to be contradicting between docs, for example, for web UI Kerberos 
> keytab settings:
> * in _Hadoop in Secure Mode_ (SecureMode.md), HDFS daemons have distinct web 
> UI authentication configs, for example, for name node that's 
> {{dfs.web.authentication.kerberos.keytab}},
> * but in _Authentication for Hadoop HTTP web-consoles_ 
> (HttpAuthentication.md), it mentions a Hadoop-wide config: 
> {{hadoop.http.authentication.kerberos.keytab}}; 
> * finally, in _Hadoop Auth, Java HTTP SPNEGO - Server Side 
> Configuration_ (Configuration.md), it says {{\[PREFIX.\]kerberos.principal}} 
> should be used.
> They all seem to mean the same thing, but with different configuration 
> names. So it's a bit confusing, or they even contradict each other. Can we 
> improve it?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179735#comment-15179735
 ] 

Hadoop QA commented on HADOOP-12860:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 37s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 18s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791446/HADOOP-12860.003.patch
 |
| JIRA Issue | HADOOP-12860 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux aefeb893b8cd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cbd3132 |
| modules | C:  hadoop-common-project/hadoop-common   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site  U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8787/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Expand section "Data Encryption on HTTP" in SecureMode documentation
> 
>
> Key: HADOOP-12860
> URL: https://issues.apache.org/jira/browse/HADOOP-12860
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: documentation
> Attachments: HADOOP-12860.001.patch, HADOOP-12860.002.patch, 
> HADOOP-12860.003.patch
>
>
> Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
> expanded to cover the configurations needed to enable SSL for the web UIs 
> of the HDFS/YARN/MapReduce daemons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11687) Ignore x-emc-* headers when copying an Amazon S3 object

2016-03-04 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179718#comment-15179718
 ] 

Thomas Demoor commented on HADOOP-11687:


This should also fix another bug. The current clone implementation also copies 
response headers (e.g. ETag, Accept-Ranges) from the source object to the new 
copy request. AWS can handle these superfluous headers, but other 
implementations might not handle them well.

Looks OK at first glance; slight concerns:
* have we got everything? Will review in more detail later.
* we'll need to keep this up to date with every AWS version bump (check if 
they have added/changed something).

> Ignore x-emc-* headers when copying an Amazon S3 object
> ---
>
> Key: HADOOP-11687
> URL: https://issues.apache.org/jira/browse/HADOOP-11687
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Denis Jannot
> Attachments: HADOOP-11687.001.patch
>
>
> The EMC ViPR/ECS object storage platform uses proprietary headers starting 
> with x-emc-* (like Amazon does with x-amz-*).
> Headers starting with x-emc-* should be included in the signature 
> computation, but the Amazon S3 Java SDK does not do this (the EMC S3 SDK 
> does).
> When s3a copies an object it copies all the headers, but when the object 
> includes x-emc-* headers, this generates a signature mismatch.
> Removing the x-emc-* headers from the copy would make s3a compatible with 
> the EMC ViPR/ECS object storage platform.
> Removing the x-* headers which aren't x-amz-* from the copy would make s3a 
> compatible with any object storage platform that uses proprietary headers.
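
The filtering proposed above can be sketched as follows. This is an illustrative stand-alone helper, not the actual s3a or AWS SDK API; the class name, method name, and the plain Map representation of object metadata are assumptions:

```java
import java.util.LinkedHashMap;
import java.util.Locale;
import java.util.Map;

// Sketch: before issuing a copy request, keep standard headers and
// x-amz-* headers, but drop proprietary x-* headers (such as x-emc-*)
// that the AWS SDK would not include in signature computation.
class CopyHeaderFilter {
    static Map<String, String> filterCopyHeaders(Map<String, String> src) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : src.entrySet()) {
            String key = e.getKey().toLowerCase(Locale.ROOT);
            if (key.startsWith("x-") && !key.startsWith("x-amz-")) {
                continue;  // proprietary header: would cause a mismatch
            }
            out.put(e.getKey(), e.getValue());
        }
        return out;
    }
}
```

The same predicate implements both variants discussed in the description: dropping x-emc-* specifically falls out of dropping every x-* header that is not x-amz-*.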



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-04 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179716#comment-15179716
 ] 

Wei-Chiu Chuang commented on HADOOP-12860:
--

BTW, the correct parameters to use for NameNodes in HA mode are 
_{{.clusterid.nnid}}_, while HDFS federation uses _{{.nnid}}_. Instead of 
causing extra confusion, I simply attach the links to the corresponding 
documentation.

> Expand section "Data Encryption on HTTP" in SecureMode documentation
> 
>
> Key: HADOOP-12860
> URL: https://issues.apache.org/jira/browse/HADOOP-12860
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: documentation
> Attachments: HADOOP-12860.001.patch, HADOOP-12860.002.patch, 
> HADOOP-12860.003.patch
>
>
> Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
> expanded to cover the configurations needed to enable SSL for the web UIs 
> of the HDFS/YARN/MapReduce daemons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-04 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12860:
-
Attachment: HADOOP-12860.003.patch

Thanks [~ajisakaa] for the feedback!
I checked {{yarn-default.xml}} as well as the implementation; TimelineServer 
does not respect {{HTTP_AND_HTTPS}}, so I updated the docs to correct this.

I also updated the SecureMode doc to point to the NameNode HA and 
ResourceManager HA docs. Interestingly, if HDFS is in federation mode, there 
is a different set of parameters (which actually seems incompatible with HA 
mode), so I included a link to that doc as well.

[~ajisakaa] Please review again, thank you!

> Expand section "Data Encryption on HTTP" in SecureMode documentation
> 
>
> Key: HADOOP-12860
> URL: https://issues.apache.org/jira/browse/HADOOP-12860
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: documentation
> Attachments: HADOOP-12860.001.patch, HADOOP-12860.002.patch, 
> HADOOP-12860.003.patch
>
>
> Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
> expanded to cover the configurations needed to enable SSL for the web UIs 
> of the HDFS/YARN/MapReduce daemons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11792) Remove all of the CHANGES.txt files

2016-03-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179610#comment-15179610
 ] 

Marco Zühlke commented on HADOOP-11792:
---

With this change we are relying on the correctness of the JIRA fixVersion 
field. I think this is the right move.

The [wiki|https://wiki.apache.org/hadoop/HowToCommit] has the following guide:
bq. Always set the "Fix Version" at this point, but please only set a single 
fix version, the earliest release in which the change will appear. Special 
case- when committing to a non-mainline branch (such as branch-0.22 or 
branch-0.23 ATM), please set fix-version to either 2.x.x or 3.x.x appropriately 
too. 

In my eyes this would (currently) mean, for most issues:
* exactly one of "3.0.0", "2.9.0" or "2.8.0"
* in addition, for backports to branch-2.7, add "2.7.3"
* in addition, for backports to branch-2.6, add "2.6.5"

Exceptions are issues that are only relevant for a specific branch.

But when browsing over all fixed JIRAs you see a wide variety:

https://issues.apache.org/jira/issues/?jql=project%20in%20%28YARN%2C%20HADOOP%2C%20HDFS%2C%20MAPREDUCE%29%20AND%20%20%20fixVersion%20in%283.0.0%2C2.9.0%2C2.8.0%2C2.7.0%2C2.7.1%2C2.7.2%2C2.7.3%2C2.6.0%2C2.6.1%2C2.6.2%2C2.6.3%2C2.6.4%2C2.6.5%29%20%20ORDER%20BY%20updated%20DESC

To just pick some:
* YARN-4344: 2.7.2, 2.6.3
* HDFS-9855:2.8.0, 2.9.0
* HADOOP-12841:  3.0.0, 2.9.0
* YARN-4722: 2.8.0, 2.7.3, 2.9.0, 2.6.5
* ...

> Remove all of the CHANGES.txt files
> ---
>
> Key: HADOOP-11792
> URL: https://issues.apache.org/jira/browse/HADOOP-11792
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Andrew Wang
> Fix For: 2.8.0
>
>
> With the commit of HADOOP-11731, the CHANGES.txt files are now EOLed.  We 
> should remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12101) Add automatic search of default Configuration variables to TestConfigurationFieldsBase

2016-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179597#comment-15179597
 ] 

Hadoop QA commented on HADOOP-12101:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 44s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 12s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
9s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 47s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 44s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 35s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791423/HADOOP-12101.009.patch
 |
| JIRA Issue | HADOOP-12101 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  compile  

[jira] [Commented] (HADOOP-12887) RetryInvocationHandler failedRetry exception logging's level is not correct

2016-03-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179558#comment-15179558
 ] 

Hadoop QA commented on HADOOP-12887:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 52s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 12s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 41s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791417/HADOOP-12887.001.patch
 |
| JIRA Issue | HADOOP-12887 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux faad8b1a821c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality |