[jira] [Commented] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-04-13 Thread Jiajia Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240647#comment-15240647
 ] 

Jiajia Li commented on HADOOP-12911:


Thanks for your review.
1. Do you mean that in the patch the TestKerberosAuthenticator class no longer extends 
KerberosSecurityTestcase? In the new patch I will keep it as in the original.
2. All the ApacheDS dependencies will be removed, and we will use Kerby's 
Keytab class instead, so hadoop-auth/pom.xml has to be changed.
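
For context, a minimal sketch of how a test drives MiniKDC today; the point of this change is to swap the ApacheDS-based backend for Kerby's SimpleKDC and Keytab support while keeping this caller-facing API unchanged. The principal name and work directory below are illustrative, not taken from the patch.

{code}
import java.io.File;
import java.util.Properties;

import org.apache.hadoop.minikdc.MiniKdc;

public class MiniKdcUsageSketch {
  public static void main(String[] args) throws Exception {
    // Default MiniKDC configuration (realm, transport, debug settings).
    Properties conf = MiniKdc.createConf();
    File workDir = new File("target/minikdc-work");   // hypothetical scratch dir
    workDir.mkdirs();

    MiniKdc kdc = new MiniKdc(conf, workDir);
    kdc.start();
    try {
      // Create a principal and export its key to a keytab. After this JIRA the
      // keytab is expected to be written with Kerby's Keytab class instead of
      // the ApacheDS implementation, with no change for callers like this one.
      File keytab = new File(workDir, "test.keytab");
      kdc.createPrincipal(keytab, "HTTP/localhost");
      System.out.println("KDC realm: " + kdc.getRealm());
    } finally {
      kdc.stop();
    }
  }
}
{code}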

> Upgrade Hadoop MiniKDC with Kerby
> -
>
> Key: HADOOP-12911
> URL: https://issues.apache.org/jira/browse/HADOOP-12911
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-12911-v1.patch, HADOOP-12911-v2.patch, 
> HADOOP-12911-v3.patch, HADOOP-12911-v4.patch, HADOOP-12911-v5.patch, 
> HADOOP-12911-v6.patch
>
>
> As discussed in the mailing list, we’d like to introduce Apache Kerby into 
> Hadoop. Initially it’s good to start with upgrading Hadoop MiniKDC with Kerby 
> offerings. Apache Kerby (https://github.com/apache/directory-kerby), as an 
> Apache Directory sub project, is a Java Kerberos binding. It provides a 
> SimpleKDC server that borrowed ideas from MiniKDC and implemented all the 
> facilities existing in MiniKDC. Currently MiniKDC depends on the old Kerberos 
> implementation in Directory Server project, but the implementation is stopped 
> being maintained. Directory community has a plan to replace the 
> implementation using Kerby. MiniKDC can use Kerby SimpleKDC directly to avoid 
> depending on the full of Directory project. Kerby also provides nice identity 
> backends such as the lightweight memory based one and the very simple json 
> one for easy development and test environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12943) Add -w -r options in dfs -test command

2016-04-13 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-12943:
-
Status: Patch Available  (was: In Progress)

> Add -w -r options in dfs -test command
> --
>
> Key: HADOOP-12943
> URL: https://issues.apache.org/jira/browse/HADOOP-12943
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, scripts, tools
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 2.8.0
>
> Attachments: HADOOP-12943.001.patch, HADOOP-12943.002.patch
>
>
> Currently the dfs -test command only supports 
>   -d, -e, -f, -s, -z
> options. It would be helpful if we add 
>   -w, -r 
> to verify permission of r/w before actual read or write. This will help 
> script programming.
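
As an illustration of the semantics being proposed (this is a sketch, not the patch), the new options would boil down to an access check before the actual read or write; FileSystem.access already exposes such a check in Java, and the path below is hypothetical.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.security.AccessControlException;

public class TestWriteAccessSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path path = new Path("/tmp/data.txt");   // hypothetical path

    try {
      // Roughly what "hadoop fs -test -w /tmp/data.txt" would decide:
      // access() returns normally if the caller may write, throws otherwise.
      fs.access(path, FsAction.WRITE);
      System.out.println("writable: exit code would be 0");
    } catch (AccessControlException e) {
      System.out.println("not writable: exit code would be 1");
    }
  }
}
{code}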



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12943) Add -w -r options in dfs -test command

2016-04-13 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-12943:
-
Attachment: HADOOP-12943.002.patch

Uploaded v2 patch: resolved checkstyle issues and updated the documentation.

> Add -w -r options in dfs -test command
> --
>
> Key: HADOOP-12943
> URL: https://issues.apache.org/jira/browse/HADOOP-12943
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, scripts, tools
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 2.8.0
>
> Attachments: HADOOP-12943.001.patch, HADOOP-12943.002.patch
>
>
> Currently the dfs -test command only supports 
>   -d, -e, -f, -s, -z
> options. It would be helpful if we add 
>   -w, -r 
> to verify permission of r/w before actual read or write. This will help 
> script programming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12943) Add -w -r options in dfs -test command

2016-04-13 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-12943:
-
Status: In Progress  (was: Patch Available)

> Add -w -r options in dfs -test command
> --
>
> Key: HADOOP-12943
> URL: https://issues.apache.org/jira/browse/HADOOP-12943
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, scripts, tools
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 2.8.0
>
> Attachments: HADOOP-12943.001.patch
>
>
> Currently the dfs -test command only supports 
>   -d, -e, -f, -s, -z
> options. It would be helpful if we add 
>   -w, -r 
> to verify permission of r/w before actual read or write. This will help 
> script programming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12989) Some tests in org.apache.hadoop.fs.shell.find occasionally time out

2016-04-13 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240586#comment-15240586
 ] 

Akira AJISAKA commented on HADOOP-12989:


Thank you for taking this issue [~bwtakacy]. Would you add global timeout to 
other tests in the same directory as well?
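
For readers unfamiliar with the term, a class-level (global) timeout in JUnit 4 is a Timeout rule applied once per test class; a minimal sketch follows, with an illustrative class name and timeout value that are not taken from the patch.

{code}
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

public class TestFindExpressionSketch {
  // Applies to every test method in the class, unlike @Test(timeout = ...),
  // which has to be repeated on each method.
  @Rule
  public Timeout globalTimeout = new Timeout(10000);   // 10 seconds

  @Test
  public void testSomething() throws Exception {
    // test body elided
  }
}
{code}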

> Some tests in org.apache.hadoop.fs.shell.find occasionally time out
> ---
>
> Key: HADOOP-12989
> URL: https://issues.apache.org/jira/browse/HADOOP-12989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Akira AJISAKA
>Assignee: Takashi Ohnishi
>  Labels: newbie
> Attachments: HADOOP-12989.1.patch
>
>
> An example:
> {noformat}
> java.lang.Exception: test timed out after 1000 milliseconds
>   at java.lang.ClassLoader$NativeLibrary.load(Native Method)
>   at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1965)
>   at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1890)
>   at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1872)
>   at java.lang.Runtime.loadLibrary0(Runtime.java:849)
>   at java.lang.System.loadLibrary(System.java:1088)
>   at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:67)
>   at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:47)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.NetworkInterface.(NetworkInterface.java:56)
>   at org.apache.htrace.core.TracerId.getBestIpString(TracerId.java:179)
>   at org.apache.htrace.core.TracerId.processShellVar(TracerId.java:145)
>   at org.apache.htrace.core.TracerId.(TracerId.java:116)
>   at org.apache.htrace.core.Tracer$Builder.build(Tracer.java:159)
>   at org.apache.hadoop.fs.FsTracer.get(FsTracer.java:42)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2794)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2837)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2819)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:381)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:180)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:365)
>   at org.apache.hadoop.fs.shell.PathData.(PathData.java:81)
>   at org.apache.hadoop.fs.shell.find.TestName.applyGlob(TestName.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12989) Some tests in org.apache.hadoop.fs.shell.find occasionally time out

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240579#comment-15240579
 ] 

Hadoop QA commented on HADOOP-12989:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 51s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 12s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 8s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798640/HADOOP-12989.1.patch |
| JIRA Issue | HADOOP-12989 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 19b128bb8e26 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 27b131e |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  

[jira] [Commented] (HADOOP-12969) Mark IPC.Client and IPC.Server as @Public, @Evolving

2016-04-13 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240580#comment-15240580
 ] 

Xiaobing Zhou commented on HADOOP-12969:


There is no need to add unit tests since this is only an annotation change. Could 
anyone help commit it? Thanks.
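
For reference, a minimal sketch of what such a marking looks like with Hadoop's classification annotations; the class below is a placeholder, not the actual ipc.Client/Server diff.

{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Placeholder class: the annotations document the compatibility contract
// (public audience, evolving API) without changing any behavior.
@InterfaceAudience.Public
@InterfaceStability.Evolving
public class SomePublicEvolvingClass {
}
{code}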

> Mark IPC.Client and IPC.Server as @Public, @Evolving
> 
>
> Key: HADOOP-12969
> URL: https://issues.apache.org/jira/browse/HADOOP-12969
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HADOOP-12969.000..patch, HADOOP-12969.001.patch, 
> HADOOP-12969.002.patch, HADOOP-12969.003.patch
>
>
> Per the discussion in 
> [HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
>  this is to propose marking IPC.Client and IPC.Server as @Public, @Evolving 
> as a result of HADOOP-12909



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12924) Add default coder key for creating raw coders

2016-04-13 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240534#comment-15240534
 ] 

Rui Li commented on HADOOP-12924:
-

All the failures either cannot be reproduced or also fail on trunk, so I think 
they are not related to the patch.

> Add default coder key for creating raw coders
> -
>
> Key: HADOOP-12924
> URL: https://issues.apache.org/jira/browse/HADOOP-12924
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Minor
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-12924.1.patch, HADOOP-12924.2.patch, 
> HADOOP-12924.3.patch, HADOOP-12924.4.patch
>
>
> As suggested 
> [here|https://issues.apache.org/jira/browse/HADOOP-12826?focusedCommentId=15194402=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15194402].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12989) Some tests in org.apache.hadoop.fs.shell.find occasionally time out

2016-04-13 Thread Takashi Ohnishi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240499#comment-15240499
 ] 

Takashi Ohnishi commented on HADOOP-12989:
--

Hi!

I want to try this.
I will submit a patch as suggested.

> Some tests in org.apache.hadoop.fs.shell.find occasionally time out
> ---
>
> Key: HADOOP-12989
> URL: https://issues.apache.org/jira/browse/HADOOP-12989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-12989.1.patch
>
>
> An example:
> {noformat}
> java.lang.Exception: test timed out after 1000 milliseconds
>   at java.lang.ClassLoader$NativeLibrary.load(Native Method)
>   at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1965)
>   at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1890)
>   at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1872)
>   at java.lang.Runtime.loadLibrary0(Runtime.java:849)
>   at java.lang.System.loadLibrary(System.java:1088)
>   at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:67)
>   at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:47)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.NetworkInterface.(NetworkInterface.java:56)
>   at org.apache.htrace.core.TracerId.getBestIpString(TracerId.java:179)
>   at org.apache.htrace.core.TracerId.processShellVar(TracerId.java:145)
>   at org.apache.htrace.core.TracerId.(TracerId.java:116)
>   at org.apache.htrace.core.Tracer$Builder.build(Tracer.java:159)
>   at org.apache.hadoop.fs.FsTracer.get(FsTracer.java:42)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2794)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2837)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2819)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:381)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:180)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:365)
>   at org.apache.hadoop.fs.shell.PathData.(PathData.java:81)
>   at org.apache.hadoop.fs.shell.find.TestName.applyGlob(TestName.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12989) Some tests in org.apache.hadoop.fs.shell.find occasionally time out

2016-04-13 Thread Takashi Ohnishi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takashi Ohnishi reassigned HADOOP-12989:


Assignee: Takashi Ohnishi

> Some tests in org.apache.hadoop.fs.shell.find occasionally time out
> ---
>
> Key: HADOOP-12989
> URL: https://issues.apache.org/jira/browse/HADOOP-12989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Akira AJISAKA
>Assignee: Takashi Ohnishi
>  Labels: newbie
> Attachments: HADOOP-12989.1.patch
>
>
> An example:
> {noformat}
> java.lang.Exception: test timed out after 1000 milliseconds
>   at java.lang.ClassLoader$NativeLibrary.load(Native Method)
>   at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1965)
>   at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1890)
>   at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1872)
>   at java.lang.Runtime.loadLibrary0(Runtime.java:849)
>   at java.lang.System.loadLibrary(System.java:1088)
>   at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:67)
>   at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:47)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.NetworkInterface.(NetworkInterface.java:56)
>   at org.apache.htrace.core.TracerId.getBestIpString(TracerId.java:179)
>   at org.apache.htrace.core.TracerId.processShellVar(TracerId.java:145)
>   at org.apache.htrace.core.TracerId.(TracerId.java:116)
>   at org.apache.htrace.core.Tracer$Builder.build(Tracer.java:159)
>   at org.apache.hadoop.fs.FsTracer.get(FsTracer.java:42)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2794)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2837)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2819)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:381)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:180)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:365)
>   at org.apache.hadoop.fs.shell.PathData.(PathData.java:81)
>   at org.apache.hadoop.fs.shell.find.TestName.applyGlob(TestName.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12989) Some tests in org.apache.hadoop.fs.shell.find occasionally time out

2016-04-13 Thread Takashi Ohnishi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takashi Ohnishi updated HADOOP-12989:
-
Status: Patch Available  (was: Open)

> Some tests in org.apache.hadoop.fs.shell.find occasionally time out
> ---
>
> Key: HADOOP-12989
> URL: https://issues.apache.org/jira/browse/HADOOP-12989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Akira AJISAKA
>Assignee: Takashi Ohnishi
>  Labels: newbie
> Attachments: HADOOP-12989.1.patch
>
>
> An example:
> {noformat}
> java.lang.Exception: test timed out after 1000 milliseconds
>   at java.lang.ClassLoader$NativeLibrary.load(Native Method)
>   at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1965)
>   at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1890)
>   at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1872)
>   at java.lang.Runtime.loadLibrary0(Runtime.java:849)
>   at java.lang.System.loadLibrary(System.java:1088)
>   at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:67)
>   at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:47)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.NetworkInterface.(NetworkInterface.java:56)
>   at org.apache.htrace.core.TracerId.getBestIpString(TracerId.java:179)
>   at org.apache.htrace.core.TracerId.processShellVar(TracerId.java:145)
>   at org.apache.htrace.core.TracerId.(TracerId.java:116)
>   at org.apache.htrace.core.Tracer$Builder.build(Tracer.java:159)
>   at org.apache.hadoop.fs.FsTracer.get(FsTracer.java:42)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2794)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2837)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2819)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:381)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:180)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:365)
>   at org.apache.hadoop.fs.shell.PathData.(PathData.java:81)
>   at org.apache.hadoop.fs.shell.find.TestName.applyGlob(TestName.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12989) Some tests in org.apache.hadoop.fs.shell.find occasionally time out

2016-04-13 Thread Takashi Ohnishi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takashi Ohnishi updated HADOOP-12989:
-
Attachment: HADOOP-12989.1.patch

> Some tests in org.apache.hadoop.fs.shell.find occasionally time out
> ---
>
> Key: HADOOP-12989
> URL: https://issues.apache.org/jira/browse/HADOOP-12989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Akira AJISAKA
>Assignee: Takashi Ohnishi
>  Labels: newbie
> Attachments: HADOOP-12989.1.patch
>
>
> An example:
> {noformat}
> java.lang.Exception: test timed out after 1000 milliseconds
>   at java.lang.ClassLoader$NativeLibrary.load(Native Method)
>   at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1965)
>   at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1890)
>   at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1872)
>   at java.lang.Runtime.loadLibrary0(Runtime.java:849)
>   at java.lang.System.loadLibrary(System.java:1088)
>   at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:67)
>   at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:47)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.NetworkInterface.(NetworkInterface.java:56)
>   at org.apache.htrace.core.TracerId.getBestIpString(TracerId.java:179)
>   at org.apache.htrace.core.TracerId.processShellVar(TracerId.java:145)
>   at org.apache.htrace.core.TracerId.(TracerId.java:116)
>   at org.apache.htrace.core.Tracer$Builder.build(Tracer.java:159)
>   at org.apache.hadoop.fs.FsTracer.get(FsTracer.java:42)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2794)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2837)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2819)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:381)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:180)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:365)
>   at org.apache.hadoop.fs.shell.PathData.(PathData.java:81)
>   at org.apache.hadoop.fs.shell.find.TestName.applyGlob(TestName.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-13024) Distcp with -delete feature on raw data not implemented

2016-04-13 Thread Mavin Martin (JIRA)
Mavin Martin created HADOOP-13024:
-

 Summary: Distcp with -delete feature on raw data not implemented
 Key: HADOOP-13024
 URL: https://issues.apache.org/jira/browse/HADOOP-13024
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Mavin Martin


When running distcp on raw data with the -delete feature, the following error appears.
{code}
[root@xxx bin]# hadoop distcp -delete -update /.reserved/raw/tmp/a 
/.reserved/raw/tmp/b
16/04/14 02:54:01 ERROR tools.DistCp: Exception encountered
java.io.IOException: DistCp failure: Job job_xxx has failed: Job commit failed: 
org.apache.hadoop.tools.CopyListing$InvalidInputException: The source path 
'hdfs://nn/.reserved/raw/tmp/b' starts with /.reserved/raw but the target path 
'hdfs://nn/NONE' does not. Either all or none of the paths must have this 
prefix.
at 
org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:141)
at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:85)
at 
org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
at 
org.apache.hadoop.tools.mapred.CopyCommitter.deleteMissing(CopyCommitter.java:244)
at 
org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:94)
at 
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:274)
at 
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:237)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at org.apache.hadoop.tools.DistCp.execute(DistCp.java:187)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:122)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:429)
{code}

The issue is not with the distributed copy itself. It occurs when DistCp tries to 
delete files in the target that no longer exist in the source: during that delete 
phase it rebuilds a copy listing and re-validates the paths, and the dummy target 
path (hdfs://nn/NONE in the trace above) is not under /.reserved/raw, so the 
all-or-none prefix check fails.
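
A minimal sketch of the all-or-none prefix rule described by the error message above; this is illustrative only, not the actual SimpleCopyListing.validatePaths code.

{code}
import java.util.Arrays;
import java.util.List;

public class RawPrefixCheckSketch {
  private static final String RAW = "/.reserved/raw";

  // Throws if some, but not all, of the paths are under /.reserved/raw.
  static void validateRawPrefix(List<String> sources, String target) {
    boolean targetIsRaw = target.startsWith(RAW);
    for (String src : sources) {
      if (src.startsWith(RAW) != targetIsRaw) {
        throw new IllegalArgumentException(
            "Either all or none of the paths must start with " + RAW);
      }
    }
  }

  public static void main(String[] args) {
    // Mirrors the failing delete phase: a raw source listing checked against
    // the dummy target "/NONE", which does not carry the raw prefix.
    validateRawPrefix(Arrays.asList("/.reserved/raw/tmp/b"), "/NONE");
  }
}
{code}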



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13023) Distcp with -update feature on first time raw data not working

2016-04-13 Thread Mavin Martin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mavin Martin updated HADOOP-13023:
--
Description: 
When attempting to do a distcp with the -update feature toggled on encrypted 
data, the distcp shows as successful.  Reading the encrypted file on the 
target_path does not work since the keyName does not exist.  

Please see my example to reproduce the issue.

{code}
[root@xxx bin]# hdfs crypto -listZones
/tmp/a/ted    DEF00013
[root@xxx bin]# hdfs dfs -ls -R /tmp
drwxr-xr-x   - xxx xxx  0 2016-04-14 00:22 /tmp/a
drwxr-xr-x   - xxx xxx  0 2016-04-14 00:00 /tmp/a/ted
-rw-r--r--   3 xxx xxx 33 2016-04-14 00:00 /tmp/a/ted/test.txt
[root@xxx bin]# hadoop distcp -update /.reserved/raw/tmp/a/ted 
/.reserved/raw/tmp/a-with-update/ted
[root@xxx bin]# hdfs crypto -listZones
/tmp/a/ted    DEF00013
[root@xxx bin]# hadoop distcp /.reserved/raw/tmp/a/ted 
/.reserved/raw/tmp/a-no-update/ted
[root@xxx bin]# hdfs crypto -listZones
/tmp/a/ted    DEF00013
/tmp/a-no-update/ted  DEF00013
{code}

The crypto zone for 'a-with-update' should have been created since this is a 
new destination.  You can verify this by looking at 'a-no-update'.

  was:
When attempting to do a distcp with the -update feature toggled on encrypted 
data, the distcp shows as successful.  Reading the encrypted file on the 
target_path does not work since the keyName does not exist.  

Please see my example to reproduce the issue.

{code}
[root@xxx bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
[root@xxx bin]# hdfs dfs -ls -R /tmp
drwxr-xr-x   - xxx xxx  0 2016-04-14 00:22 /tmp/gms
drwxr-xr-x   - xxx xxx  0 2016-04-14 00:00 /tmp/gms/ted
-rw-r--r--   3 xxx xxx 33 2016-04-14 00:00 /tmp/gms/ted/test.txt
[root@xxx bin]# hadoop distcp -update /.reserved/raw/tmp/gms/ted 
/.reserved/raw/tmp/gms-with-update/ted
[root@xxx bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
[root@xxx bin]# hadoop distcp /.reserved/raw/tmp/gms/ted 
/.reserved/raw/tmp/gms-no-update/ted
[root@xxx bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
/tmp/gms-no-update/ted  DEF00013
{code}

The crypto zone for 'gms-with-update' should have been created since this is a 
new destination.  You can verify this by looking at 'gms-no-update'.


> Distcp with -update feature on first time raw data not working
> --
>
> Key: HADOOP-13023
> URL: https://issues.apache.org/jira/browse/HADOOP-13023
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Mavin Martin
>
> When attempting to do a distcp with the -update feature toggled on encrypted 
> data, the distcp shows as successful.  Reading the encrypted file on the 
> target_path does not work since the keyName does not exist.  
> Please see my example to reproduce the issue.
> {code}
> [root@xxx bin]# hdfs crypto -listZones
> /tmp/a/ted    DEF00013
> [root@xxx bin]# hdfs dfs -ls -R /tmp
> drwxr-xr-x   - xxx xxx  0 2016-04-14 00:22 /tmp/a
> drwxr-xr-x   - xxx xxx  0 2016-04-14 00:00 /tmp/a/ted
> -rw-r--r--   3 xxx xxx 33 2016-04-14 00:00 /tmp/a/ted/test.txt
> [root@xxx bin]# hadoop distcp -update /.reserved/raw/tmp/a/ted 
> /.reserved/raw/tmp/a-with-update/ted
> [root@xxx bin]# hdfs crypto -listZones
> /tmp/a/ted    DEF00013
> [root@xxx bin]# hadoop distcp /.reserved/raw/tmp/a/ted 
> /.reserved/raw/tmp/a-no-update/ted
> [root@xxx bin]# hdfs crypto -listZones
> /tmp/a/ted    DEF00013
> /tmp/a-no-update/ted  DEF00013
> {code}
> The crypto zone for 'a-with-update' should have been created since this is a 
> new destination.  You can verify this by looking at 'a-no-update'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12822) Change "Metrics intern cache overflow" log level from WARN to INFO

2016-04-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240476#comment-15240476
 ] 

Hudson commented on HADOOP-12822:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9608 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9608/])
HADOOP-12822. Change "Metrics intern cache overflow" log level from WARN 
(aajisaka: rev 27b131e79c5fa99de3ed4fb529d854dd5da55bde)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/Interns.java


> Change "Metrics intern cache overflow" log level from WARN to INFO
> --
>
> Key: HADOOP-12822
> URL: https://issues.apache.org/jira/browse/HADOOP-12822
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Akira AJISAKA
>Assignee: Andras Bokor
>Priority: Minor
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-12822.patch, HADOOP-12822_2.patch
>
>
> Interns.java outputs a "Metrics intern cache over flow" warn log for metrics 
> info/tag when the cache reaches the hard-coded limit and the oldest cache entry 
> is discarded for the first time. I'm thinking this log level can be changed to 
> info because:
> * there is no problem when the oldest cache entry is removed; if the metrics 
> info/tag is not in the cache, it is simply created again.
> * we cannot configure the maximum size of the cache, so there is no way to 
> suppress the warn log.
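
A minimal sketch of the kind of bounded intern cache described above, with the eviction message logged at INFO as proposed; the size limit, class name, and logging framework here are illustrative and not copied from Interns.java.

{code}
import java.util.LinkedHashMap;
import java.util.Map;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class BoundedInternCacheSketch<K, V> extends LinkedHashMap<K, V> {
  private static final Logger LOG =
      LoggerFactory.getLogger(BoundedInternCacheSketch.class);
  private static final int MAX_SIZE = 2000;   // illustrative limit

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    boolean overflow = size() > MAX_SIZE;
    if (overflow) {
      // Logged at INFO rather than WARN: evicting the oldest entry is harmless,
      // because a missing info/tag is simply created again on the next lookup.
      LOG.info("Metrics intern cache overflow at {} entries", size());
    }
    return overflow;
  }
}
{code}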



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12822) Change "Metrics intern cache overflow" log level from WARN to INFO

2016-04-13 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12822:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to branch-2.8 and above. Thanks [~boky01] for the contribution!

> Change "Metrics intern cache overflow" log level from WARN to INFO
> --
>
> Key: HADOOP-12822
> URL: https://issues.apache.org/jira/browse/HADOOP-12822
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Akira AJISAKA
>Assignee: Andras Bokor
>Priority: Minor
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-12822.patch, HADOOP-12822_2.patch
>
>
> Interns.java outputs a "Metrics intern cache over flow" warn log for metrics 
> info/tag when the cache reaches the hard-coded limit and the oldest cache entry 
> is discarded for the first time. I'm thinking this log level can be changed to 
> info because:
> * there is no problem when the oldest cache entry is removed; if the metrics 
> info/tag is not in the cache, it is simply created again.
> * we cannot configure the maximum size of the cache, so there is no way to 
> suppress the warn log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12822) Change "Metrics intern cache overflow" log level from WARN to INFO

2016-04-13 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240425#comment-15240425
 ] 

Akira AJISAKA commented on HADOOP-12822:


LGTM, +1.
bq. I am just curious, after migration will log4j remain under SLF4J?
Yes. SLF4J uses Log4j via the slf4j-log4j12 binding.

> Change "Metrics intern cache overflow" log level from WARN to INFO
> --
>
> Key: HADOOP-12822
> URL: https://issues.apache.org/jira/browse/HADOOP-12822
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Akira AJISAKA
>Assignee: Andras Bokor
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12822.patch, HADOOP-12822_2.patch
>
>
> Interns.java outputs a "Metrics intern cache over flow" warn log for metrics 
> info/tag when the cache reaches the hard-coded limit and the oldest cache entry 
> is discarded for the first time. I'm thinking this log level can be changed to 
> info because:
> * there is no problem when the oldest cache entry is removed; if the metrics 
> info/tag is not in the cache, it is simply created again.
> * we cannot configure the maximum size of the cache, so there is no way to 
> suppress the warn log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240416#comment-15240416
 ] 

Hadoop QA commented on HADOOP-12563:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 3s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
56s {color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 49s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 8m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 56s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 56s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 5s 
{color} | {color:red} root: patch generated 1 new + 6 unchanged - 27 fixed = 7 
total (was 33) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
10s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 4s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 100m 20s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 57s 

[jira] [Updated] (HADOOP-13023) Distcp with -update feature on first time raw data not working

2016-04-13 Thread Mavin Martin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mavin Martin updated HADOOP-13023:
--
Description: 
When attempting to do a distcp with the -update feature toggled on encrypted 
data, the distcp shows as successful.  Reading the encrypted file on the 
target_path does not work since the keyName does not exist.  

Please see my example to reproduce the issue.

{code}
[root@xxx bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
[root@xxx bin]# hdfs dfs -ls -R /tmp
drwxr-xr-x   - xxx xxx  0 2016-04-14 00:22 /tmp/gms
drwxr-xr-x   - xxx xxx  0 2016-04-14 00:00 /tmp/gms/ted
-rw-r--r--   3 xxx xxx 33 2016-04-14 00:00 /tmp/gms/ted/test.txt
[root@xxx bin]# hadoop distcp -update /.reserved/raw/tmp/gms/ted 
/.reserved/raw/tmp/gms-with-update/ted
[root@xxx bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
[root@xxx bin]# hadoop distcp /.reserved/raw/tmp/gms/ted 
/.reserved/raw/tmp/gms-no-update/ted
[root@xxx bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
/tmp/gms-no-update/ted  DEF00013
{code}

The crypto zone for 'gms-with-update' should have been created since this is a 
new destination.  You can verify this by looking at 'gms-no-update'.

  was:
When attempting to do a distcp with the -update feature toggled on encrypted 
data, the distcp shows as successful.  Reading the encrypted file on the 
target_path does not work since the keyName does not exist.  

Please see my example to reproduce the issue.

{code}
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs dfs -ls -R /tmp
drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 00:22 
/tmp/gms
drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 00:00 
/tmp/gms/ted
-rw-r--r--   3 WD5-SVT.gmspr0022 WD5-SVT.gmspr0022 33 2016-04-14 00:00 
/tmp/gms/ted/test.txt
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp -update 
/.reserved/raw/tmp/gms/ted /.reserved/raw/tmp/gms-with-update/ted
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp /.reserved/raw/tmp/gms/ted 
/.reserved/raw/tmp/gms-no-update/ted
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
/tmp/gms-no-update/ted  DEF00013
{code}

The crypto zone for 'gms-with-update' should have been created since this is a 
new destination.  You can verify this by looking at 'gms-no-update'.


> Distcp with -update feature on first time raw data not working
> --
>
> Key: HADOOP-13023
> URL: https://issues.apache.org/jira/browse/HADOOP-13023
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Mavin Martin
>
> When attempting to do a distcp with the -update feature toggled on encrypted 
> data, the distcp shows as successful.  Reading the encrypted file on the 
> target_path does not work since the keyName does not exist.  
> Please see my example to reproduce the issue.
> {code}
> [root@xxx bin]# hdfs crypto -listZones
> /tmp/gms/ted    DEF00013
> [root@xxx bin]# hdfs dfs -ls -R /tmp
> drwxr-xr-x   - xxx xxx  0 2016-04-14 00:22 /tmp/gms
> drwxr-xr-x   - xxx xxx  0 2016-04-14 00:00 /tmp/gms/ted
> -rw-r--r--   3 xxx xxx 33 2016-04-14 00:00 /tmp/gms/ted/test.txt
> [root@xxx bin]# hadoop distcp -update /.reserved/raw/tmp/gms/ted 
> /.reserved/raw/tmp/gms-with-update/ted
> [root@xxx bin]# hdfs crypto -listZones
> /tmp/gms/ted    DEF00013
> [root@xxx bin]# hadoop distcp /.reserved/raw/tmp/gms/ted 
> /.reserved/raw/tmp/gms-no-update/ted
> [root@xxx bin]# hdfs crypto -listZones
> /tmp/gms/ted    DEF00013
> /tmp/gms-no-update/ted  DEF00013
> {code}
> The crypto zone for 'gms-with-update' should have been created since this is 
> a new destination.  You can verify this by looking at 'gms-no-update'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13023) Distcp with -update feature on first time raw data not working

2016-04-13 Thread Mavin Martin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mavin Martin updated HADOOP-13023:
--
Affects Version/s: 2.6.0

> Distcp with -update feature on first time raw data not working
> --
>
> Key: HADOOP-13023
> URL: https://issues.apache.org/jira/browse/HADOOP-13023
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Mavin Martin
>
> When attempting to do a distcp with the -update feature toggled on encrypted 
> data, the distcp shows as successful.  Reading the encrypted file on the 
> target_path does not work since the keyName does not exist.  
> Please see my example to reproduce the issue.
> {code}
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
> /tmp/gms/ted    DEF00013
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs dfs -ls -R /tmp
> drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 
> 00:22 /tmp/gms
> drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 
> 00:00 /tmp/gms/ted
> -rw-r--r--   3 WD5-SVT.gmspr0022 WD5-SVT.gmspr0022 33 2016-04-14 
> 00:00 /tmp/gms/ted/test.txt
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp -update 
> /.reserved/raw/tmp/gms/ted /.reserved/raw/tmp/gms-with-update/ted
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
> /tmp/gms/ted    DEF00013
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp 
> /.reserved/raw/tmp/gms/ted /.reserved/raw/tmp/gms-no-update/ted
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
> /tmp/gms/ted    DEF00013
> /tmp/gms-no-update/ted  DEF00013
> {code}
> The crypto zone for 'gms-with-update' should have been created since this is 
> a new destination.  You can verify this by looking at 'gms-no-update'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13023) Distcp with -update feature on first time raw data not working

2016-04-13 Thread Mavin Martin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mavin Martin updated HADOOP-13023:
--
Description: 
When attempting to do a distcp with the -update feature toggled on encrypted 
data, the distcp shows as successful.  Reading the encrypted file on the 
target_path does not work since the keyName does not exist.  

Please see my example to reproduce the issue.

{code}
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs dfs -ls -R /tmp
drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 00:22 
/tmp/gms
drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 00:00 
/tmp/gms/ted
-rw-r--r--   3 WD5-SVT.gmspr0022 WD5-SVT.gmspr0022 33 2016-04-14 00:00 
/tmp/gms/ted/test.txt
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp -update 
/.reserved/raw/tmp/gms/ted /.reserved/raw/tmp/gms-with-update/ted
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp /.reserved/raw/tmp/gms/ted 
/.reserved/raw/tmp/gms-no-update/ted
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
/tmp/gms-no-update/ted  DEF00013
{code}

The crypto zone for gms-with-update should have been created since this is a 
new destination.  You can verify this by looking at gms-no-update.

  was:
When attempting to do a distcp with the -update feature toggled on encrypted 
data, the distcp shows as successful.  Reading the encrypted file on the 
target_path does not work since the keyName does not exist.  

Please see my example to reproduce the issue.

{code}
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs dfs -ls -R /tmp
drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 00:22 
/tmp/gms
drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 00:00 
/tmp/gms/ted
-rw-r--r--   3 WD5-SVT.gmspr0022 WD5-SVT.gmspr0022 33 2016-04-14 00:00 
/tmp/gms/ted/test.txt
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp -update 
/.reserved/raw/tmp/gms/ted /.reserved/raw/tmp/gms2/ted
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp /.reserved/raw/tmp/gms/ted 
/.reserved/raw/tmp/gms-no-update/ted
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
/tmp/gms-no-update/ted  DEF00013
{code}

The crypto zone for gms2 should have been created since this is a new 
destination.  You can verify this by looking at gms-no-update.


> Distcp with -update feature on first time raw data not working
> --
>
> Key: HADOOP-13023
> URL: https://issues.apache.org/jira/browse/HADOOP-13023
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mavin Martin
>
> When attempting to do a distcp with the -update feature toggled on encrypted 
> data, the distcp shows as successful.  Reading the encrypted file on the 
> target_path does not work since the keyName does not exist.  
> Please see my example to reproduce the issue.
> {code}
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
> /tmp/gms/ted    DEF00013
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs dfs -ls -R /tmp
> drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 
> 00:22 /tmp/gms
> drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 
> 00:00 /tmp/gms/ted
> -rw-r--r--   3 WD5-SVT.gmspr0022 WD5-SVT.gmspr0022 33 2016-04-14 
> 00:00 /tmp/gms/ted/test.txt
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp -update 
> /.reserved/raw/tmp/gms/ted /.reserved/raw/tmp/gms-with-update/ted
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
> /tmp/gms/ted    DEF00013
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp 
> /.reserved/raw/tmp/gms/ted /.reserved/raw/tmp/gms-no-update/ted
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
> /tmp/gms/ted    DEF00013
> /tmp/gms-no-update/ted  DEF00013
> {code}
> The crypto zone for gms-with-update should have been created since this is a 
> new destination.  You can verify this by looking at gms-no-update.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13023) Distcp with -update feature on first time raw data not working

2016-04-13 Thread Mavin Martin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mavin Martin updated HADOOP-13023:
--
Description: 
When attempting to do a distcp with the -update feature toggled on encrypted 
data, the distcp shows as successful.  Reading the encrypted file on the 
target_path does not work since the keyName does not exist.  

Please see my example to reproduce the issue.

{code}
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs dfs -ls -R /tmp
drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 00:22 
/tmp/gms
drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 00:00 
/tmp/gms/ted
-rw-r--r--   3 WD5-SVT.gmspr0022 WD5-SVT.gmspr0022 33 2016-04-14 00:00 
/tmp/gms/ted/test.txt
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp -update 
/.reserved/raw/tmp/gms/ted /.reserved/raw/tmp/gms-with-update/ted
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp /.reserved/raw/tmp/gms/ted 
/.reserved/raw/tmp/gms-no-update/ted
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
/tmp/gms-no-update/ted  DEF00013
{code}

The crypto zone for 'gms-with-update' should have been created since this is a 
new destination.  You can verify this by looking at 'gms-no-update'.

  was:
When attempting to do a distcp with the -update feature toggled on encrypted 
data, the distcp shows as successful.  Reading the encrypted file on the 
target_path does not work since the keyName does not exist.  

Please see my example to reproduce the issue.

{code}
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs dfs -ls -R /tmp
drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 00:22 
/tmp/gms
drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 00:00 
/tmp/gms/ted
-rw-r--r--   3 WD5-SVT.gmspr0022 WD5-SVT.gmspr0022 33 2016-04-14 00:00 
/tmp/gms/ted/test.txt
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp -update 
/.reserved/raw/tmp/gms/ted /.reserved/raw/tmp/gms-with-update/ted
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp /.reserved/raw/tmp/gms/ted 
/.reserved/raw/tmp/gms-no-update/ted
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
/tmp/gms-no-update/ted  DEF00013
{code}

The crypto zone for gms-with-update should have been created since this is a 
new destination.  You can verify this by looking at gms-no-update.


> Distcp with -update feature on first time raw data not working
> --
>
> Key: HADOOP-13023
> URL: https://issues.apache.org/jira/browse/HADOOP-13023
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mavin Martin
>
> When attempting to do a distcp with the -update feature toggled on encrypted 
> data, the distcp shows as successful.  Reading the encrypted file on the 
> target_path does not work since the keyName does not exist.  
> Please see my example to reproduce the issue.
> {code}
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
> /tmp/gms/ted    DEF00013
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs dfs -ls -R /tmp
> drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 
> 00:22 /tmp/gms
> drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 
> 00:00 /tmp/gms/ted
> -rw-r--r--   3 WD5-SVT.gmspr0022 WD5-SVT.gmspr0022 33 2016-04-14 
> 00:00 /tmp/gms/ted/test.txt
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp -update 
> /.reserved/raw/tmp/gms/ted /.reserved/raw/tmp/gms-with-update/ted
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
> /tmp/gms/ted    DEF00013
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp 
> /.reserved/raw/tmp/gms/ted /.reserved/raw/tmp/gms-no-update/ted
> [r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
> /tmp/gms/ted    DEF00013
> /tmp/gms-no-update/ted  DEF00013
> {code}
> The crypto zone for 'gms-with-update' should have been created since this is 
> a new destination.  You can verify this by looking at 'gms-no-update'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Created] (HADOOP-13023) Distcp with -update feature on first time raw data not working

2016-04-13 Thread Mavin Martin (JIRA)
Mavin Martin created HADOOP-13023:
-

 Summary: Distcp with -update feature on first time raw data not 
working
 Key: HADOOP-13023
 URL: https://issues.apache.org/jira/browse/HADOOP-13023
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mavin Martin


When attempting to do a distcp with the -update feature toggled on encrypted 
data, the distcp shows as successful.  Reading the encrypted file on the 
target_path does not work since the keyName does not exist.  

Please see my example to reproduce the issue.

{code}
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs dfs -ls -R /tmp
drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 00:22 
/tmp/gms
drwxr-xr-x   - WD5-SVT.gmspr0022 WD5-SVT.gmspr0022  0 2016-04-14 00:00 
/tmp/gms/ted
-rw-r--r--   3 WD5-SVT.gmspr0022 WD5-SVT.gmspr0022 33 2016-04-14 00:00 
/tmp/gms/ted/test.txt
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp -update 
/.reserved/raw/tmp/gms/ted /.reserved/raw/tmp/gms2/ted
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hadoop distcp /.reserved/raw/tmp/gms/ted 
/.reserved/raw/tmp/gms-no-update/ted
[r...@769wl02.b13.az2.eng.pdx.wd bin]# hdfs crypto -listZones
/tmp/gms/ted    DEF00013
/tmp/gms-no-update/ted  DEF00013
{code}

The crypto zone for gms2 should have been created since this is a new 
destination.  You can verify this by looking at gms-no-update.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12974) Create a CachingGetSpaceUsed implementation that uses df

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240301#comment-15240301
 ] 

Hadoop QA commented on HADOOP-12974:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 40s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 0s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 49s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 10s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 38s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798595/HADOOP-12974v2.patch |
| JIRA Issue | HADOOP-12974 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux db2f68508288 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 

[jira] [Commented] (HADOOP-12993) Change ShutdownHookManger complete shutdown log from INFO to DEBUG

2016-04-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240246#comment-15240246
 ] 

Hudson commented on HADOOP-12993:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9605 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9605/])
HADOOP-12993. Change ShutdownHookManger complete shutdown log from INFO (xyao: 
rev 8ced42daff5cd0cb11d26042ae8c8255ef629a40)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ShutdownHookManager.java


> Change ShutdownHookManger complete shutdown log from INFO to DEBUG 
> ---
>
> Key: HADOOP-12993
> URL: https://issues.apache.org/jira/browse/HADOOP-12993
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12993.00.patch
>
>
> "INFO util.ShutdownHookManager: ShutdownHookManger complete shutdown." should 
> be "DEBUG".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13022) S3 MD5 check fails on Server Side Encryption with AWS and default key is used

2016-04-13 Thread Leonardo Contreras (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240199#comment-15240199
 ] 

Leonardo Contreras commented on HADOOP-13022:
-

Looks like upgrading the aws-sdk client to 1.10.41+ fixes the issue.

> S3 MD5 check fails on Server Side Encryption with AWS and default key is used
> -
>
> Key: HADOOP-13022
> URL: https://issues.apache.org/jira/browse/HADOOP-13022
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Leonardo Contreras
>
> When server-side encryption with the "aws:kms" value and no custom key is used 
> in the S3A filesystem, the AWSClient fails when verifying the MD5:
> {noformat}
> Exception in thread "main" com.amazonaws.AmazonClientException: Unable to 
> verify integrity of data upload.  Client calculated content hash (contentMD5: 
> 1B2M2Y8AsgTpgAmY7PhCfg== in base 64) didn't match hash (etag: 
> c29fcc646e17c348bce9cca8f9d205f5 in hex) calculated by Amazon S3.  You may 
> need to delete the data stored in Amazon S3. (metadata.contentMD5: null, 
> md5DigestStream: 
> com.amazonaws.services.s3.internal.MD5DigestCalculatingInputStream@65d9e72a, 
> bucketName: abuse-messages-nonprod, key: 
> venus/raw_events/checkpoint/825eb6aa-543d-46b1-801f-42de9dbc1610/)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1492)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:1295)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:1272)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:969)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1888)
>   at 
> org.apache.spark.SparkContext$$anonfun$setCheckpointDir$2.apply(SparkContext.scala:2077)
>   at 
> org.apache.spark.SparkContext$$anonfun$setCheckpointDir$2.apply(SparkContext.scala:2074)
>   at scala.Option.map(Option.scala:145)
>   at 
> org.apache.spark.SparkContext.setCheckpointDir(SparkContext.scala:2074)
>   at 
> org.apache.spark.streaming.StreamingContext.checkpoint(StreamingContext.scala:237)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-13 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240196#comment-15240196
 ] 

Mike Yoder commented on HADOOP-12942:
-

So it's not just the absolute number of checkstyle violations, it knows which 
ones were yours. Ow!

Regarding the latest patch... it differs from the previous patch, which did pass the unit 
tests, in only 4 whitespace characters.  The 
hadoop.security.ssl.TestReloadingX509TrustManager test that failed here passes for me; it 
looks unrelated.


> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor exist in 
> all the implementations that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".
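To make that proposal slightly more concrete, here is a rough sketch of the kind of 
password-taking overload being suggested. The names and signatures are illustrative only; 
this is not the current CredentialProviderFactory API and not an excerpt from any attached 
patch.

{code}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.alias.CredentialProvider;

// Illustrative sketch only, not the existing API.
public abstract class PasswordAwareCredentialProviderFactory {

  // Today's style of factory method: the provider resolves the keystore
  // password itself (and currently falls back to the "none" default).
  public abstract CredentialProvider createProvider(URI providerName,
      Configuration conf) throws IOException;

  // Proposed addition: the caller supplies the keystore password explicitly,
  // and it is handed to the provider's constructor instead of the default.
  public abstract CredentialProvider createProvider(URI providerName,
      Configuration conf, char[] keystorePassword) throws IOException;
}
{code}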



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12974) Create a CachingGetSpaceUsed implementation that uses df

2016-04-13 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12974:
---
Attachment: HADOOP-12974v2.patch

> Create a CachingGetSpaceUsed implementation that uses df
> 
>
> Key: HADOOP-12974
> URL: https://issues.apache.org/jira/browse/HADOOP-12974
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12974v0.patch, HADOOP-12974v1.patch, 
> HADOOP-12974v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240111#comment-15240111
 ] 

Hadoop QA commented on HADOOP-12942:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 11s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 38 unchanged - 70 fixed = 38 total (was 108) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 11s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 26s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 39s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed junit tests | hadoop.net.TestClusterTopology |
|   | hadoop.security.ssl.TestReloadingX509TrustManager |
| JDK v1.7.0_95 Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798569/HADOOP-12942.004.patch
 |
| JIRA Issue | HADOOP-12942 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f8c7ca5a3aba 3.13.0-36-lowlatency 

[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-13 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240096#comment-15240096
 ] 

Larry McCay commented on HADOOP-12942:
--

Hi [~yoderme] - thanks for the new patch.
I will try and review it tonight or tomorrow.

Looks like you got flagged for adding a new checkstyle violation - even though 
you fixed 70. :)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor exist in 
> all the implementations that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13022) S3 MD5 check fails on Server Side Encryption with AWS and default key is used

2016-04-13 Thread Leonardo Contreras (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonardo Contreras updated HADOOP-13022:

Affects Version/s: (was: 2.6.4)
   2.8.0

> S3 MD5 check fails on Server Side Encryption with AWS and default key is used
> -
>
> Key: HADOOP-13022
> URL: https://issues.apache.org/jira/browse/HADOOP-13022
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Leonardo Contreras
>
> When server-side encryption with the "aws:kms" value and no custom key is used 
> in the S3A filesystem, the AWSClient fails when verifying the MD5:
> {noformat}
> Exception in thread "main" com.amazonaws.AmazonClientException: Unable to 
> verify integrity of data upload.  Client calculated content hash (contentMD5: 
> 1B2M2Y8AsgTpgAmY7PhCfg== in base 64) didn't match hash (etag: 
> c29fcc646e17c348bce9cca8f9d205f5 in hex) calculated by Amazon S3.  You may 
> need to delete the data stored in Amazon S3. (metadata.contentMD5: null, 
> md5DigestStream: 
> com.amazonaws.services.s3.internal.MD5DigestCalculatingInputStream@65d9e72a, 
> bucketName: abuse-messages-nonprod, key: 
> venus/raw_events/checkpoint/825eb6aa-543d-46b1-801f-42de9dbc1610/)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1492)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:1295)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:1272)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:969)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1888)
>   at 
> org.apache.spark.SparkContext$$anonfun$setCheckpointDir$2.apply(SparkContext.scala:2077)
>   at 
> org.apache.spark.SparkContext$$anonfun$setCheckpointDir$2.apply(SparkContext.scala:2074)
>   at scala.Option.map(Option.scala:145)
>   at 
> org.apache.spark.SparkContext.setCheckpointDir(SparkContext.scala:2074)
>   at 
> org.apache.spark.streaming.StreamingContext.checkpoint(StreamingContext.scala:237)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-13022) S3 MD5 check fails on Server Side Encryption with AWS and default key is used

2016-04-13 Thread Leonardo Contreras (JIRA)
Leonardo Contreras created HADOOP-13022:
---

 Summary: S3 MD5 check fails on Server Side Encryption with AWS and 
default key is used
 Key: HADOOP-13022
 URL: https://issues.apache.org/jira/browse/HADOOP-13022
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.4
Reporter: Leonardo Contreras


When server-side encryption with the "aws:kms" value and no custom key is used 
in the S3A filesystem, the AWSClient fails when verifying the MD5:
{noformat}
Exception in thread "main" com.amazonaws.AmazonClientException: Unable to 
verify integrity of data upload.  Client calculated content hash (contentMD5: 
1B2M2Y8AsgTpgAmY7PhCfg== in base 64) didn't match hash (etag: 
c29fcc646e17c348bce9cca8f9d205f5 in hex) calculated by Amazon S3.  You may need 
to delete the data stored in Amazon S3. (metadata.contentMD5: null, 
md5DigestStream: 
com.amazonaws.services.s3.internal.MD5DigestCalculatingInputStream@65d9e72a, 
bucketName: abuse-messages-nonprod, key: 
venus/raw_events/checkpoint/825eb6aa-543d-46b1-801f-42de9dbc1610/)
at 
com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1492)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:1295)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:1272)
at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:969)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1888)
at 
org.apache.spark.SparkContext$$anonfun$setCheckpointDir$2.apply(SparkContext.scala:2077)
at 
org.apache.spark.SparkContext$$anonfun$setCheckpointDir$2.apply(SparkContext.scala:2074)
at scala.Option.map(Option.scala:145)
at 
org.apache.spark.SparkContext.setCheckpointDir(SparkContext.scala:2074)
at 
org.apache.spark.streaming.StreamingContext.checkpoint(StreamingContext.scala:237)
{noformat}
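
For context, a minimal sketch of the client-side setup that leads to this path (the 
configuration property is the standard S3A encryption setting; the bucket and key names are 
taken from the stack trace above, and the exact values should be treated as illustrative 
rather than a confirmed reproduction):

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: enable server-side encryption with the default KMS key
// ("aws:kms", no custom key) and create a directory, which is the operation
// that fails with the MD5 mismatch shown above.
public class SseKmsMkdirs {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.server-side-encryption-algorithm", "aws:kms");
    FileSystem fs =
        FileSystem.get(new URI("s3a://abuse-messages-nonprod/"), conf);
    fs.mkdirs(new Path(
        "/venus/raw_events/checkpoint/825eb6aa-543d-46b1-801f-42de9dbc1610/"));
  }
}
{code}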



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12974) Create a CachingGetSpaceUsed implementation that uses df

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240011#comment-15240011
 ] 

Hadoop QA commented on HADOOP-12974:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 36s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 36s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 26s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 24s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 26s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 21s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798556/HADOOP-12974v1.patch |
| JIRA Issue | HADOOP-12974 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 22cef1e235ea 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git 

[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-13 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Attachment: HADOOP-12942.004.patch

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor exist in 
> all the implementations that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-13 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Patch Available  (was: Open)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor exist in 
> all the implementations that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-04-13 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Open  (was: Patch Available)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor exist in 
> all the implementations that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12563) Updated utility to create/modify token files

2016-04-13 Thread Matthew Paduano (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Paduano updated HADOOP-12563:
-
Attachment: HADOOP-12563.11.patch

Use the long form of ByteString.copyFrom to protect against over-long backing 
buffers returned by io.Text objects' getBytes().
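
A small illustrative fragment of the difference (not an excerpt from the patch):

{code}
import com.google.protobuf.ByteString;
import org.apache.hadoop.io.Text;

// Illustrative only: Text.getBytes() returns the backing array, which can be
// longer than the logical contents, so only getLength() bytes should be copied.
public class CopyFromExample {
  public static void main(String[] args) {
    Text alias = new Text("some-token-alias");

    // Risky: may copy stale bytes past the end of the logical value.
    ByteString wrong = ByteString.copyFrom(alias.getBytes());

    // Safe ("long form"): restricts the copy to the valid range.
    ByteString right =
        ByteString.copyFrom(alias.getBytes(), 0, alias.getLength());

    System.out.println(wrong.size() + " vs " + right.size());
  }
}
{code}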

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, dtutil-test-out, dtutil_diff_07_08, 
> example_dtutil_commands_and_output.txt, generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12974) Create a CachingGetSpaceUsed implementation that uses df

2016-04-13 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12974:
---
Affects Version/s: 2.9.0
   Status: Patch Available  (was: Open)

> Create a CachingGetSpaceUsed implementation that uses df
> 
>
> Key: HADOOP-12974
> URL: https://issues.apache.org/jira/browse/HADOOP-12974
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12974v0.patch, HADOOP-12974v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12974) Create a CachingGetSpaceUsed implementation that uses df

2016-04-13 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12974:
---
Attachment: HADOOP-12974v1.patch

Patch that creates a CachingGetSpaceUsed implementation that uses DF rather 
than DU to get the used space. I still need to get some more documentation and 
a test on this.
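
Roughly speaking, such an implementation can lean on org.apache.hadoop.fs.DF for 
partition-level numbers instead of walking the directory tree. The sketch below is only an 
outline of the idea, not the attached patch, and it assumes a simple GetSpaceUsed-style 
contract (a single getUsed() accessor) from HADOOP-12973.

{code}
import java.io.File;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.DF;

// Sketch only: report used space for a volume by asking the filesystem
// (df-style numbers) rather than running du over the directory tree.
public class DfBasedSpaceUsed {
  private final DF df;

  public DfBasedSpaceUsed(File path, Configuration conf) throws IOException {
    // DF reports partition-level statistics for the volume containing 'path'.
    this.df = new DF(path, conf);
  }

  public long getUsed() {
    // Partition capacity minus free space, instead of a du walk.
    return df.getUsed();
  }
}
{code}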

> Create a CachingGetSpaceUsed implementation that uses df
> 
>
> Key: HADOOP-12974
> URL: https://issues.apache.org/jira/browse/HADOOP-12974
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12974v0.patch, HADOOP-12974v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12974) Create a CachingGetSpaceUsed implementation that uses df

2016-04-13 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12974:
---
Summary: Create a CachingGetSpaceUsed implementation that uses df  (was: 
Create a DU implementation that uses df)

> Create a CachingGetSpaceUsed implementation that uses df
> 
>
> Key: HADOOP-12974
> URL: https://issues.apache.org/jira/browse/HADOOP-12974
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12974v0.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-5470) RunJar.unJar() should write the last modified time found in the jar entry to the uncompressed file

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15239883#comment-15239883
 ] 

Hadoop QA commented on HADOOP-5470:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 49s 
{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 57s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 8s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 4s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Exceptional return value of java.io.File.setLastModified(long) ignored in 
org.apache.hadoop.util.RunJar.unJar(File, File, Pattern)  At 
RunJar.java:ignored in org.apache.hadoop.util.RunJar.unJar(File, File, Pattern) 
 At RunJar.java:[line 108] |
| JDK v1.8.0_77 Failed junit tests | hadoop.net.TestDNS |
|   | hadoop.metrics2.impl.TestGangliaMetrics |
| JDK v1.7.0_95 Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 

[jira] [Commented] (HADOOP-12973) make DU pluggable

2016-04-13 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15239875#comment-15239875
 ] 

Elliott Clark commented on HADOOP-12973:


Thanks for all the reviews [~cmccabe]

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.8.0
>
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v10.patch, HADOOP-12973v11.patch, HADOOP-12973v12.patch, 
> HADOOP-12973v13.patch, HADOOP-12973v2.patch, HADOOP-12973v3.patch, 
> HADOOP-12973v5.patch, HADOOP-12973v6.patch, HADOOP-12973v7.patch, 
> HADOOP-12973v8.patch, HADOOP-12973v9.patch
>
>
> If people are concerned about replacing the call to DU, then an easy first 
> step is to make it pluggable. Then it's possible to replace it with something 
> while leaving the default alone.
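
As a rough sketch of what "pluggable" could look like here (the interface, the default 
class, and the configuration key below are placeholders, not the committed API), the choice 
of implementation can simply be driven by a class name in the configuration:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

// Illustrative sketch only; names are placeholders, not the committed API.
public class SpaceUsedPlugin {

  /** Minimal stand-in for a pluggable "how much space is used" strategy. */
  public interface GetSpaceUsed {
    long getUsed();
  }

  /** Placeholder default, standing in for today's du-based behaviour. */
  public static class DuBased implements GetSpaceUsed {
    @Override
    public long getUsed() {
      return 0L; // a real implementation would run du / track writes
    }
  }

  // Hypothetical configuration key selecting the implementation class.
  private static final String IMPL_KEY = "fs.getspaceused.classname";

  public static GetSpaceUsed create(Configuration conf) {
    Class<? extends GetSpaceUsed> clazz =
        conf.getClass(IMPL_KEY, DuBased.class, GetSpaceUsed.class);
    // The default stays the du-based behaviour unless overridden in config.
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}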



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-04-13 Thread Matthew Paduano (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15239827#comment-15239827
 ] 

Matthew Paduano commented on HADOOP-12563:
--

One of those test failures (TestGenericOptionsParser) is caused by a change in 
patch 10.
I am attaching patch 11 to fix that problem.

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> dtutil-test-out, dtutil_diff_07_08, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12619) Native memory leaks in CompressorStream

2016-04-13 Thread Jeff Faust (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15239818#comment-15239818
 ] 

Jeff Faust commented on HADOOP-12619:
-

Observed this behavior in 2.7.1 with 
org.apache.hadoop.io.compress.BZip2Codec::createOutputStream(OutputStream) - 
the process would eventually exhaust the native memory on the host and crash.  
Here's the relevant code:
{code}
@Override
  public CompressionOutputStream createOutputStream(OutputStream out)
  throws IOException {
return CompressionCodec.Util.
createOutputStreamWithCodecPool(this, conf, out);
  }
{code}
createOutputStreamWithCodecPool gets a compressor from the CodecPool, calls 
codec.createOutputStream(out, compressor), and then calls 
CompressionOutputStream::setTrackedCompressor(compressor) so that the 
compressor can be cleaned up later by the CompressionOutputStream:
{code}
static CompressionOutputStream createOutputStreamWithCodecPool(
CompressionCodec codec, Configuration conf, OutputStream out)
throws IOException {
  Compressor compressor = CodecPool.getCompressor(codec, conf);
  CompressionOutputStream stream = null;
  try {
stream = codec.createOutputStream(out, compressor);
  } finally {
if (stream == null) {
  CodecPool.returnCompressor(compressor);
} else {
  stream.setTrackedCompressor(compressor);
}
  }
  return stream;
}
{code}
CompressionOutputStream has a private trackedCompressor attribute that it 
returns to the CodecPool on close():
{code}
   private Compressor trackedCompressor;
   
   void setTrackedCompressor(Compressor compressor) {
trackedCompressor = compressor;
  }

@Override
  public void close() throws IOException {
finish();
out.close();
if (trackedCompressor != null) {
  CodecPool.returnCompressor(trackedCompressor);
  trackedCompressor = null;
}
  }
{code}
This would be great, but when 
CompressionCodec.Util.createOutputStreamWithCodecPool calls 
codec.createOutputStream(out, compressor), the BZip2Codec never actually 
creates a plain CompressionOutputStream; it creates an instance of one of two subclasses:
{code}
 @Override
  public CompressionOutputStream createOutputStream(OutputStream out,
  Compressor compressor) throws IOException {
return Bzip2Factory.isNativeBzip2Loaded(conf) ?
  new CompressorStream(out, compressor, 
   conf.getInt("io.file.buffer.size", 4*1024)) :
  new BZip2CompressionOutputStream(out);
  }
{code}
Each of these subclasses (CompressorStream and BZip2CompressionOutputStream) in 
turn overrides the close() method, and it will be one of these two 
implementations that will be called when the returned stream is closed.  
Neither implementation returns the compressor to the pool, so every time you 
ask the CodecPool for a compressor it creates a new one, allocating more native 
memory.  

One workaround is to deal directly with the CodecPool, and use the 
BZip2Codec::createOutputStream method that takes a compressor as a second 
argument - and of course to return the compressor to the CodecPool yourself as 
soon as you're finished with it.
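
Something along these lines is what that workaround looks like in practice (a sketch under 
the assumption that the caller owns the compressor's lifecycle; the surrounding stream 
handling is illustrative):

{code}
import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.BZip2Codec;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Compressor;

// Sketch of the workaround: borrow the compressor from the pool explicitly
// and return it explicitly, instead of relying on the stream's close().
public class PooledBzip2Writer {
  public static void write(OutputStream rawOut, byte[] data, Configuration conf)
      throws IOException {
    BZip2Codec codec = new BZip2Codec();
    codec.setConf(conf);
    Compressor compressor = CodecPool.getCompressor(codec, conf);
    try {
      CompressionOutputStream out = codec.createOutputStream(rawOut, compressor);
      out.write(data);
      out.finish();
      out.close();
    } finally {
      // CompressorStream / BZip2CompressionOutputStream won't return it, so we do.
      CodecPool.returnCompressor(compressor);
    }
  }
}
{code}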

> Native memory leaks in CompressorStream
> ---
>
> Key: HADOOP-12619
> URL: https://issues.apache.org/jira/browse/HADOOP-12619
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: wangchao
>
> The constructor of org.apache.hadoop.io.compress.CompressorStream requires an 
> org.apache.hadoop.io.compress.Compressor object to compress bytes, but it 
> does not invoke the compressor's finish method when the close method is called. 
> This may cause native memory leaks if the compressor is only used by 
> this CompressorStream object.
> I found this when setting up a Flume agent with gzip compression; the native 
> memory grows slowly and never falls back. 
> {code}
>   @Override
>   public CompressionOutputStream createOutputStream(OutputStream out) 
> throws IOException {
> return (ZlibFactory.isNativeZlibLoaded(conf)) ?
>new CompressorStream(out, createCompressor(),
> conf.getInt("io.file.buffer.size", 
> 4*1024)) :
>new GzipOutputStream(out);
>   }
>   @Override
>   public Compressor createCompressor() {
> return (ZlibFactory.isNativeZlibLoaded(conf))
>   ? new GzipZlibCompressor(conf)
>   : null;
>   }
> {code}
> The method of CompressorStream is
> {code}
>   @Override
>   public void close() throws IOException {
> if (!closed) {
>   finish();
>   out.close();
>   closed = true;
> }
>   }
>   @Override
>   public void finish() throws IOException {
> if (!compressor.finished()) {
>   compressor.finish();
>   while (!compressor.finished()) {
> compress();
>   }
> }
>   }
> {code}

[jira] [Updated] (HADOOP-10642) Provide option to limit heap memory consumed by dynamic metrics2 metrics

2016-04-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-10642:

Description: 
User sunweiei provided the following jmap output in HBase 0.96 deployment:
{code}
 num #instances #bytes  class name
--
   1:  14917882 3396492464  [C
   2:   1996994 2118021808  [B
   3:  43341650 1733666000  java.util.LinkedHashMap$Entry
   4:  14453983 1156550896  [Ljava.util.HashMap$Entry;
   5:  14446577  924580928  
org.apache.hadoop.metrics2.lib.Interns$CacheWith2Keys$2
{code}

Heap consumption by Interns$CacheWith2Keys$2 (and indirectly by [C) could be 
due to calls to Interns.info() in DynamicMetricsRegistry which was cloned off 
metrics2/lib/MetricsRegistry.java.

This scenario would arise when a large number of regions is tracked through 
metrics2 dynamically.
The Interns class doesn't provide an API to remove entries from its internal Map.

One solution is to provide an option that allows skipping calls to 
Interns.info() in metrics2/lib/MetricsRegistry.java.

  was:
User sunweiei provided the following jmap output in HBase 0.96 deployment:
{code}
 num #instances #bytes  class name
--
   1:  14917882 3396492464  [C
   2:   1996994 2118021808  [B
   3:  43341650 1733666000  java.util.LinkedHashMap$Entry
   4:  14453983 1156550896  [Ljava.util.HashMap$Entry;
   5:  14446577  924580928  
org.apache.hadoop.metrics2.lib.Interns$CacheWith2Keys$2
{code}

Heap consumption by Interns$CacheWith2Keys$2 (and indirectly by [C) could be 
due to calls to Interns.info() in DynamicMetricsRegistry which was cloned off 
metrics2/lib/MetricsRegistry.java.

This scenario would arise when large number of regions are tracked through 
metrics2 dynamically.
Interns class doesn't provide API to remove entries in its internal Map.


One solution is to provide an option that allows skipping calls to 
Interns.info() in metrics2/lib/MetricsRegistry.java


> Provide option to limit heap memory consumed by dynamic metrics2 metrics
> 
>
> Key: HADOOP-10642
> URL: https://issues.apache.org/jira/browse/HADOOP-10642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Ted Yu
>
> User sunweiei provided the following jmap output in HBase 0.96 deployment:
> {code}
>  num #instances #bytes  class name
> --
>1:  14917882 3396492464  [C
>2:   1996994 2118021808  [B
>3:  43341650 1733666000  java.util.LinkedHashMap$Entry
>4:  14453983 1156550896  [Ljava.util.HashMap$Entry;
>5:  14446577  924580928  
> org.apache.hadoop.metrics2.lib.Interns$CacheWith2Keys$2
> {code}
> Heap consumption by Interns$CacheWith2Keys$2 (and indirectly by [C) could be 
> due to calls to Interns.info() in DynamicMetricsRegistry which was cloned off 
> metrics2/lib/MetricsRegistry.java.
> This scenario would arise when large number of regions are tracked through 
> metrics2 dynamically.
> Interns class doesn't provide API to remove entries in its internal Map.
> One solution is to provide an option that allows skipping calls to 
> Interns.info() in metrics2/lib/MetricsRegistry.java
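
For illustration, a minimal sketch of what such an option could look like. The 
config key and wrapper class are hypothetical; only Interns.info() and the 
MetricsInfo interface are existing APIs:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.metrics2.MetricsInfo;
import org.apache.hadoop.metrics2.lib.Interns;

public class MetricsInfoFactory {
  private final boolean internMetricInfo;

  public MetricsInfoFactory(Configuration conf) {
    // Hypothetical key: when false, skip the interned (cached) path entirely.
    this.internMetricInfo = conf.getBoolean("metrics2.intern.metric.info", true);
  }

  public MetricsInfo info(final String name, final String description) {
    if (internMetricInfo) {
      return Interns.info(name, description);  // cached; grows with unique names
    }
    // Uncached per-call instance, eligible for GC once the metric is dropped.
    return new MetricsInfo() {
      @Override public String name() { return name; }
      @Override public String description() { return description; }
    };
  }
}
{code}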



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12811) Change kms server port number which conflicts with HMaster port number

2016-04-13 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12811:
---
Attachment: HADOOP-12811.01.patch

HDFS-9427 seems to be close to commit. I'm attaching a patch for review. The KMS 
port is changed from 16000 to 9600. Thanks.

I picked 9600 because it's in the same range as HDFS-9427. I also searched the 
Apache GitHub organization; 9600 does not seem to be used by other applications 
(except [juddi|https://juddi.apache.org/], which I feel should be okay).

I also searched across projects for 16000 and found that Impala should update [some 
files|https://github.com/apache/incubator-impala/search?utf8=%E2%9C%93&q=16000] 
due to this change. But that should be covered by the 'incompatible' flag 
anyway. :)

> Change kms server port number which conflicts with HMaster port number
> --
>
> Key: HADOOP-12811
> URL: https://issues.apache.org/jira/browse/HADOOP-12811
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.1, 2.7.0, 2.7.1, 2.7.2, 2.6.2, 2.6.3
>Reporter: Yufeng Jiang
>Assignee: Xiao Chen
>  Labels: incompatible, patch
> Attachments: HADOOP-12811.01.patch
>
>
> The HBase's HMaster port number conflicts with Hadoop kms port number. Both 
> uses 16000.
> There might be use cases user need kms and HBase present on the same cluster. 
> The HBase is able to encrypt its HFiles but user might need KMS to encrypt 
> other HDFS directories.
> Users would have to manually override the default port of either application 
> on their cluster. It would be nice to have different default ports so kms and 
> HBase could naturally coexist. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12811) Change kms server port number which conflicts with HMaster port number

2016-04-13 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12811:
---
Hadoop Flags: Incompatible change

> Change kms server port number which conflicts with HMaster port number
> --
>
> Key: HADOOP-12811
> URL: https://issues.apache.org/jira/browse/HADOOP-12811
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.1, 2.7.0, 2.7.1, 2.7.2, 2.6.2, 2.6.3
>Reporter: Yufeng Jiang
>Assignee: Xiao Chen
>  Labels: incompatible, patch
> Attachments: HADOOP-12811.01.patch
>
>
> The HBase's HMaster port number conflicts with Hadoop kms port number. Both 
> uses 16000.
> There might be use cases user need kms and HBase present on the same cluster. 
> The HBase is able to encrypt its HFiles but user might need KMS to encrypt 
> other HDFS directories.
> Users would have to manually override the default port of either application 
> on their cluster. It would be nice to have different default ports so kms and 
> HBase could naturally coexist. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12924) Add default coder key for creating raw coders

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15239686#comment-15239686
 ] 

Hadoop QA commented on HADOOP-12924:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 37s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 12s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 41s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 53s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 2s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 28s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color} |
| 

[jira] [Assigned] (HADOOP-5470) RunJar.unJar() should write the last modified time found in the jar entry to the uncompressed file

2016-04-13 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HADOOP-5470:


Assignee: Andras Bokor

> RunJar.unJar() should write the last modified time found in the jar entry to 
> the uncompressed file
> --
>
> Key: HADOOP-5470
> URL: https://issues.apache.org/jira/browse/HADOOP-5470
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 0.18.0, 0.18.1, 0.18.2, 0.18.3, 0.19.0, 0.19.1
>Reporter: Colin Evans
>Assignee: Andras Bokor
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-5470.01.patch
>
>
> For tools like jruby and jython, last modified times determine if a script 
> gets recompiled.  Losing the correct last modified time causes some 
> unfortunate recompilation race conditions when a job is running.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-5470) RunJar.unJar() should write the last modified time found in the jar entry to the uncompressed file

2016-04-13 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-5470:
-
Attachment: HADOOP-5470.01.patch

> RunJar.unJar() should write the last modified time found in the jar entry to 
> the uncompressed file
> --
>
> Key: HADOOP-5470
> URL: https://issues.apache.org/jira/browse/HADOOP-5470
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 0.18.0, 0.18.1, 0.18.2, 0.18.3, 0.19.0, 0.19.1
>Reporter: Colin Evans
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-5470.01.patch
>
>
> For tools like jruby and jython, last modified times determine if a script 
> gets recompiled.  Losing the correct last modified time causes some 
> unfortunate recompilation race conditions when a job is running.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-5470) RunJar.unJar() should write the last modified time found in the jar entry to the uncompressed file

2016-04-13 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-5470:
-
Status: Patch Available  (was: Open)

> RunJar.unJar() should write the last modified time found in the jar entry to 
> the uncompressed file
> --
>
> Key: HADOOP-5470
> URL: https://issues.apache.org/jira/browse/HADOOP-5470
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 0.19.1, 0.19.0, 0.18.3, 0.18.2, 0.18.1, 0.18.0
>Reporter: Colin Evans
>Assignee: Andras Bokor
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-5470.01.patch
>
>
> For tools like jruby and jython, last modified times determine if a script 
> gets recompiled.  Losing the correct last modified time causes some 
> unfortunate recompilation race conditions when a job is running.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-5470) RunJar.unJar() should write the last modified time found in the jar entry to the uncompressed file

2016-04-13 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-5470:
-
Attachment: (was: HADOOP-5470)

> RunJar.unJar() should write the last modified time found in the jar entry to 
> the uncompressed file
> --
>
> Key: HADOOP-5470
> URL: https://issues.apache.org/jira/browse/HADOOP-5470
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 0.18.0, 0.18.1, 0.18.2, 0.18.3, 0.19.0, 0.19.1
>Reporter: Colin Evans
>Priority: Minor
>  Labels: newbie
>
> For tools like jruby and jython, last modified times determine if a script 
> gets recompiled.  Losing the correct last modified time causes some 
> unfortunate recompilation race conditions when a job is running.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13020) dfs -ls s3a root should not return error when bucket is empty

2016-04-13 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13020:

Labels: s3  (was: )

> dfs -ls s3a root should not return error when bucket is empty
> -
>
> Key: HADOOP-13020
> URL: https://issues.apache.org/jira/browse/HADOOP-13020
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>  Labels: s3
>
> Do not expect {{hdfs dfs -ls}} s3a root to return error "No such file or 
> directory" when the s3 bucket is empty. Expect no error and empty output, 
> just like listing an empty directory.
> {code}
> $ hdfs dfs -ls s3a://jz-hdfs1/
> Found 1 items
> drwxrwxrwx   -  0 1969-12-31 16:00 s3a://jz-hdfs1/tmp
> $ hdfs dfs -rmdir s3a://jz-hdfs1/tmp
> $ hdfs dfs -ls s3a://jz-hdfs1/
> ls: `s3a://jz-hdfs1/': No such file or directory
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-13020) dfs -ls s3a root should not return error when bucket is empty

2016-04-13 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-13020.
-
Resolution: Duplicate

Sorry, I missed it when searching for duplicates. I was wondering why nobody had 
reported such a simple case. Added component {{fs/s3}} to HADOOP-11918 to help 
duplicate searches.

> dfs -ls s3a root should not return error when bucket is empty
> -
>
> Key: HADOOP-13020
> URL: https://issues.apache.org/jira/browse/HADOOP-13020
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>
> Do not expect {{hdfs dfs -ls}} s3a root to return error "No such file or 
> directory" when the s3 bucket is empty. Expect no error and empty output, 
> just like listing an empty directory.
> {code}
> $ hdfs dfs -ls s3a://jz-hdfs1/
> Found 1 items
> drwxrwxrwx   -  0 1969-12-31 16:00 s3a://jz-hdfs1/tmp
> $ hdfs dfs -rmdir s3a://jz-hdfs1/tmp
> $ hdfs dfs -ls s3a://jz-hdfs1/
> ls: `s3a://jz-hdfs1/': No such file or directory
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11918) Listing an empty s3a root directory throws FileNotFound.

2016-04-13 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-11918:

Component/s: fs/s3

> Listing an empty s3a root directory throws FileNotFound.
> 
>
> Key: HADOOP-11918
> URL: https://issues.apache.org/jira/browse/HADOOP-11918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR, s3
> Fix For: 2.8.0
>
> Attachments: HADOOP-11918-002.patch, HADOOP-11918-003.patch, 
> HADOOP-11918.000.patch, HADOOP-11918.001.patch, HADOOP-11918.004.patch
>
>
> With an empty s3 bucket and run
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/': No such file or directory
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11786) Fix Javadoc typos in org.apache.hadoop.fs.FileSystem

2016-04-13 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-11786:
--
Attachment: HADOOP-11786.patch

[~airbots] I have a patch for this class. Could you please check it? (Most of 
the supplied tag descriptions were copied from subclasses.)

> Fix Javadoc typos in org.apache.hadoop.fs.FileSystem
> 
>
> Key: HADOOP-11786
> URL: https://issues.apache.org/jira/browse/HADOOP-11786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Chen He
>Assignee: Yanjun Wang
>Priority: Trivial
>  Labels: newbie++
> Attachments: HADOOP-11786.patch
>
>
> /**
>  * Resets all statistics to 0.
>  *
>  * In order to reset, we add up all the thread-local statistics data, and
>  * set rootData to the negative of that.
>  *
>  * This may seem like a counterintuitive way to reset the statsitics.  Why
>  * can't we just zero out all the thread-local data?  Well, thread-local
>  * data can only be modified by the thread that owns it.  If we tried to
>  * modify the thread-local data from this thread, our modification might 
> get
>  * interleaved with a read-modify-write operation done by the thread that
>  * owns the data.  That would result in our update getting lost.
>  *
>  * The approach used here avoids this problem because it only ever reads
>  * (not writes) the thread-local data.  Both reads and writes to rootData
>  * are done under the lock, so we're free to modify rootData from any 
> thread
>  * that holds the lock.
>  */
> etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12924) Add default coder key for creating raw coders

2016-04-13 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HADOOP-12924:

Attachment: HADOOP-12924.4.patch

Updated the patch based on my offline discussion with Kai:
1. Add an assertion that conf != null when creating the raw coders, and change 
some tests accordingly.
2. Add getCodecName to ErasureCodingPolicy.
3. If the codec name is unknown, consider it a custom codec and derive a raw coder 
config key for it. Throw an exception if such a key is not configured; 
otherwise create the raw coder using the configured raw coder factory (see the 
sketch below).
4. Add a test for the codec-to-raw-coder mapping.
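
For illustration, a rough sketch of the fallback in point 3. The config key 
format and class names here are illustrative assumptions, not the actual 
changes in the patch:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderFactory;
import org.apache.hadoop.util.ReflectionUtils;

public class CustomCodecCoderLookup {
  public static RawErasureCoderFactory factoryFor(Configuration conf,
      String codecName) {
    // Hypothetical per-codec key naming a custom codec's raw coder factory.
    String key = "io.erasurecode.codec." + codecName + ".rawcoder";
    Class<? extends RawErasureCoderFactory> clazz =
        conf.getClass(key, null, RawErasureCoderFactory.class);
    if (clazz == null) {
      // Unknown codec and nothing configured for it: fail loudly.
      throw new IllegalArgumentException("Unknown codec " + codecName
          + " and no raw coder factory configured under " + key);
    }
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}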

> Add default coder key for creating raw coders
> -
>
> Key: HADOOP-12924
> URL: https://issues.apache.org/jira/browse/HADOOP-12924
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Minor
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-12924.1.patch, HADOOP-12924.2.patch, 
> HADOOP-12924.3.patch, HADOOP-12924.4.patch
>
>
> As suggested 
> [here|https://issues.apache.org/jira/browse/HADOOP-12826?focusedCommentId=15194402=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15194402].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-5470) RunJar.unJar() should write the last modified time found in the jar entry to the uncompressed file

2016-04-13 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-5470:
-
Attachment: HADOOP-5470

[~colinhevans]
Could you please check my patch? I created some JUnit tests as test evidence. 
Thanks.
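
For reference, a minimal sketch of the behavior being requested (illustrative 
only, not the attached patch): copy each jar entry's timestamp onto the 
extracted file.
{code}
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class UnJarWithTimes {
  public static void unJar(File jarFile, File toDir) throws IOException {
    try (JarFile jar = new JarFile(jarFile)) {
      Enumeration<JarEntry> entries = jar.entries();
      while (entries.hasMoreElements()) {
        JarEntry entry = entries.nextElement();
        if (entry.isDirectory()) {
          continue;
        }
        File out = new File(toDir, entry.getName());
        out.getParentFile().mkdirs();
        try (InputStream in = jar.getInputStream(entry)) {
          Files.copy(in, out.toPath(), StandardCopyOption.REPLACE_EXISTING);
        }
        if (entry.getTime() != -1) {
          // Preserve the entry's last-modified time so tools like jruby and
          // jython see the original timestamp and skip needless recompilation.
          out.setLastModified(entry.getTime());
        }
      }
    }
  }
}
{code}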

> RunJar.unJar() should write the last modified time found in the jar entry to 
> the uncompressed file
> --
>
> Key: HADOOP-5470
> URL: https://issues.apache.org/jira/browse/HADOOP-5470
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 0.18.0, 0.18.1, 0.18.2, 0.18.3, 0.19.0, 0.19.1
>Reporter: Colin Evans
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-5470
>
>
> For tools like jruby and jython, last modified times determine if a script 
> gets recompiled.  Losing the correct last modified time causes some 
> unfortunate recompilation race conditions when a job is running.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13018) Make Kdiag fail fast if hadoop.token.files points to non-existent file

2016-04-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15239282#comment-15239282
 ] 

Steve Loughran commented on HADOOP-13018:
-

This should target whichever Hadoop version added hadoop.token.files.

> Make Kdiag fail fast if hadoop.token.files points to non-existent file
> --
>
> Key: HADOOP-13018
> URL: https://issues.apache.org/jira/browse/HADOOP-13018
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Ravi Prakash
>
> Steve proposed that KDiag should fail fast to help debug the case where 
> hadoop.token.files points to a file that is not found. This JIRA is to effect that.
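
For illustration, a minimal sketch of such a fail-fast check, assuming the 
entries in hadoop.token.files are plain local paths (the class and method names 
are illustrative, not KDiag's actual code):
{code}
import java.io.File;
import java.io.FileNotFoundException;
import org.apache.hadoop.conf.Configuration;

public class TokenFilesCheck {
  // Verify every entry in hadoop.token.files exists before doing anything else.
  public static void verify(Configuration conf) throws FileNotFoundException {
    for (String name : conf.getTrimmedStrings("hadoop.token.files")) {
      File f = new File(name);
      if (!f.isFile()) {
        throw new FileNotFoundException(
            "hadoop.token.files entry does not exist: " + f.getAbsolutePath());
      }
    }
  }
}
{code}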



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13017) Implementations of IOStream.read(buffer, offset, bytes) to exit 0 if bytes==0

2016-04-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15239280#comment-15239280
 ] 

Steve Loughran commented on HADOOP-13017:
-

There's a test for this in the seek contract test, but an explicit one can 
be added to another contract test, as it isn't directly seek-related; it just 
went in as part of the positioned-readable work.

> Implementations of IOStream.read(buffer, offset, bytes) to exit 0 if bytes==0
> -
>
> Key: HADOOP-13017
> URL: https://issues.apache.org/jira/browse/HADOOP-13017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HDFS-13017-001.patch
>
>
> HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there was 
> no data left in the stream; Java IO says 
> bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
> otherwise, there is an attempt to read at least one byte.
> Review the implementations of {{IOStream.read(buffer, offset, bytes)}} and, where 
> necessary and considered safe, add a fast exit if the length is 0.
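
For illustration, the kind of fast exit being asked for, written here as a 
simple wrapper stream rather than any particular Hadoop implementation:
{code}
import java.io.IOException;
import java.io.InputStream;

public class ZeroLengthAwareStream extends InputStream {
  private final InputStream wrapped;

  public ZeroLengthAwareStream(InputStream wrapped) {
    this.wrapped = wrapped;
  }

  @Override
  public int read() throws IOException {
    return wrapped.read();
  }

  @Override
  public int read(byte[] buf, int off, int len) throws IOException {
    if (len == 0) {
      // Per the java.io.InputStream contract: a zero-length read returns 0,
      // even at end of stream, rather than -1.
      return 0;
    }
    return wrapped.read(buf, off, len);
  }
}
{code}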



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2016-04-13 Thread Bolke de Bruin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15239077#comment-15239077
 ] 

Bolke de Bruin commented on HADOOP-12751:
-

Ah. I like that approach. I will cook something up, hopefully today

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch
>
>
> In the scenario of a trust between two directories, e.g. FreeIPA (ipa.local) 
> and Active Directory (ad.local), users can be made available on the OS level 
> by something like sssd. The trusted users will be of the form 'user@ad.local' 
> while other users will not contain the domain. Executing 'id -Gn 
> user@ad.local' will successfully return the groups the user belongs to if 
> configured correctly. 
> However, it is assumed by Hadoop that user names containing '@' cannot be 
> correct. This code is in KerberosName.java and seems to be a validator for 
> whether the 'auth_to_local' rules are applied correctly.
> In my opinion this should be removed or changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (by having a rewrite by system tools to, for 
> example, user_ad_local) due to downstream consequences.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-04-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15239025#comment-15239025
 ] 

Steve Loughran commented on HADOOP-12911:
-

Quick patch review

# KerberosSecurityTestcase should be retained/updated; tests shouldn't need to 
move off it
# does {{hadoop-auth/pom.xml}} need changing for this patch? 

Overall, I'm thinking this is significant enough that it has to be a 3.0 change; 
not so much for the minikdc itself but for the whole migration off the existing 
code and the change of dependencies.

> Upgrade Hadoop MiniKDC with Kerby
> -
>
> Key: HADOOP-12911
> URL: https://issues.apache.org/jira/browse/HADOOP-12911
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-12911-v1.patch, HADOOP-12911-v2.patch, 
> HADOOP-12911-v3.patch, HADOOP-12911-v4.patch, HADOOP-12911-v5.patch, 
> HADOOP-12911-v6.patch
>
>
> As discussed in the mailing list, we’d like to introduce Apache Kerby into 
> Hadoop. Initially it’s good to start with upgrading Hadoop MiniKDC with Kerby 
> offerings. Apache Kerby (https://github.com/apache/directory-kerby), as an 
> Apache Directory sub project, is a Java Kerberos binding. It provides a 
> SimpleKDC server that borrowed ideas from MiniKDC and implemented all the 
> facilities existing in MiniKDC. Currently MiniKDC depends on the old Kerberos 
> implementation in Directory Server project, but the implementation is stopped 
> being maintained. Directory community has a plan to replace the 
> implementation using Kerby. MiniKDC can use Kerby SimpleKDC directly to avoid 
> depending on the full of Directory project. Kerby also provides nice identity 
> backends such as the lightweight memory based one and the very simple json 
> one for easy development and test environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-04-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15239021#comment-15239021
 ] 

Steve Loughran commented on HADOOP-12911:
-

MiniKDC does sometimes get used, but it's been weak and not ideal: limited 
support for protocols, a pain to set up, and I could never get it to issue 
tickets for >1 person in the same JVM (though that was probably UGI's static 
initializers there).

I don't know how much use of MiniKDC there is outside; 
[mvnrepo|http://mvnrepository.com/artifact/org.apache.hadoop/hadoop-minikdc/usages]
 says: Kafka, HBase, Accumulo ... these are all people we can talk to.

This is the evolution of classic MiniKDC: it will have to move on; what needs 
to be done is to do it carefully and keep the users of the module happy.

> Upgrade Hadoop MiniKDC with Kerby
> -
>
> Key: HADOOP-12911
> URL: https://issues.apache.org/jira/browse/HADOOP-12911
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-12911-v1.patch, HADOOP-12911-v2.patch, 
> HADOOP-12911-v3.patch, HADOOP-12911-v4.patch, HADOOP-12911-v5.patch, 
> HADOOP-12911-v6.patch
>
>
> As discussed in the mailing list, we’d like to introduce Apache Kerby into 
> Hadoop. Initially it’s good to start with upgrading Hadoop MiniKDC with Kerby 
> offerings. Apache Kerby (https://github.com/apache/directory-kerby), as an 
> Apache Directory sub project, is a Java Kerberos binding. It provides a 
> SimpleKDC server that borrowed ideas from MiniKDC and implemented all the 
> facilities existing in MiniKDC. Currently MiniKDC depends on the old Kerberos 
> implementation in Directory Server project, but the implementation is stopped 
> being maintained. Directory community has a plan to replace the 
> implementation using Kerby. MiniKDC can use Kerby SimpleKDC directly to avoid 
> depending on the full of Directory project. Kerby also provides nice identity 
> backends such as the lightweight memory based one and the very simple json 
> one for easy development and test environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2016-04-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15239016#comment-15239016
 ] 

Steve Loughran commented on HADOOP-12751:
-

# we have to leave the auth code in hadoop-auth; things downstream sometimes 
import that specific JAR and expect kerberos to be there. (I don't know why the 
auth stuff isn't in hadoop-common; that's an inconvenience and a mystery)
# and we can't move Configuration, not when it triggers the loading of 
core-default and core-site XML, which would have to be in too, etc, etc.

Here's an alternate proposal.

# the logic to pattern-check is retained, and the check is still made
#  but it's downgraded to a log at INFO. People can even edit log4j to make 
that go away
# KDiag is extended to do the pattern check, with an option to fail if the 
username is considered invalid

This way there's no need to reconfigure the client, some information gets 
published to explain why things aren't working, and KDiag does the full checking
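
For illustration, a rough sketch of that downgrade. The class, field and method 
names are hypothetical, not the actual KerberosName code; only the idea (log 
instead of reject) is taken from the proposal above:
{code}
import java.util.regex.Pattern;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class NonSimpleNameCheck {
  private static final Logger LOG =
      LoggerFactory.getLogger(NonSimpleNameCheck.class);
  private static final Pattern NON_SIMPLE = Pattern.compile("[/@]");

  public static String check(String shortName) {
    if (NON_SIMPLE.matcher(shortName).find()) {
      // Previously this case would be rejected; downgraded to INFO so names
      // like user@ad.local pass through to the OS for group resolution.
      LOG.info("Short name {} still contains '@' or '/' after auth_to_local rules",
          shortName);
    }
    return shortName;
  }
}
{code}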

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch
>
>
> In the scenario of a trust between two directories, eg. FreeIPA (ipa.local) 
> and Active Directory (ad.local) users can be made available on the OS level 
> by something like sssd. The trusted users will be of the form 'user@ad.local' 
> while other users are will not contain the domain. Executing 'id -Gn 
> user@ad.local' will successfully return the groups the user belongs to if 
> configured correctly. 
> However, it is assumed by Hadoop that users of the format with '@' cannot be 
> correct. This code is in KerberosName.java and seems to be a validator if the 
> 'auth_to_local' rules are applied correctly.
> In my opinion this should be removed or changed to a different kind of check 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workaround are difficult to apply (by having a rewrite by system tools to for 
> example user_ad_local) due to down stream consequences.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13021) Hadoop swift driver unit test should use unique directory for each run

2016-04-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238978#comment-15238978
 ] 

Steve Loughran commented on HADOOP-13021:
-

You shouldn't be editing core-site.xml: if that is where you are putting login 
details for an object store, and you've committed that to some form of SCM: 
stop, revert, change the login credentials.

The JUnit tests in swift are designed to pick up details from auth-keys.xml. Not 
only can you have a different one for each machine; if you use an absolute XML 
include reference you can pick up hard-coded values. And you can then set up a 
separate bucket for each machine.

I would recommend you isolate with a separate bucket per host, not a path 
underneath it. Things like test cleanup may interfere ... it's designed to purge 
everything to ensure you don't run up storage bills.


{code}
  <include xmlns="http://www.w3.org/2001/XInclude"
    href="file:///users/stevel/.ssh/auth-keys.xml" />
{code}


> Hadoop swift driver unit test should use unique directory for each run
> --
>
> Key: HADOOP-13021
> URL: https://issues.apache.org/jira/browse/HADOOP-13021
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Affects Versions: 2.7.2
>Reporter: Chen He
>Assignee: Chen He
>  Labels: unit-test
>
> Since all "unit test" in swift package are actually functionality test, it 
> requires server's information in the core-site.xml file. However, multiple 
> unit test runs on difference machines using the same core-site.xml file will 
> result in some unit tests failure. For example:
> In TestSwiftFileSystemBasicOps.java
> public void testMkDir() throws Throwable {
> Path path = new Path("/test/MkDir");
> fs.mkdirs(path);
> //success then -so try a recursive operation
> fs.delete(path, true);
>   }
> It is possible that machines A and B are running "mvn clean install" using the 
> same core-site.xml file. However, machine A runs testMkDir() first and deletes 
> the dir, while machine B then tries to run fs.delete(path, true), which reports 
> a failure. This is just an example; there are many similar cases in the unit 
> test suites. I would propose we use a unique dir for each unit test run instead 
> of using "Path path = new Path("/test/MkDir")" for all concurrent runs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13020) dfs -ls s3a root should not return error when bucket is empty

2016-04-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238970#comment-15238970
 ] 

Steve Loughran commented on HADOOP-13020:
-

isn't this HADOOP-11918

> dfs -ls s3a root should not return error when bucket is empty
> -
>
> Key: HADOOP-13020
> URL: https://issues.apache.org/jira/browse/HADOOP-13020
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>
> Do not expect {{hdfs dfs -ls}} s3a root to return error "No such file or 
> directory" when the s3 bucket is empty. Expect no error and empty output, 
> just like listing an empty directory.
> {code}
> $ hdfs dfs -ls s3a://jz-hdfs1/
> Found 1 items
> drwxrwxrwx   -  0 1969-12-31 16:00 s3a://jz-hdfs1/tmp
> $ hdfs dfs -rmdir s3a://jz-hdfs1/tmp
> $ hdfs dfs -ls s3a://jz-hdfs1/
> ls: `s3a://jz-hdfs1/': No such file or directory
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238956#comment-15238956
 ] 

Hadoop QA commented on HADOOP-10768:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 50s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 15s 
{color} | {color:red} root: patch generated 52 new + 662 unchanged - 5 fixed = 
714 total (was 667) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 38s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 25s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit 

[jira] [Commented] (HADOOP-12943) Add -w -r options in dfs -test command

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238943#comment-15238943
 ] 

Hadoop QA commented on HADOOP-12943:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 47s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 23s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 11s 
{color} | {color:red} root: patch generated 9 new + 32 unchanged - 0 fixed = 41 
total (was 32) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 57s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 11s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 6s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 34s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 224m 32s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| 

[jira] [Commented] (HADOOP-13019) Implement ErasureCodec for HitchHiker XOR coding

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238840#comment-15238840
 ] 

Hadoop QA commented on HADOOP-13019:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 46s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 35s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 16s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 38s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798460/HADOOP-13019.02.patch 
|
| JIRA Issue | HADOOP-13019 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b15bd86c1c76 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 35f0770 |
| 

[jira] [Updated] (HADOOP-12943) Add -w -r options in dfs -test command

2016-04-13 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-12943:
-
Fix Version/s: 2.8.0

> Add -w -r options in dfs -test command
> --
>
> Key: HADOOP-12943
> URL: https://issues.apache.org/jira/browse/HADOOP-12943
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, scripts, tools
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 2.8.0
>
> Attachments: HADOOP-12943.001.patch
>
>
> Currently the dfs -test command only supports 
>   -d, -e, -f, -s, -z
> options. It would be helpful if we add 
>   -w, -r 
> to verify permission of r/w before actual read or write. This will help 
> script programming.
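
For illustration, roughly the kind of check a {{-test -w}} could perform, using 
the existing FileSystem.access() API; this is a sketch of the idea, not the 
attached patch:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.security.AccessControlException;

public class TestWriteAccess {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path path = new Path(args[0]);
    try {
      fs.access(path, FsAction.WRITE);  // throws if the caller cannot write
      System.exit(0);                   // writable: mirror -test's 0 exit code
    } catch (AccessControlException e) {
      System.exit(1);                   // not writable
    }
  }
}
{code}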



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12943) Add -w -r options in dfs -test command

2016-04-13 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-12943:
-
Component/s: tools
 scripts

> Add -w -r options in dfs -test command
> --
>
> Key: HADOOP-12943
> URL: https://issues.apache.org/jira/browse/HADOOP-12943
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, scripts, tools
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-12943.001.patch
>
>
> Currently the dfs -test command only supports 
>   -d, -e, -f, -s, -z
> options. It would be helpful if we add 
>   -w, -r 
> to verify permission of r/w before actual read or write. This will help 
> script programming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-04-13 Thread Jiajia Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238767#comment-15238767
 ] 

Jiajia Li commented on HADOOP-12911:


Thanks for Andrew's and Kai's advice. I will survey how MiniKDC is used 
in downstream projects (such as HBase).

> Upgrade Hadoop MiniKDC with Kerby
> -
>
> Key: HADOOP-12911
> URL: https://issues.apache.org/jira/browse/HADOOP-12911
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-12911-v1.patch, HADOOP-12911-v2.patch, 
> HADOOP-12911-v3.patch, HADOOP-12911-v4.patch, HADOOP-12911-v5.patch, 
> HADOOP-12911-v6.patch
>
>
> As discussed on the mailing list, we’d like to introduce Apache Kerby into 
> Hadoop. Initially it’s good to start by upgrading Hadoop MiniKDC with Kerby's 
> offerings. Apache Kerby (https://github.com/apache/directory-kerby), an 
> Apache Directory sub-project, is a Java Kerberos binding. It provides a 
> SimpleKDC server that borrowed ideas from MiniKDC and implements all the 
> facilities existing in MiniKDC. Currently MiniKDC depends on the old Kerberos 
> implementation in the Directory Server project, but that implementation is no 
> longer maintained. The Directory community plans to replace it with Kerby. 
> MiniKDC can use Kerby's SimpleKDC directly to avoid depending on the full 
> Directory project. Kerby also provides nice identity backends, such as a 
> lightweight memory-based one and a very simple JSON one, for easy development 
> and test environments.
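
For context, a minimal sketch of starting Kerby's SimpleKDC standalone, 
assuming the SimpleKdcServer API roughly as exposed by directory-kerby; method 
names, the work dir, and the port here should be checked against the Kerby 
version the patch actually pulls in:

{code:java}
import java.io.File;
import org.apache.kerby.kerberos.kerb.server.SimpleKdcServer;

public class SimpleKdcSketch {
  public static void main(String[] args) throws Exception {
    SimpleKdcServer kdc = new SimpleKdcServer();
    kdc.setWorkDir(new File("target/kdc-workdir"));  // hypothetical work dir
    kdc.setKdcHost("localhost");
    kdc.setKdcTcpPort(60088);                        // hypothetical port
    kdc.init();
    kdc.start();
    // Create a test principal the way MiniKDC-style tests would.
    kdc.createPrincipal("client/localhost@EXAMPLE.COM", "secret");
    kdc.stop();
  }
}
{code}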



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2016-04-13 Thread Bolke de Bruin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238766#comment-15238766
 ] 

Bolke de Bruin commented on HADOOP-12751:
-

[~drankye] Yes, we did. We found some issues in some components, e.g. Hive, but 
they have been fixed by submitting patches (Hive was employing its own 
mechanism). Apache Ranger has some UI issues, but they are non-blocking. 
ZooKeeper uses its own copy of hadoop-auth, which might need to be synced, but 
we haven't seen any issues because of it.

 

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch
>
>
> In the scenario of a trust between two directories, e.g. FreeIPA (ipa.local) 
> and Active Directory (ad.local), users can be made available at the OS level 
> by something like sssd. The trusted users will be of the form 'user@ad.local' 
> while other users will not contain the domain. Executing 'id -Gn 
> user@ad.local' will successfully return the groups the user belongs to if 
> configured correctly.
> However, Hadoop assumes that names containing '@' cannot be correct. This 
> code is in KerberosName.java and seems to be a validator checking that the 
> 'auth_to_local' rules were applied correctly.
> In my opinion this check should be removed, changed to a different kind of 
> check, or perhaps logged as a warning while still proceeding, as the current 
> behavior limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (e.g. having system tools rewrite the name 
> to something like user_ad_local) due to downstream consequences.
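
An illustrative Java sketch of the kind of check being described; the pattern, 
method shape, and message are assumptions for illustration only, not the 
actual KerberosName.java source:

{code:java}
import java.util.regex.Pattern;

public class SimpleNameCheckSketch {
  // Assumed: after auth_to_local rules run, names still containing '/' or '@'
  // are treated as non-simple and rejected.
  private static final Pattern NON_SIMPLE = Pattern.compile("[/@]");

  static String toSimple(String shortName) {
    if (NON_SIMPLE.matcher(shortName).find()) {
      // This is the rejection the report objects to: 'user@ad.local' is a
      // valid OS-level account under an sssd trust, yet it is refused here.
      throw new IllegalArgumentException("Non-simple name " + shortName);
    }
    return shortName;
  }

  public static void main(String[] args) {
    System.out.println(toSimple("alice"));          // accepted
    System.out.println(toSimple("user@ad.local"));  // throws
  }
}
{code}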



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13019) Implement ErasureCodec for HitchHiker XOR coding

2016-04-13 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HADOOP-13019:

Attachment: HADOOP-13019.02.patch

> Implement ErasureCodec for HitchHiker XOR coding
> 
>
> Key: HADOOP-13019
> URL: https://issues.apache.org/jira/browse/HADOOP-13019
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: HADOOP-13019.01.patch, HADOOP-13019.02.patch
>
>
> Implement the missing {{ErasureCodec}} that uses {{HHXORErasureEncoder}} and 
> {{HHXORErasureDecoder}}, in order to align the interfaces of the coding 
> algorithms.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-13021) Hadoop swift driver unit test should use unique directory each run

2016-04-13 Thread Chen He (JIRA)
Chen He created HADOOP-13021:


 Summary: Hadoop swift driver unit test should use unique directory 
each run
 Key: HADOOP-13021
 URL: https://issues.apache.org/jira/browse/HADOOP-13021
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Affects Versions: 2.7.2
Reporter: Chen He
Assignee: Chen He


Since all "unit test" in swift package are actually functionality test, it 
requires server's information in the core-site.xml file. However, multiple unit 
test runs on difference machines using the same core-site.xml file will result 
in some unit tests failure. For example:
In TestSwiftFileSystemBasicOps.java
public void testMkDir() throws Throwable {
Path path = new Path("/test/MkDir");
fs.mkdirs(path);
//success then -so try a recursive operation
fs.delete(path, true);
  }

It is possible that machine A and B are running "mvn clean install" using same 
core-site.xml file. However, machine A run testMkDir() first and delete the 
dir, but machine B just tried to run fs.delete(path,true). It will report 
failure. This is just an example. There are many similar cases in the unit test 
sets. I would propose we use a unique dir for each unit test run instead of 
using "Path path = new Path("/test/MkDir")" for all concurrent runs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13021) Hadoop swift driver unit test should use unique directory for each run

2016-04-13 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-13021:
-
Labels: unit-test  (was: )

> Hadoop swift driver unit test should use unique directory for each run
> --
>
> Key: HADOOP-13021
> URL: https://issues.apache.org/jira/browse/HADOOP-13021
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Affects Versions: 2.7.2
>Reporter: Chen He
>Assignee: Chen He
>  Labels: unit-test
>
> Since all "unit tests" in the swift package are actually functional tests, 
> they require server information in the core-site.xml file. However, multiple 
> unit test runs on different machines using the same core-site.xml file will 
> result in some unit test failures. For example, in 
> TestSwiftFileSystemBasicOps.java:
> public void testMkDir() throws Throwable {
>   Path path = new Path("/test/MkDir");
>   fs.mkdirs(path);
>   // success, then try a recursive operation
>   fs.delete(path, true);
> }
> It is possible that machines A and B are running "mvn clean install" against 
> the same core-site.xml file. Machine A runs testMkDir() first and deletes the 
> dir, so when machine B then tries to run fs.delete(path, true) it reports a 
> failure. This is just one example; there are many similar cases in the unit 
> test suite. I would propose we use a unique dir for each unit test run 
> instead of using "Path path = new Path("/test/MkDir")" for all concurrent 
> runs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13021) Hadoop swift driver unit test should use unique directory for each run

2016-04-13 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-13021:
-
Summary: Hadoop swift driver unit test should use unique directory for each 
run  (was: Hadoop swift driver unit test should use unique directory each run)

> Hadoop swift driver unit test should use unique directory for each run
> --
>
> Key: HADOOP-13021
> URL: https://issues.apache.org/jira/browse/HADOOP-13021
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Affects Versions: 2.7.2
>Reporter: Chen He
>Assignee: Chen He
>  Labels: unit-test
>
> Since all "unit tests" in the swift package are actually functional tests, 
> they require server information in the core-site.xml file. However, multiple 
> unit test runs on different machines using the same core-site.xml file will 
> result in some unit test failures. For example, in 
> TestSwiftFileSystemBasicOps.java:
> public void testMkDir() throws Throwable {
>   Path path = new Path("/test/MkDir");
>   fs.mkdirs(path);
>   // success, then try a recursive operation
>   fs.delete(path, true);
> }
> It is possible that machines A and B are running "mvn clean install" against 
> the same core-site.xml file. Machine A runs testMkDir() first and deletes the 
> dir, so when machine B then tries to run fs.delete(path, true) it reports a 
> failure. This is just one example; there are many similar cases in the unit 
> test suite. I would propose we use a unique dir for each unit test run 
> instead of using "Path path = new Path("/test/MkDir")" for all concurrent 
> runs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12875) [Azure Data Lake] Support for contract test and unit test cases

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238685#comment-15238685
 ] 

Hadoop QA commented on HADOOP-12875:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 28 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: hadoop-tools 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: hadoop-tools 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 37m 51s 
{color} | {color:green} hadoop-tools in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 25s 
{color} | {color:green} hadoop-tools in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 105m 29s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798437/Hadoop-12875-003.patch
 |
| JIRA Issue | 

[jira] [Commented] (HADOOP-11892) CryptoCodec#getInstance always returns a new instance of CryptoCodec. This could be expensive

2016-04-13 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238684#comment-15238684
 ] 

Xiao Chen commented on HADOOP-11892:


Thanks for the comments, Wei-Chiu and Andrew.
bq. And you can also use lsof /dev/urandom to list open fds.
Yes, we can find the processes opening /dev/urandom using {{lsof}}. But for 
details on how those fds are kept open, I think we'd need to check jstack as 
well to help diagnose.

bq. IMO, fixing this isn't really useful.
Fair point. As Andrew pointed out, /dev/urandom doesn't block. But we did see 
the process holding thousands of open /dev/urandom fds, which indicates 
something else is going on. I feel this jira could improve the situation to 
some degree.
OTOH, so far all occurrences of this issue have been without HADOOP-11891, so 
it's hard to tell whether there are other issues left (e.g. some rare leak). If 
we don't see much value in this jira, maybe we could revisit it if we see a 
reproduction with HADOOP-11891 in place, and analyze the jstack then.

> CryptoCodec#getInstance always returns a new instance of CryptoCodec. This 
> could be expensive
> -
>
> Key: HADOOP-11892
> URL: https://issues.apache.org/jira/browse/HADOOP-11892
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> {{CryptoCodec#getInstance}} should be able to return possibly cached 
> instances of the CryptoCodec implementation, as instantiating a new instance 
> every time could be expensive.
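
A hedged sketch of the kind of reuse being requested; Codec here is a stand-in 
type, since the real CryptoCodec lives in org.apache.hadoop.crypto and its 
construction details are not shown in this issue:

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

public class CodecCacheSketch {
  interface Codec {}  // stand-in for the real CryptoCodec

  private static final ConcurrentMap<String, Codec> CACHE =
      new ConcurrentHashMap<>();

  // Build the codec once per cipher suite and reuse it on later calls,
  // instead of constructing a new instance on every getInstance() call.
  static Codec getInstance(String cipherSuite, Function<String, Codec> factory) {
    return CACHE.computeIfAbsent(cipherSuite, factory);
  }
}
{code}

Whether caching is safe depends on the codec instances being stateless and 
thread-safe, which the discussion above does not settle.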



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)