[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207906#comment-15207906
 ] 

Hadoop QA commented on HADOOP-12909:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 49s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 29s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 31s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 9m 51s {color} | {color:red} root-jdk1.8.0_74 with JDK v1.8.0_74 generated 1 new + 737 unchanged - 1 fixed = 738 total (was 738) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 34s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 16m 25s {color} | {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 1 new + 733 unchanged - 1 fixed = 734 total (was 734) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 34s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s {color} | {color:red} hadoop-common-project/hadoop-common: patch generated 5 new + 85 unchanged - 1 fixed = 90 total (was 86) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 19 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 32s {color} | {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 51s {color} | {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s {color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 8s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK 

[jira] [Updated] (HADOOP-12952) /BUILDING example of zero-docs dist should skip javadocs

2016-03-22 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12952:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~steve_l] for the contribution!

> /BUILDING example of zero-docs dist should skip javadocs
> 
>
> Key: HADOOP-12952
> URL: https://issues.apache.org/jira/browse/HADOOP-12952
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
> Fix For: 2.9.0
>
> Attachments: HADOOP-12952-001.patch
>
>
> The examples for building distributions include how to create one without any 
> documentation. But it includes the javadoc stage in the build, which is very 
> slow.
> Adding {{-Dmaven.javadoc.skip=true}} skips that phase, and helps round out 
> the parameters to a build.
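For reference, a sketch of the full build invocation this implies; the {{-Pdist}}, {{-DskipTests}} and {{-Dtar}} flags are the customary distribution options and are assumptions beyond the single flag named above:
{noformat}
# build a binary distribution, skipping tests and the slow javadoc phase
mvn package -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true
{noformat}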





[jira] [Created] (HADOOP-12956) Inevitable Log4j2 migration via slf4j

2016-03-22 Thread Gopal V (JIRA)
Gopal V created HADOOP-12956:


 Summary: Inevitable Log4j2 migration via slf4j
 Key: HADOOP-12956
 URL: https://issues.apache.org/jira/browse/HADOOP-12956
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Gopal V


{{5 August 2015 --The Apache Logging Services™ Project Management Committee 
(PMC) has announced that the Log4j™ 1.x logging framework has reached its end 
of life (EOL) and is no longer officially supported.}}

https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces

A framework-wide log4j2 upgrade has to be coordinated, partly for the improved 
performance brought by log4j2.

https://logging.apache.org/log4j/2.x/manual/async.html#Performance
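As a minimal sketch of why slf4j eases the migration (class name is illustrative): code written against the slf4j facade is backend-neutral, so moving from log4j 1.x to log4j2 becomes a binding/configuration change rather than a source change.
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Slf4jFacadeExample {
  // Resolved against whichever backend binding is on the classpath
  // (log4j 1.x today, log4j2 via log4j-slf4j-impl after the migration).
  private static final Logger LOG =
      LoggerFactory.getLogger(Slf4jFacadeExample.class);

  public static void main(String[] args) {
    // Parameterized logging defers string construction until needed.
    LOG.info("Started with {} argument(s)", args.length);
  }
}
{code}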





[jira] [Updated] (HADOOP-11418) Property "io.compression.codec.lzo.class" does not work with other value besides default

2016-03-22 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-11418:
-
Assignee: fang fang chen  (was: Alan Liu (Yuan Bo Liu))

> Property "io.compression.codec.lzo.class" does not work with other value 
> besides default
> 
>
> Key: HADOOP-11418
> URL: https://issues.apache.org/jira/browse/HADOOP-11418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.6.0
>Reporter: fang fang chen
>Assignee: fang fang chen
>  Labels: BB2015-05-RFC
> Attachments: HADOOP-11418-004.patch, HADOOP-11418-1.patch, 
> HADOOP-11418-2.patch, HADOOP-11418-3.patch, HADOOP-11418.005.patch, 
> HADOOP-11418.006.patch, HADOOP-11418.patch
>
>
> From the following code, it seems "io.compression.codec.lzo.class" does not 
> work for codecs other than the default. Hadoop will always treat it as 
> defaultClazz. I think it is a bug. Please let me know if this is working as 
> designed. Thanks.
>  77   private static final String defaultClazz =
>  78   "org.apache.hadoop.io.compress.LzoCodec";
>  82   public synchronized boolean isSupported() {
>  83 if (!checked) {
>  84   checked = true;
>  85   String extClazz =
>  86   (conf.get(CONF_LZO_CLASS) == null ? System
>  87   .getProperty(CONF_LZO_CLASS) : null);
>  88   String clazz = (extClazz != null) ? extClazz : defaultClazz;
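For illustration only, a corrected precedence check might look like the following sketch (not the committed fix): prefer the Configuration value, fall back to the system property, and use the hard-coded default only when neither is set.
{code:java}
// Sketch of the intended precedence, reusing the names quoted above.
String extClazz = (conf.get(CONF_LZO_CLASS) != null)
    ? conf.get(CONF_LZO_CLASS)
    : System.getProperty(CONF_LZO_CLASS);
String clazz = (extClazz != null) ? extClazz : defaultClazz;
{code}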





[jira] [Updated] (HADOOP-11418) Property "io.compression.codec.lzo.class" does not work with other value besides default

2016-03-22 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-11418:
-
Assignee: Alan Liu (Yuan Bo Liu)  (was: fang fang chen)

> Property "io.compression.codec.lzo.class" does not work with other value 
> besides default
> 
>
> Key: HADOOP-11418
> URL: https://issues.apache.org/jira/browse/HADOOP-11418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.6.0
>Reporter: fang fang chen
>Assignee: Alan Liu (Yuan Bo Liu)
>  Labels: BB2015-05-RFC
> Attachments: HADOOP-11418-004.patch, HADOOP-11418-1.patch, 
> HADOOP-11418-2.patch, HADOOP-11418-3.patch, HADOOP-11418.005.patch, 
> HADOOP-11418.006.patch, HADOOP-11418.patch
>
>
> From the following code, it seems "io.compression.codec.lzo.class" does not 
> work for codecs other than the default. Hadoop will always treat it as 
> defaultClazz. I think it is a bug. Please let me know if this is working as 
> designed. Thanks.
>  77   private static final String defaultClazz =
>  78   "org.apache.hadoop.io.compress.LzoCodec";
>  82   public synchronized boolean isSupported() {
>  83 if (!checked) {
>  84   checked = true;
>  85   String extClazz =
>  86   (conf.get(CONF_LZO_CLASS) == null ? System
>  87   .getProperty(CONF_LZO_CLASS) : null);
>  88   String clazz = (extClazz != null) ? extClazz : defaultClazz;





[jira] [Commented] (HADOOP-8145) Automate testing of LdapGroupsMapping against ApacheDS

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207741#comment-15207741
 ] 

Hadoop QA commented on HADOOP-8145:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 10s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 2s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 53s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s {color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 33s {color} | {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 48s {color} | {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s {color} | {color:red} Patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 42s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Timed out junit tests | org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12794861/HADOOP-8145.002.patch |
| JIRA Issue | HADOOP-8145 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  xml  findbugs  checkstyle  |
| uname | Linux 569d45e8cec2 

[jira] [Commented] (HADOOP-11540) Raw Reed-Solomon coder using Intel ISA-L library

2016-03-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207704#comment-15207704
 ] 

Kai Zheng commented on HADOOP-11540:


Opened HADOOP-12955 for the issue.

> Raw Reed-Solomon coder using Intel ISA-L library
> 
>
> Key: HADOOP-11540
> URL: https://issues.apache.org/jira/browse/HADOOP-11540
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Kai Zheng
> Attachments: HADOOP-11540-initial.patch, HADOOP-11540-v1.patch, 
> HADOOP-11540-v2.patch, HADOOP-11540-v4.patch, HADOOP-11540-v5.patch, 
> HADOOP-11540-v6.patch, HADOOP-11540-v7.patch, 
> HADOOP-11540-with-11996-codes.patch, Native Erasure Coder Performance - Intel 
> ISAL-v1.pdf
>
>
> This is to provide RS codec implementation using Intel ISA-L library for 
> encoding and decoding.





[jira] [Created] (HADOOP-12955) checknative failed when checking ISA-L library

2016-03-22 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-12955:
--

 Summary: checknative failed when checking ISA-L library
 Key: HADOOP-12955
 URL: https://issues.apache.org/jira/browse/HADOOP-12955
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kai Zheng
Assignee: Kai Zheng


Ref. the comment 
[here|https://issues.apache.org/jira/browse/HADOOP-11540?focusedCommentId=15207619&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15207619].
 

When running hadoop checknative, it also failed. Got something like the following from the log:
{noformat}
Stack: [0x7f2b9d405000,0x7f2b9d506000],  sp=0x7f2b9d504748,  free 
space=1021k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0xa90c90]  UTF8::unicode_length(char const*)+0x0
V  [libjvm.so+0x6ddfc3]  jni_NewStringUTF+0xc3
j  
org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
v  ~StubRoutines::call_stub
V  [libjvm.so+0x68c616]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
JavaCallArguments*, Thread*)+0x1056
V  [libjvm.so+0x6cdc32]  jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, 
JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)+0x362
V  [libjvm.so+0x6ea63a]  jni_CallStaticVoidMethod+0x17a
C  [libjli.so+0x7bcc]  JavaMain+0x80c
C  [libpthread.so.0+0x8182]  start_thread+0xc2

Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j  
org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
v  ~StubRoutines::call_stub
{noformat}





[jira] [Commented] (HADOOP-11540) Raw Reed-Solomon coder using Intel ISA-L library

2016-03-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207698#comment-15207698
 ] 

Kai Zheng commented on HADOOP-11540:


Oops, the issue looks like it was introduced by HADOOP-11996. Will file a new 
issue to address it.

> Raw Reed-Solomon coder using Intel ISA-L library
> 
>
> Key: HADOOP-11540
> URL: https://issues.apache.org/jira/browse/HADOOP-11540
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Kai Zheng
> Attachments: HADOOP-11540-initial.patch, HADOOP-11540-v1.patch, 
> HADOOP-11540-v2.patch, HADOOP-11540-v4.patch, HADOOP-11540-v5.patch, 
> HADOOP-11540-v6.patch, HADOOP-11540-v7.patch, 
> HADOOP-11540-with-11996-codes.patch, Native Erasure Coder Performance - Intel 
> ISAL-v1.pdf
>
>
> This is to provide RS codec implementation using Intel ISA-L library for 
> encoding and decoding.





[jira] [Updated] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-03-22 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated HADOOP-12082:
--
Attachment: hadoop-ldap-auth-v6.patch

[~benoyantony]

Thanks a lot for the feedback. Please find the updated patch attached 
(hadoop-ldap-auth-v6.patch).

I have addressed all the review comments. I have also added unit tests to 
verify the integration between the Kerberos authenticator and the new handler 
implementation. I am not sure if we need an authenticator implementation for 
this new handler, but if deemed necessary, it can be added later on.

Please take a look and let me have your feedback.
 

> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: hadoop-ldap-auth-v2.patch, hadoop-ldap-auth-v3.patch, 
> hadoop-ldap-auth-v4.patch, hadoop-ldap-auth-v5.patch, 
> hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support LDAP based authentication scheme via Hadoop 
> AuthenticationFilter. HADOOP-9054 added a support to plug-in custom 
> authentication scheme (in addition to Kerberos) via 
> AltKerberosAuthenticationHandler class. But it is based on selecting the 
> authentication mechanism based on User-Agent HTTP header which does not 
> conform to HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]
> - HTTP protocol provides a simple challenge-response authentication mechanism 
> that can be used by a server to challenge a client request and by a client to 
> provide the necessary authentication information. 
> - This mechanism is initiated by server sending the 401 (Authenticate) 
> response with ‘WWW-Authenticate’ header which includes at least one challenge 
> that indicates the authentication scheme(s) and parameters applicable to the 
> Request-URI. 
> - In case server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Authenticate) response, and each challenge 
> may use a different auth-scheme. 
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports Kerberos 
> authentication scheme and uses ‘Negotiate’ as the challenge as part of 
> ‘WWW-Authenticate’ response header. As per the following documentation, 
> ‘Negotiate’ challenge scheme is only applicable to Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> On the other hand for LDAP authentication, typically ‘Basic’ authentication 
> scheme is used (Note TLS is mandatory with Basic authentication scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence for this feature, the idea would be to provide a custom implementation 
> of Hadoop AuthenticationHandler and Authenticator interfaces which would 
> support both schemes - Kerberos (via Negotiate auth challenge) and LDAP (via 
> Basic auth challenge). During the authentication phase, it would send both 
> the challenges and let client pick the appropriate one. If client responds 
> with an ‘Authorization’ header tagged with ‘Negotiate’ - it will use Kerberos 
> authentication. If client responds with an ‘Authorization’ header tagged with 
> ‘Basic’ - it will use LDAP authentication.
> Note - some HTTP clients (e.g. curl or Apache Http Java client) need to be 
> configured to use one scheme over the other e.g.
> - curl tool supports option to use either Kerberos (via --negotiate flag) or 
> username/password based authentication (via --basic and -u flags). 
> - Apache HttpClient library can be configured to use specific authentication 
> scheme.
> http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html
> Typically web browsers automatically choose an authentication scheme based on 
> a notion of “strength” of security. e.g. take a look at the [design of Chrome 
> browser for HTTP 
> authentication|https://www.chromium.org/developers/design-documents/http-authentication]
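To make the challenge mechanics concrete, a minimal sketch of issuing both challenges on a single 401 response; the class and method names are illustrative, not the attached patch's actual handler:
{code:java}
import javax.servlet.http.HttpServletResponse;

public class MultiSchemeChallengeSketch {
  // RFC 2616/7235 permit multiple WWW-Authenticate headers on a 401;
  // the client is expected to pick the strongest scheme it understands.
  public static void sendChallenges(HttpServletResponse response) {
    response.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
    response.addHeader("WWW-Authenticate", "Negotiate");            // Kerberos (SPNEGO)
    response.addHeader("WWW-Authenticate", "Basic realm=\"ldap\""); // LDAP over Basic
  }
}
{code}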





[jira] [Commented] (HADOOP-12924) Add default coder key for creating raw coders

2016-03-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207635#comment-15207635
 ] 

Kai Zheng commented on HADOOP-12924:


[~lirui] and [~zhz], what do you think about making the above change here to 
support multiple raw erasure coders or codecs, while addressing Zhe's 
question/concern? This may further help close some of the gaps left in this area.

> Add default coder key for creating raw coders
> -
>
> Key: HADOOP-12924
> URL: https://issues.apache.org/jira/browse/HADOOP-12924
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Minor
> Attachments: HADOOP-12924.1.patch
>
>
> As suggested 
> [here|https://issues.apache.org/jira/browse/HADOOP-12826?focusedCommentId=15194402&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15194402].





[jira] [Commented] (HADOOP-11540) Raw Reed-Solomon coder using Intel ISA-L library

2016-03-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207619#comment-15207619
 ] 

Kai Zheng commented on HADOOP-11540:


I guess I can fix the checkstyle issues even though I don't like some of them. :(

The TestNativeLibraryChecker failure isn't related. When running it locally, it 
outputs as follows.
[~cmccabe] do you have any clue about this? I can fix it separately if necessary.
{noformat}
NativeLibraryChecker [-a|-h]
NativeLibraryChecker [-a|-h]
  -a  use -a to check all libraries are available
  -a  use -a to check all libraries are available
  by default just check hadoop library (and
  winutils.exe on Windows OS) is available
  by default just check hadoop library (and
  exit with error code 1 if check failed
  winutils.exe on Windows OS) is available
  -h  print this message
  exit with error code 1 if check failed

16/03/24 08:19:15 INFO util.ExitUtil: Exiting with status 1
  -h  print this message

16/03/24 08:19:15 FATAL util.ExitUtil: Terminate called
org.apache.hadoop.util.ExitUtil$ExitException: ExitException
at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:192)
at 
org.apache.hadoop.util.NativeLibraryChecker.main(NativeLibraryChecker.java:55)
at 
org.apache.hadoop.util.TestNativeLibraryChecker.expectExit(TestNativeLibraryChecker.java:32)
at 
org.apache.hadoop.util.TestNativeLibraryChecker.testNativeLibraryChecker(TestNativeLibraryChecker.java:46)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at junit.framework.TestCase.runTest(TestCase.java:176)
at junit.framework.TestCase.runBare(TestCase.java:141)
at junit.framework.TestResult$1.protect(TestResult.java:122)
at junit.framework.TestResult.runProtected(TestResult.java:142)
at junit.framework.TestResult.run(TestResult.java:125)
at junit.framework.TestCase.run(TestCase.java:129)
at junit.framework.TestSuite.runTest(TestSuite.java:255)
at junit.framework.TestSuite.run(TestSuite.java:250)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:234)
at 
com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
NativeLibraryChecker [-a|-h]
  -a  use -a to check all libraries are available
  by default just check hadoop library (and
  winutils.exe on Windows OS) is available
  exit with error code 1 if check failed
  -h  print this message

16/03/24 08:19:15 INFO util.ExitUtil: Exiting with status 1
16/03/24 08:19:15 FATAL util.ExitUtil: Terminate called
org.apache.hadoop.util.ExitUtil$ExitException: ExitException
at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:192)
at 
org.apache.hadoop.util.NativeLibraryChecker.main(NativeLibraryChecker.java:55)
at 
org.apache.hadoop.util.TestNativeLibraryChecker.expectExit(TestNativeLibraryChecker.java:32)
at 
org.apache.hadoop.util.TestNativeLibraryChecker.testNativeLibraryChecker(TestNativeLibraryChecker.java:47)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at junit.framework.TestCase.runTest(TestCase.java:176)
at junit.framework.TestCase.runBare(TestCase.java:141)
at junit.framework.TestResult$1.protect(TestResult.java:122)
at junit.framework.TestResult.runProtected(TestResult.java:142)
at junit.framework.TestResult.run(TestResult.java:125)
at junit.framework.TestCase.run(TestCase.java:129)
at junit.framework.TestSuite.runTest(TestSuite.java:255)
at 

[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-22 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207616#comment-15207616
 ] 

Xiaobing Zhou commented on HADOOP-12909:


[~wheat9] gRPC looks like it is still at an early stage; the changes needed in 
the current RPC to cover the async case are limited and predictable.

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch, HADOOP-12909-HDFS-9924.004.patch
>
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls -- the calls share a connection, the call requests are sent using a 
> thread pool and the responses can be out of order.  Indeed, a synchronous call 
> is implemented by invoking wait() in the caller thread in order to wait for 
> the server response.
> In this JIRA, we change ipc.Client to support asynchronous mode.  In 
> asynchronous mode, it returns once the request has been sent out, without 
> waiting for the response from the server.
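A sketch of the asynchronous calling pattern described above, using plain java.util.concurrent types; the names are illustrative, and the actual patch wires this into ipc.Client's existing connection and response handling:
{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncCallSketch {
  // Stands in for the shared-connection sender thread pool.
  private final ExecutorService sender = Executors.newSingleThreadExecutor();

  // Returns as soon as the request is handed to the sender; the future is
  // completed later, when the (possibly out-of-order) response arrives.
  public CompletableFuture<String> call(final String request) {
    final CompletableFuture<String> future = new CompletableFuture<>();
    sender.execute(new Runnable() {
      @Override
      public void run() {
        // In the real client the response reader would complete this;
        // here an immediate reply is simulated.
        future.complete("response-to-" + request);
      }
    });
    return future;
  }
}
{code}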





[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-22 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207612#comment-15207612
 ] 

Larry McCay commented on HADOOP-12942:
--

It's actually a "password in a file that is referenced from config", which is a 
specific pattern in Hadoop.

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor exist in 
> all the implementations that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".
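A sketch of the factory change proposed above; the interface name and signature are hypothetical, not the current Hadoop API:
{code:java}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.alias.CredentialProvider;

// Hypothetical companion to CredentialProviderFactory: a factory method
// that takes the keystore password explicitly instead of silently
// defaulting to "none".
public interface PasswordAwareCredentialProviderFactory {
  CredentialProvider createProvider(URI providerUri, Configuration conf,
      char[] keystorePassword) throws IOException;
}
{code}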





[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-22 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207608#comment-15207608
 ] 

Xiaobing Zhou commented on HADOOP-12909:


I posted patch V004, which fixes the test/code issues [~szetszwo] mentioned. 
Thanks for the review.

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch, HADOOP-12909-HDFS-9924.004.patch
>
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls -- the calls share a connection, the call requests are sent using a 
> thread pool and the responses can be out of order.  Indeed, a synchronous call 
> is implemented by invoking wait() in the caller thread in order to wait for 
> the server response.
> In this JIRA, we change ipc.Client to support asynchronous mode.  In 
> asynchronous mode, it returns once the request has been sent out, without 
> waiting for the response from the server.





[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-22 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207610#comment-15207610
 ] 

Larry McCay commented on HADOOP-12942:
--

Agreed. :)


> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor exist in 
> all the implementations that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".





[jira] [Updated] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-22 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12909:
---
Attachment: HADOOP-12909-HDFS-9924.004.patch

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch, HADOOP-12909-HDFS-9924.004.patch
>
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls -- the calls share a connection, the call requests are sent using a 
> thread pool and the responses can be out of order.  Indeed, a synchronous call 
> is implemented by invoking wait() in the caller thread in order to wait for 
> the server response.
> In this JIRA, we change ipc.Client to support asynchronous mode.  In 
> asynchronous mode, it returns once the request has been sent out, without 
> waiting for the response from the server.





[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-22 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207603#comment-15207603
 ] 

Mike Yoder commented on HADOOP-12942:
-

Oh goodness. When you expand it to the general paradigm of "a password in a 
file..." - yeah, I do recognize most of those. I was just thinking of the 
concept as applied to the providers in the discussion so far. Let's start 
without the pwdfile command at all; on some level, an "echo asdf > file && 
chmod 400 file" isn't that hard. Or at least don't implement it in the first 
pass - it's a separate problem from the rest.

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor exist in 
> all the implementations that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".





[jira] [Commented] (HADOOP-12954) Add a way to change hadoop.security.token.service.use_ip

2016-03-22 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207552#comment-15207552
 ] 

Robert Kanter commented on HADOOP-12954:


Ya, let's leave them both open.  This is a Common change and the MR AM change 
would be in MapReduce, so we'd need two JIRAs anyway.  We can add the 
{{setConfiguration}} method in this JIRA and use MAPREDUCE-6565 to call it 
(similar to what OOZIE-2490 is for).

> Add a way to change hadoop.security.token.service.use_ip
> 
>
> Key: HADOOP-12954
> URL: https://issues.apache.org/jira/browse/HADOOP-12954
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: HADOOP-12954.001.patch
>
>
> Currently, {{hadoop.security.token.service.use_ip}} is set on JVM startup via:
> {code:java}
>   static {
> Configuration conf = new Configuration();
> boolean useIp = conf.getBoolean(
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
> setTokenServiceUseIp(useIp);
>   }
> {code}
> This is a problem for clients, such as Oozie, who don't add *-site.xml files 
> to their classpath.  Oozie normally creates a {{JobClient}} and passes a 
> {{Configuration}} to it with the proper configs we need.  However, because 
> {{hadoop.security.token.service.use_ip}} is specified in a static block like 
> this, and there's no API to change it, Oozie has no way to set it to the 
> non-default value.
> I propose we add a {{setConfiguration}} method which takes a 
> {{Configuration}} and rereads {{hadoop.security.token.service.use_ip}}.  
> There's a few other properties that are also loaded statically on startup 
> that can be reloaded here as well.  
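A sketch of the proposed method; the body is illustrative and the actual patch may reload additional properties:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;

public final class SecurityUtilSketch {
  private static volatile boolean useIpForTokenService;

  private SecurityUtilSketch() {}

  // Re-reads hadoop.security.token.service.use_ip from a caller-supplied
  // Configuration, so clients like Oozie can override the static default.
  public static void setConfiguration(Configuration conf) {
    boolean useIp = conf.getBoolean(
        CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
        CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
    setTokenServiceUseIp(useIp);
  }

  static void setTokenServiceUseIp(boolean flag) {
    useIpForTokenService = flag;
  }
}
{code}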





[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-22 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207546#comment-15207546
 ] 

Larry McCay commented on HADOOP-12942:
--

There are at least the following additional password files throughout the 
ecosystem - I'm sure that there are probably more:

Hadoop:
hadoop-http-auth-signature-secret
hadoop.security.group.mapping.ldap.ssl.keystore.password.file
hadoop.security.group.mapping.ldap.bind.password.file

HBase JMX Remote:
HBASE_JMX_OPTS="$HBASE_JMX_OPTS 
-Dcom.sun.management.jmxremote.password.file=$HBASE_HOME/conf/jmxremote.passwd"

HBase Web UIs
TLS/SSL Server Keystore File Password   - Password for the server keystore file 
used for encrypted web UIs.
TLS/SSL Server Keystore Key Password- Password that protects the private 
key contained in the server keystore used for encrypted web UIs.

HBase REST Server
HBase REST Server TLS/SSL Server JKS Keystore File Password - The password 
for the HBase REST Server JKS keystore file.
HBase REST Server TLS/SSL Server JKS Keystore Key Password  - The password 
that protects the private key contained in the JKS keystore used when HBase 
REST Server is acting as a TLS/SSL server.

HBase Thrift Server
HBase Thrift Server over HTTP TLS/SSL Server JKS Keystore File Password  - The 
password for the HBase Thrift Server JKS keystore file.
HBase Thrift Server over HTTP TLS/SSL Server JKS Keystore Key Password  - The 
password that protects the private key contained in the JKS keystore used when 
HBase Thrift Server over HTTP is acting as a TLS/SSL server.

Oozie SSL/TLS
Oozie TLS/SSL Server JKS Keystore File Password  - Password for the keystore.

I'd rather not add this work to the Key and Credential provider commands.
The keystore providers are both just consumers of the same password file 
pattern found elsewhere throughout Hadoop.

I believe this is generally handled by administrative platforms like Ambari 
and Cloudera Manager, but if you would like a CLI management tool then I think 
that may add some value. Like I described, it could take care of the 
permission settings, etc., which would otherwise all be separate manual steps 
from the command line.

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor exist in 
> all the implementations that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for 

[jira] [Commented] (HADOOP-12954) Add a way to change hadoop.security.token.service.use_ip

2016-03-22 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15207529#comment-15207529
 ] 

Chris Nauroth commented on HADOOP-12954:


Thanks, Robert.  I think this makes sense.  I guess the new 
{{SecurityUtil#setConfiguration}} call would have to be wired into the MR AM.  
If you weren't planning on doing that within the scope of this issue, then 
maybe it makes more sense to keep both open instead of resolving one as a 
duplicate of the other.

bq. Though for MAPREDUCE-6565, why can't you have a core-site.xml added via 
mapreduce.application.classpath?

You can, but when I filed MAPREDUCE-6565, I was asserting that you shouldn't 
have to do this.  The behavior is different from a lot of other things in that 
the correct value of use_ip doesn't propagate down via the submitted job.xml.

> Add a way to change hadoop.security.token.service.use_ip
> 
>
> Key: HADOOP-12954
> URL: https://issues.apache.org/jira/browse/HADOOP-12954
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: HADOOP-12954.001.patch
>
>
> Currently, {{hadoop.security.token.service.use_ip}} is set on JVM startup via:
> {code:java}
>   static {
> Configuration conf = new Configuration();
> boolean useIp = conf.getBoolean(
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
> setTokenServiceUseIp(useIp);
>   }
> {code}
> This is a problem for clients, such as Oozie, who don't add *-site.xml files 
> to their classpath.  Oozie normally creates a {{JobClient}} and passes a 
> {{Configuration}} to it with the proper configs we need.  However, because 
> {{hadoop.security.token.service.use_ip}} is specified in a static block like 
> this, and there's no API to change it, Oozie has no way to set it to the 
> non-default value.
> I propose we add a {{setConfiguration}} method which takes a 
> {{Configuration}} and rereads {{hadoop.security.token.service.use_ip}}.  
> There's a few other properties that are also loaded statically on startup 
> that can be reloaded here as well.  





[jira] [Updated] (HADOOP-12954) Add a way to change hadoop.security.token.service.use_ip

2016-03-22 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-12954:
---
Status: Patch Available  (was: Open)

> Add a way to change hadoop.security.token.service.use_ip
> 
>
> Key: HADOOP-12954
> URL: https://issues.apache.org/jira/browse/HADOOP-12954
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: HADOOP-12954.001.patch
>
>
> Currently, {{hadoop.security.token.service.use_ip}} is set on JVM startup via:
> {code:java}
>   static {
> Configuration conf = new Configuration();
> boolean useIp = conf.getBoolean(
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
> setTokenServiceUseIp(useIp);
>   }
> {code}
> This is a problem for clients, such as Oozie, who don't add *-site.xml files 
> to their classpath.  Oozie normally creates a {{JobClient}} and passes a 
> {{Configuration}} to it with the proper configs we need.  However, because 
> {{hadoop.security.token.service.use_ip}} is specified in a static block like 
> this, and there's no API to change it, Oozie has no way to set it to the 
> non-default value.
> I propose we add a {{setConfiguration}} method which takes a 
> {{Configuration}} and rereads {{hadoop.security.token.service.use_ip}}.  
> There's a few other properties that are also loaded statically on startup 
> that can be reloaded here as well.  





[jira] [Updated] (HADOOP-12954) Add a way to change hadoop.security.token.service.use_ip

2016-03-22 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-12954:
---
Attachment: HADOOP-12954.001.patch

> Add a way to change hadoop.security.token.service.use_ip
> 
>
> Key: HADOOP-12954
> URL: https://issues.apache.org/jira/browse/HADOOP-12954
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: HADOOP-12954.001.patch
>
>
> Currently, {{hadoop.security.token.service.use_ip}} is set on JVM startup via:
> {code:java}
>   static {
> Configuration conf = new Configuration();
> boolean useIp = conf.getBoolean(
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
> setTokenServiceUseIp(useIp);
>   }
> {code}
> This is a problem for clients, such as Oozie, that don't add *-site.xml files 
> to their classpath.  Oozie normally creates a {{JobClient}} and passes a 
> {{Configuration}} to it with the proper configs we need.  However, because 
> {{hadoop.security.token.service.use_ip}} is specified in a static block like 
> this, and there's no API to change it, Oozie has no way to set it to the 
> non-default value.
> I propose we add a {{setConfiguration}} method that takes a 
> {{Configuration}} and rereads {{hadoop.security.token.service.use_ip}}.  
> There are a few other properties that are also loaded statically on startup 
> that can be reloaded here as well.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-22 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207516#comment-15207516
 ] 

Wei-Chiu Chuang commented on HADOOP-12862:
--

The checkstyle warning was removed. 
Test failures are unrelated, and I really don't know where these ASF license 
warnings come from.

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch, 
> HADOOP-12862.003.patch, HADOOP-12862.004.patch, HADOOP-12862.005.patch, 
> HADOOP-12862.006.patch, HADOOP-12862.007.patch
>
>
> In a secure environment, SSL is used to encrypt LDAP requests for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For background, the Hadoop name node, as an LDAP client, talks to an LDAP 
> server to resolve the group mapping of a user. In the case of LDAP over SSL, 
> a typical scenario is to establish one-way authentication (the client 
> verifies that the server's certificate is real) by storing the server's 
> certificate in the client's truststore.
> A rarer scenario is to establish two-way authentication: in addition to 
> storing the server's certificate in the client's truststore, the server also 
> verifies that the client's certificate is real, and the client stores its 
> own certificate in its keystore.
> However, the current implementation of LDAP over SSL does not seem to be 
> correct in that it only configures the keystore but no truststore (so the 
> LDAP server can verify Hadoop's certificate, but Hadoop may not be able to 
> verify the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and use them to configure the 
> system properties 
> {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}.
> I am a security layman so my words may be imprecise, but I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html
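
For illustration, a minimal sketch of how such a pair of truststore properties 
could be wired into JSSE. The property names here are hypothetical 
placeholders, not necessarily the ones the patch finally uses:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class LdapSslTrustStoreSketch {
  // Hypothetical property names, for illustration only.
  static final String TRUSTSTORE_KEY =
      "hadoop.security.group.mapping.ldap.ssl.truststore";
  static final String TRUSTSTORE_PASSWORD_KEY =
      "hadoop.security.group.mapping.ldap.ssl.truststore.password";

  /** Point JSSE at a truststore so the LDAP client can verify the server. */
  static void configureTrustStore(Configuration conf) {
    String trustStore = conf.get(TRUSTSTORE_KEY);
    String password = conf.get(TRUSTSTORE_PASSWORD_KEY);
    if (trustStore != null) {
      System.setProperty("javax.net.ssl.trustStore", trustStore);
      if (password != null) {
        System.setProperty("javax.net.ssl.trustStorePassword", password);
      }
    }
  }
}
{code}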



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12950) ShutdownHookManager should have a timeout for each of the Registered shutdown hook

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207502#comment-15207502
 ] 

Hadoop QA commented on HADOOP-12950:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 0s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 21s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
26s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 4 
new + 2 unchanged - 5 fixed = 6 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 51s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 22s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 38s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12794841/HADOOP-12950.00.patch 
|
| JIRA Issue | HADOOP-12950 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 386ca7e49f35 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 

[jira] [Commented] (HADOOP-12954) Add a way to change hadoop.security.token.service.use_ip

2016-03-22 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207503#comment-15207503
 ] 

Robert Kanter commented on HADOOP-12954:


The end problems were different, but it's the same root cause: 
{{hadoop.security.token.service.use_ip}} is initialized from the classpath and 
core-site.xml isn't there.

I think my proposal should work to solve both problems.  The Oozie server can 
call {{SecurityUtil.setConfiguration(conf)}} to pass the {{Configuration}} it 
loaded.  The MR AM could do the same.  Though for MAPREDUCE-6565, why can't you 
have a core-site.xml added via {{mapreduce.application.classpath}}?

> Add a way to change hadoop.security.token.service.use_ip
> 
>
> Key: HADOOP-12954
> URL: https://issues.apache.org/jira/browse/HADOOP-12954
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>
> Currently, {{hadoop.security.token.service.use_ip}} is set on JVM startup via:
> {code:java}
>   static {
> Configuration conf = new Configuration();
> boolean useIp = conf.getBoolean(
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
> setTokenServiceUseIp(useIp);
>   }
> {code}
> This is a problem for clients, such as Oozie, that don't add *-site.xml files 
> to their classpath.  Oozie normally creates a {{JobClient}} and passes a 
> {{Configuration}} to it with the proper configs we need.  However, because 
> {{hadoop.security.token.service.use_ip}} is specified in a static block like 
> this, and there's no API to change it, Oozie has no way to set it to the 
> non-default value.
> I propose we add a {{setConfiguration}} method that takes a 
> {{Configuration}} and rereads {{hadoop.security.token.service.use_ip}}.  
> There are a few other properties that are also loaded statically on startup 
> that can be reloaded here as well.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12954) Add a way to change hadoop.security.token.service.use_ip

2016-03-22 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207496#comment-15207496
 ] 

Chris Nauroth commented on HADOOP-12954:


Hi [~rkanter].  Is this a duplicate of MAPREDUCE-6565?

> Add a way to change hadoop.security.token.service.use_ip
> 
>
> Key: HADOOP-12954
> URL: https://issues.apache.org/jira/browse/HADOOP-12954
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>
> Currently, {{hadoop.security.token.service.use_ip}} is set on JVM startup via:
> {code:java}
>   static {
> Configuration conf = new Configuration();
> boolean useIp = conf.getBoolean(
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
> setTokenServiceUseIp(useIp);
>   }
> {code}
> This is a problem for clients, such as Oozie, that don't add *-site.xml files 
> to their classpath.  Oozie normally creates a {{JobClient}} and passes a 
> {{Configuration}} to it with the proper configs we need.  However, because 
> {{hadoop.security.token.service.use_ip}} is specified in a static block like 
> this, and there's no API to change it, Oozie has no way to set it to the 
> non-default value.
> I propose we add a {{setConfiguration}} method that takes a 
> {{Configuration}} and rereads {{hadoop.security.token.service.use_ip}}.  
> There are a few other properties that are also loaded statically on startup 
> that can be reloaded here as well.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12954) Add a way to change hadoop.security.token.service.use_ip

2016-03-22 Thread Robert Kanter (JIRA)
Robert Kanter created HADOOP-12954:
--

 Summary: Add a way to change hadoop.security.token.service.use_ip
 Key: HADOOP-12954
 URL: https://issues.apache.org/jira/browse/HADOOP-12954
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Robert Kanter
Assignee: Robert Kanter


Currently, {{hadoop.security.token.service.use_ip}} is set on JVM startup via:
{code:java}
  static {
Configuration conf = new Configuration();
boolean useIp = conf.getBoolean(
CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
setTokenServiceUseIp(useIp);
  }
{code}

This is a problem for clients, such as Oozie, that don't add *-site.xml files to 
their classpath.  Oozie normally creates a {{JobClient}} and passes a 
{{Configuration}} to it with the proper configs we need.  However, because 
{{hadoop.security.token.service.use_ip}} is specified in a static block like 
this, and there's no API to change it, Oozie has no way to set it to the 
non-default value.

I propose we add a {{setConfiguration}} method that takes a {{Configuration}} 
and rereads {{hadoop.security.token.service.use_ip}}.  There are a few other 
properties that are also loaded statically on startup that can be reloaded here 
as well.  
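
For illustration, a minimal sketch of the proposed method, assuming it sits 
next to the existing static initializer (the class name here is invented, and 
the reload of the other statically-loaded properties is elided):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;

public class SecurityUtilSketch {

  /**
   * Proposed API: re-read security properties, otherwise fixed at
   * class-load time, from the caller's Configuration.
   */
  public static void setConfiguration(Configuration conf) {
    boolean useIp = conf.getBoolean(
        CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
        CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
    setTokenServiceUseIp(useIp);
  }

  static void setTokenServiceUseIp(boolean useIp) {
    // Elided: flips the static flag consulted when building token
    // service names, exactly as the static block above does today.
  }
}
{code}

A client like Oozie would then call this with the {{Configuration}} it loaded, 
instead of relying on core-site.xml being on the classpath.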



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207464#comment-15207464
 ] 

Hadoop QA commented on HADOOP-12862:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
57s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
58s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
14s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 32 unchanged - 2 fixed = 32 total (was 34) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 57s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 34s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 16s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.fs.shell.find.TestPrint |
|   | hadoop.fs.shell.find.TestPrint0 |
|   | hadoop.fs.shell.find.TestIname |
|   | hadoop.fs.shell.find.TestName |
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Failed junit tests | 

[jira] [Commented] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207450#comment-15207450
 ] 

Hadoop QA commented on HADOOP-12916:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 523 unchanged - 45 fixed = 523 total (was 568) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 41s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 59s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 38s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12794848/HADOOP-12916.04.patch 
|
| JIRA Issue | HADOOP-12916 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5f5716499f03 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 

[jira] [Updated] (HADOOP-8145) Automate testing of LdapGroupsMapping against ApacheDS

2016-03-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-8145:

Attachment: HADOOP-8145.002.patch

Rev02: fixed dependency conflict.

> Automate testing of LdapGroupsMapping against ApacheDS
> --
>
> Key: HADOOP-8145
> URL: https://issues.apache.org/jira/browse/HADOOP-8145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security, test
>Reporter: Jonathan Natkins
>Assignee: Wei-Chiu Chuang
>  Labels: test
> Attachments: HADOOP-8145.001.patch, HADOOP-8145.002.patch
>
>
> HADOOP-8078 introduced an ApacheDS system to the automated tests, and the 
> LdapGroupsMapping could benefit from automated testing against that DS 
> instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10965) Incorrect error message by fs -copyFromLocal

2016-03-22 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207394#comment-15207394
 ] 

Andrew Wang commented on HADOOP-10965:
--

I'd prefer the format that I used above; the parens are a bit ugly, and having 
the fully qualified path there crowds out the provided path. Also, it seems 
like this format lost the quotes around f1?

> Incorrect error message by fs -copyFromLocal
> 
>
> Key: HADOOP-10965
> URL: https://issues.apache.org/jira/browse/HADOOP-10965
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: André Kelpe
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-10965.001.patch
>
>
> Whenever I try to copy data from local to a cluster, but forget to create the 
> parent directory first, I get a very confusing error message:
> {code}
> $ whoami
> fs111
> $ hadoop fs -ls  /user
> Found 2 items
> drwxr-xr-x   - fs111   supergroup  0 2014-08-11 20:17 /user/hive
> drwxr-xr-x   - vagrant supergroup  0 2014-08-11 19:15 /user/vagrant
> $ hadoop fs -copyFromLocal data data
> copyFromLocal: `data': No such file or directory
> {code}
> From the error message, you would think that the local "data" directory does 
> not exist, but that is not the case. What is missing is the "/user/fs111" 
> directory on HDFS. After I created it, the copyFromLocal command works fine.
> I believe the error message is confusing and should at least be fixed. It 
> would be even better if hadoop could restore the old behaviour of 1.x, where 
> copyFromLocal would just create the directories if they are missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12949) Add metrics and HTrace to the s3a connector

2016-03-22 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207387#comment-15207387
 ] 

Colin Patrick McCabe commented on HADOOP-12949:
---

Thanks, [~steve_l].  Providing a rolling window of metrics seems useful.  It 
might be worth doing that in a follow-on JIRA, since it seems like it might 
involve some major code reorganization.  I also wonder whether these metrics 
should be added to HDFS as well (how datastore-neutral are they?)

> Add metrics and HTrace to the s3a connector
> ---
>
> Key: HADOOP-12949
> URL: https://issues.apache.org/jira/browse/HADOOP-12949
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Madhawa Gunasekara
>
> Hi All, 
> s3, GCS, WASB, and other cloud blob stores are becoming increasingly 
> important in Hadoop. But we don't have distributed tracing for these yet. It 
> would be interesting to add distributed tracing here. It would enable 
> collecting really interesting data like probability distributions of PUT and 
> GET requests to s3 and their impact on MR jobs, etc.
> I would like to implement this feature. Please shed some light on this.
> Thanks,
> Madhawa



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10965) Incorrect error message by fs -copyFromLocal

2016-03-22 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207381#comment-15207381
 ] 

John Zhuge commented on HADOOP-10965:
-

Thanks [~andrew.wang], that sounds good. Is it OK if I change the output a 
little, to this?
{code}
$ hdfs dfs -put f1 f1
put: f1 (hdfs://namenode:port/user/jack/f1): No such file or directory
{code}

And only use this format when {{path != fqPath}}.
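
For illustration, a rough sketch of the formatting rule being discussed 
(helper and class names are invented here, not the actual shell code):

{code:java}
import org.apache.hadoop.fs.Path;

class ErrorMessageSketch {
  /** Show the fully qualified path only when it adds information. */
  static String noSuchFileMessage(Path path, Path fqPath) {
    if (path.equals(fqPath)) {
      return String.format("`%s': No such file or directory", path);
    }
    // e.g. `f1' (hdfs://namenode:port/user/jack/f1): No such file or directory
    return String.format("`%s' (%s): No such file or directory", path, fqPath);
  }
}
{code}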


> Incorrect error message by fs -copyFromLocal
> 
>
> Key: HADOOP-10965
> URL: https://issues.apache.org/jira/browse/HADOOP-10965
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: André Kelpe
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-10965.001.patch
>
>
> Whenever I try to copy data from local to a cluster, but forget to create the 
> parent directory first, I get a very confusing error message:
> {code}
> $ whoami
> fs111
> $ hadoop fs -ls  /user
> Found 2 items
> drwxr-xr-x   - fs111   supergroup  0 2014-08-11 20:17 /user/hive
> drwxr-xr-x   - vagrant supergroup  0 2014-08-11 19:15 /user/vagrant
> $ hadoop fs -copyFromLocal data data
> copyFromLocal: `data': No such file or directory
> {code}
> From the error message, you would think that the local "data" directory does 
> not exist, but that is not the case. What is missing is the "/user/fs111" 
> directory on HDFS. After I created it, the copyFromLocal command works fine.
> I believe the error message is confusing and should at least be fixed. It 
> would be even better if hadoop could restore the old behaviour of 1.x, where 
> copyFromLocal would just create the directories if they are missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207367#comment-15207367
 ] 

Hadoop QA commented on HADOOP-12916:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 29s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 18s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 523 unchanged - 45 fixed = 523 total (was 568) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 40s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 2s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 6s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.net.TestClusterTopology |
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12794822/HADOOP-12916.03.patch 
|
| JIRA Issue | HADOOP-12916 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 

[jira] [Commented] (HADOOP-12559) KMS connection failures should trigger TGT renewal

2016-03-22 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207316#comment-15207316
 ] 

Sean Busbey commented on HADOOP-12559:
--

thanks!

> KMS connection failures should trigger TGT renewal
> --
>
> Key: HADOOP-12559
> URL: https://issues.apache.org/jira/browse/HADOOP-12559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0, 2.7.3, 2.6.5
>
> Attachments: HADOOP-12559.00.patch, HADOOP-12559.01.patch, 
> HADOOP-12559.02.patch, HADOOP-12559.03.patch, HADOOP-12559.04.patch, 
> HADOOP-12559.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12559) KMS connection failures should trigger TGT renewal

2016-03-22 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207310#comment-15207310
 ] 

Zhe Zhang commented on HADOOP-12559:


Just backported HADOOP-12682 to branch-2.7 and branch-2.6 as well.

> KMS connection failures should trigger TGT renewal
> --
>
> Key: HADOOP-12559
> URL: https://issues.apache.org/jira/browse/HADOOP-12559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0, 2.7.3, 2.6.5
>
> Attachments: HADOOP-12559.00.patch, HADOOP-12559.01.patch, 
> HADOOP-12559.02.patch, HADOOP-12559.03.patch, HADOOP-12559.04.patch, 
> HADOOP-12559.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12682) Fix TestKMS#testKMSRestart* failure

2016-03-22 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207309#comment-15207309
 ] 

Zhe Zhang commented on HADOOP-12682:


I just backported the patch to branch-2.7 and branch-2.6, per the discussion 
under HADOOP-12559.

> Fix TestKMS#testKMSRestart* failure
> ---
>
> Key: HADOOP-12682
> URL: https://issues.apache.org/jira/browse/HADOOP-12682
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0, 2.7.3, 2.6.5
>
> Attachments: HADOOP-12682.001.patch, HADOOP-12682.002.patch, 
> HADOOP-12682.003.patch
>
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2157/testReport/org.apache.hadoop.crypto.key.kms.server/TestKMS/testKMSRestartSimpleAuth/
> {noformat}
> Error Message
> loginUserFromKeyTab must be done first
> Stacktrace
> java.io.IOException: loginUserFromKeyTab must be done first
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1029)
>   at 
> org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:994)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:478)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:679)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:697)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$10.call(LoadBalancingKMSClientProvider.java:259)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$10.call(LoadBalancingKMSClientProvider.java:256)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.createKey(LoadBalancingKMSClientProvider.java:256)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$6$1.run(TestKMS.java:1003)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$6$1.run(TestKMS.java:1000)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:266)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:75)
> {noformat}
> Seems to be introduced by HADOOP-12559



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12682) Fix TestKMS#testKMSRestart* failure

2016-03-22 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-12682:
---
Fix Version/s: 2.6.5
   2.7.3

> Fix TestKMS#testKMSRestart* failure
> ---
>
> Key: HADOOP-12682
> URL: https://issues.apache.org/jira/browse/HADOOP-12682
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0, 2.7.3, 2.6.5
>
> Attachments: HADOOP-12682.001.patch, HADOOP-12682.002.patch, 
> HADOOP-12682.003.patch
>
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2157/testReport/org.apache.hadoop.crypto.key.kms.server/TestKMS/testKMSRestartSimpleAuth/
> {noformat}
> Error Message
> loginUserFromKeyTab must be done first
> Stacktrace
> java.io.IOException: loginUserFromKeyTab must be done first
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1029)
>   at 
> org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:994)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:478)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:679)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:697)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$10.call(LoadBalancingKMSClientProvider.java:259)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$10.call(LoadBalancingKMSClientProvider.java:256)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.createKey(LoadBalancingKMSClientProvider.java:256)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$6$1.run(TestKMS.java:1003)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$6$1.run(TestKMS.java:1000)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:266)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:75)
> {noformat}
> Seems to be introduced by HADOOP-12559



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12947) Update documentation Hadoop Groups Mapping to add static group mapping, negative cache

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207300#comment-15207300
 ] 

Hadoop QA commented on HADOOP-12947:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 2s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12794827/HADOOP-12947.001.patch
 |
| JIRA Issue | HADOOP-12947 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 46870a48e5ce 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e7ed05e |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8897/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Update documentation Hadoop Groups Mapping to add static group mapping, 
> negative cache
> --
>
> Key: HADOOP-12947
> URL: https://issues.apache.org/jira/browse/HADOOP-12947
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HADOOP-12947.001.patch
>
>
> After _Hadoop Group Mapping_ was written, I subsequently found a number of 
> other things that should be added or updated: 
> # static group mapping: statically map users to group names (HADOOP-10142; 
> see the config sketch after this list)
> # negative cache: avoid spamming the NameNode with invalid user names 
> (HADOOP-10755)
> # updated query pattern for LDAP groups mapping when posix semantics are 
> supported (HADOOP-9477)
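
For illustration, a hedged example of the static mapping knob from 
HADOOP-10142 that the doc update would cover (the value syntax shown here 
should be checked against that patch):

{code:java}
import org.apache.hadoop.conf.Configuration;

class StaticGroupMappingExample {
  static Configuration exampleConf() {
    Configuration conf = new Configuration();
    // Semicolon-separated user entries, comma-separated group lists;
    // an empty list maps the user to no groups.
    conf.set("hadoop.user.group.static.mapping.overrides",
        "dr.who=;hdfs=hadoop,supergroup");
    return conf;
  }
}
{code}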



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-22 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12916:

Attachment: (was: HADOOP-12916.04.patch)

> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch
>
>
> Currently, the back off policy from HADOOP-10597 is hard coded to be based 
> on whether the call queue is full. This ticket is opened to allow flexible 
> back off policies, such as one based on the moving average of response time 
> of RPC calls at different priorities. 
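
For illustration, a skeletal sketch of what a pluggable back off policy could 
look like; the interface and class names are invented for this sketch, not the 
ones the patch introduces:

{code:java}
import java.util.concurrent.BlockingQueue;

/** Hypothetical policy: decide whether an incoming call should back off. */
interface RpcBackoffPolicy {
  boolean shouldBackOff(int priorityLevel);
}

/** Existing behavior: back off only when the call queue is full. */
class QueueFullBackoff implements RpcBackoffPolicy {
  private final BlockingQueue<?> queue;
  QueueFullBackoff(BlockingQueue<?> queue) { this.queue = queue; }
  public boolean shouldBackOff(int priorityLevel) {
    return queue.remainingCapacity() == 0;
  }
}

/** Proposed alternative: back off when average response time degrades. */
class ResponseTimeBackoff implements RpcBackoffPolicy {
  private final double[] thresholdMillis;    // per priority level
  private final double[] avgResponseMillis;  // maintained by the scheduler
  ResponseTimeBackoff(double[] thresholds, double[] averages) {
    this.thresholdMillis = thresholds;
    this.avgResponseMillis = averages;
  }
  public boolean shouldBackOff(int priorityLevel) {
    return avgResponseMillis[priorityLevel] > thresholdMillis[priorityLevel];
  }
}
{code}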



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-22 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12916:

Attachment: HADOOP-12916.04.patch

> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch
>
>
> Currently, the back off policy from HADOOP-10597 is hard coded to be based 
> on whether the call queue is full. This ticket is opened to allow flexible 
> back off policies, such as one based on the moving average of response time 
> of RPC calls at different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-22 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-12916:
-
Hadoop Flags: Reviewed

+1 the new patch looks good.

> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch
>
>
> Currently, the back off policy from HADOOP-10597 is hard coded to be based 
> on whether the call queue is full. This ticket is opened to allow flexible 
> back off policies, such as one based on the moving average of response time 
> of RPC calls at different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10965) Incorrect error message by fs -copyFromLocal

2016-03-22 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207286#comment-15207286
 ] 

Andrew Wang commented on HADOOP-10965:
--

So the issue is not homedir-specific; it relates to the CWD (current working 
directory), which by default happens to be the homedir.

As I suggested above, why not just add the fully qualified path to the error 
message? The path could be missing on either the local FS or in HDFS, and 
showing the fully qualified path would address both cases without ambiguity.

> Incorrect error message by fs -copyFromLocal
> 
>
> Key: HADOOP-10965
> URL: https://issues.apache.org/jira/browse/HADOOP-10965
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: André Kelpe
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-10965.001.patch
>
>
> Whenever I try to copy data from local to a cluster, but forget to create the 
> parent directory first, I get a very confusing error message:
> {code}
> $ whoami
> fs111
> $ hadoop fs -ls  /user
> Found 2 items
> drwxr-xr-x   - fs111   supergroup  0 2014-08-11 20:17 /user/hive
> drwxr-xr-x   - vagrant supergroup  0 2014-08-11 19:15 /user/vagrant
> $ hadoop fs -copyFromLocal data data
> copyFromLocal: `data': No such file or directory
> {code}
> From the error message, you would think that the local "data" directory does 
> not exist, but that is not the case. What is missing is the "/user/fs111" 
> directory on HDFS. After I created it, the copyFromLocal command works fine.
> I believe the error message is confusing and should at least be fixed. It 
> would be even better if hadoop could restore the old behaviour of 1.x, where 
> copyFromLocal would just create the directories if they are missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-22 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12916:

Attachment: HADOOP-12916.04.patch

Thanks [~szetszwo]! Attaching patch 04 to remove the redundant synchronization 
when accessing atomic arrays.
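
For context, the kind of change being described: accesses through 
java.util.concurrent atomic arrays are already thread-safe, so wrapping them 
in synchronized blocks is redundant. A small illustration (the field name is 
borrowed from the discussion below; the surrounding class is invented):

{code:java}
import java.util.concurrent.atomic.AtomicLongArray;

class WindowMetricsSketch {
  private final AtomicLongArray callCountInLastWindow = new AtomicLongArray(4);

  void record(int priority) {
    // No synchronized block needed: getAndIncrement is atomic on its own.
    callCountInLastWindow.getAndIncrement(priority);
  }

  long read(int priority) {
    return callCountInLastWindow.get(priority);
  }
}
{code}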

> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch
>
>
> Currently, the back off policy from HADOOP-10597 is hard coded to be based 
> on whether the call queue is full. This ticket is opened to allow flexible 
> back off policies, such as one based on the moving average of response time 
> of RPC calls at different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12950) ShutdownHookManager should have a timeout for each of the Registered shutdown hook

2016-03-22 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12950:

Attachment: HADOOP-12950.00.patch

Attaching a patch that allows a shutdown hook to be registered with a timeout. 
Shutdown hooks registered without a timeout, like all existing ones, will each 
get a maximum of 10s before JVM termination. 
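
For illustration, a rough sketch of running each hook with a timeout, along 
the lines described (names invented here; the real patch extends the existing 
ShutdownHookManager):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

class TimedShutdownHookSketch {
  /** Default budget for hooks registered without an explicit timeout. */
  static final long DEFAULT_TIMEOUT_SECONDS = 10;

  /** Run one hook, abandoning it if it exceeds its timeout. */
  static void runHook(Runnable hook, long timeout, TimeUnit unit) {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    Future<?> future = executor.submit(hook);
    try {
      future.get(timeout, unit);
    } catch (TimeoutException e) {
      future.cancel(true); // interrupt the hook; move on to the next one
    } catch (Exception e) {
      // the hook itself failed; continue so one bad hook can't block shutdown
    } finally {
      executor.shutdownNow();
    }
  }
}
{code}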

> ShutdownHookManager should have a timeout for each of the Registered shutdown 
> hook
> --
>
> Key: HADOOP-12950
> URL: https://issues.apache.org/jira/browse/HADOOP-12950
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12950.00.patch
>
>
> HADOOP-8325 added a ShutdownHookManager to be used by different components 
> instead of the JVM shutdown hook. For each of the shutdown hooks registered, 
> we currently don't have an upper bound on its execution time. We have seen 
> the namenode fail to shut down completely (waiting for a shutdown hook to 
> finish after failover) for a long period of time, which breaks namenode high 
> availability scenarios. This ticket is opened to allow specifying a timeout 
> value for each registered shutdown hook.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12950) ShutdownHookManager should have a timeout for each of the Registered shutdown hook

2016-03-22 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12950:

Status: Patch Available  (was: Open)

> ShutdownHookManager should have a timeout for each of the Registered shutdown 
> hook
> --
>
> Key: HADOOP-12950
> URL: https://issues.apache.org/jira/browse/HADOOP-12950
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12950.00.patch
>
>
> HADOOP-8325 added a ShutdownHookManager to be used by different components 
> instead of the JVM shutdown hook. For each of the shutdown hooks registered, 
> we currently don't have an upper bound on its execution time. We have seen 
> the namenode fail to shut down completely (waiting for a shutdown hook to 
> finish after failover) for a long period of time, which breaks namenode high 
> availability scenarios. This ticket is opened to allow specifying a timeout 
> value for each registered shutdown hook.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8145) Automate testing of LdapGroupsMapping against ApacheDS

2016-03-22 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207217#comment-15207217
 ] 

Wei-Chiu Chuang commented on HADOOP-8145:
-

The test failures are related to conflicting dependencies. Will post a new 
patch soon to fix them.

> Automate testing of LdapGroupsMapping against ApacheDS
> --
>
> Key: HADOOP-8145
> URL: https://issues.apache.org/jira/browse/HADOOP-8145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security, test
>Reporter: Jonathan Natkins
>Assignee: Wei-Chiu Chuang
>  Labels: test
> Attachments: HADOOP-8145.001.patch
>
>
> HADOOP-8078 introduced an ApacheDS system to the automated tests, and the 
> LdapGroupsMapping could benefit from automated testing against that DS 
> instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207216#comment-15207216
 ] 

Hadoop QA commented on HADOOP-12953:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 13s {color} 
| {color:red} HADOOP-12953 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12794826/HADOOP-12953.001.patch
 |
| JIRA Issue | HADOOP-12953 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8895/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> New API for libhdfs to get FileSystem object as a proxy user
> 
>
> Key: HADOOP-12953
> URL: https://issues.apache.org/jira/browse/HADOOP-12953
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Uday Kale
>Assignee: Uday Kale
> Attachments: HADOOP-12953.001.patch
>
>
> Secure impersonation in HDFS needs users to create proxy users and work with 
> those. In libhdfs, the hdfsBuilder accepts a userName but calls 
> FileSystem.get() or FileSystem.newInstance() with the user name to connect as. 
> But both of these interfaces use getBestUGI() to get the UGI for the given 
> user. That does not fit services whose end-users do not access HDFS directly, 
> but go via the service: the end-user first gets authenticated with LDAP, and 
> the service owner then impersonates the end-user to eventually provide the 
> underlying data.
> For such services that authenticate end-users via LDAP, the end users are not 
> authenticated by Kerberos, so their authentication details won't be in the 
> Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this 
> either. 
> Hence the need for a new libhdfs API to get the FileSystem object as a proxy 
> user, following the 'secure impersonation' recommendations. This approach is 
> secure since HDFS authenticates the service owner and then validates the 
> service owner's right to impersonate the given user, as allowed by the 
> hadoop.proxyuser.* parameters of the HDFS config.
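
For context, the 'secure impersonation' flow the description refers to looks 
roughly like the following on the Java side. This is a hedged sketch using the 
standard UserGroupInformation API, assuming the service owner has already 
logged in (e.g. from a Kerberos keytab); the class and method names are 
illustrative:

{code}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserFileSystem {
  public static FileSystem getAsProxyUser(String endUser, Configuration conf)
      throws Exception {
    // The service owner's own credentials (e.g. from a keytab login).
    UserGroupInformation serviceUgi = UserGroupInformation.getLoginUser();
    // HDFS authenticates the service owner and then checks the proxy-user
    // rules before allowing the impersonation.
    UserGroupInformation proxyUgi =
        UserGroupInformation.createProxyUser(endUser, serviceUgi);
    return proxyUgi.doAs(
        (PrivilegedExceptionAction<FileSystem>) () -> FileSystem.get(conf));
  }
}
{code}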



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12862:
-
Attachment: HADOOP-12862.007.patch

Attached rev07 to fix checkstyle warning.

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch, 
> HADOOP-12862.003.patch, HADOOP-12862.004.patch, HADOOP-12862.005.patch, 
> HADOOP-12862.006.patch, HADOOP-12862.007.patch
>
>
> In a secure environment, SSL is used to encrypt LDAP requests for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For information, the Hadoop name node, as an LDAP client, talks to an LDAP 
> server to resolve the group mapping of a user. In the case of LDAP over SSL, 
> a typical scenario is to establish one-way authentication (the client 
> verifies that the server's certificate is real) by storing the server's 
> certificate in the client's truststore.
> A rarer scenario is to establish two-way authentication: in addition to the 
> client storing the server's certificate in its truststore to verify the 
> server, the server also verifies that the client's certificate is real, and 
> the client stores its own certificate in its keystore.
> However, the current implementation for LDAP over SSL does not seem to be 
> correct, in that it only configures the keystore but not the truststore (so 
> the LDAP server can verify Hadoop's certificate, but Hadoop may not be able 
> to verify the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and use those to configure the 
> system properties 
> {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}.
> I am a security layman so my words can be imprecise. But I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html
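
For illustration, here is a rough sketch of what the proposed truststore 
configuration amounts to at the JNDI level; the helper and its parameters are 
assumptions for illustration, not the patch itself:

{code}
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class LdapsTruststoreSketch {
  public static DirContext connect(String ldapUrl, String trustStorePath,
      String trustStorePassword) throws Exception {
    // The proposed config pair would ultimately feed these system properties,
    // letting the client verify the LDAP server's certificate.
    System.setProperty("javax.net.ssl.trustStore", trustStorePath);
    System.setProperty("javax.net.ssl.trustStorePassword", trustStorePassword);

    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
    env.put(Context.PROVIDER_URL, ldapUrl);    // e.g. ldaps://ldap.example.com:636
    env.put(Context.SECURITY_PROTOCOL, "ssl"); // LDAP over SSL
    return new InitialDirContext(env);
  }
}
{code}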



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-22 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207177#comment-15207177
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-12916:
--

The new patch looks good.  One last comment:
- Since averageResponseTimeInLastWindow and callCountInLastWindow are atomic 
arrays, we can remove synchronized (this) when accessing them.
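
For illustration, a minimal sketch of the point being made: per-element 
operations on an atomic array are already thread-safe, so a surrounding 
synchronized (this) block is redundant. The class and field below are 
illustrative, not the patch itself:

{code}
import java.util.concurrent.atomic.AtomicLongArray;

public class WindowStats {
  // One slot per priority level, as in the patch discussion.
  private final AtomicLongArray callCountInLastWindow;

  public WindowStats(int numLevels) {
    callCountInLastWindow = new AtomicLongArray(numLevels);
  }

  public void recordCall(int priorityLevel) {
    // Atomic per-element increment; no synchronized (this) required.
    callCountInLastWindow.incrementAndGet(priorityLevel);
  }

  public long getCallCount(int priorityLevel) {
    return callCountInLastWindow.get(priorityLevel);
  }
}
{code}

Note that atomicity is per element; if a cross-array invariant were ever 
needed, some coordination would still be required.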

> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch
>
>
> Currently, the back off policy from HADOOP-10597 is hard coded to be based on 
> whether the call queue is full. This ticket is opened to allow flexible back 
> off policies, such as ones based on the moving average response time of RPC 
> calls at different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12947) Update documentation Hadoop Groups Mapping to add static group mapping, negative cache

2016-03-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12947:
-
Status: Patch Available  (was: Open)

> Update documentation Hadoop Groups Mapping to add static group mapping, 
> negative cache
> --
>
> Key: HADOOP-12947
> URL: https://issues.apache.org/jira/browse/HADOOP-12947
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HADOOP-12947.001.patch
>
>
> After _Hadoop Group Mapping_ was written, I found a number of other things 
> that should be added/updated: 
> # static group mapping, statically map users to group names (HADOOP-10142)
> # negative cache, to avoid spamming NameNode with invalid user names 
> (HADOOP-10755)
> # update query pattern for LDAP groups mapping if posix semantics are 
> supported (HADOOP-9477)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12947) Update documentation Hadoop Groups Mapping to add static group mapping, negative cache

2016-03-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12947:
-
Attachment: HADOOP-12947.001.patch

Posted a patch; wrote about static group mapping, caching/negative caching, 
and posix groups.

> Update documentation Hadoop Groups Mapping to add static group mapping, 
> negative cache
> --
>
> Key: HADOOP-12947
> URL: https://issues.apache.org/jira/browse/HADOOP-12947
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HADOOP-12947.001.patch
>
>
> After _Hadoop Group Mapping_ was written, I found a number of other things 
> that should be added/updated: 
> # static group mapping, statically map users to group names (HADOOP-10142)
> # negative cache, to avoid spamming NameNode with invalid user names 
> (HADOOP-10755)
> # update query pattern for LDAP groups mapping if posix semantics are 
> supported (HADOOP-9477)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user

2016-03-22 Thread Uday Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uday Kale updated HADOOP-12953:
---
Status: Patch Available  (was: Open)

> New API for libhdfs to get FileSystem object as a proxy user
> 
>
> Key: HADOOP-12953
> URL: https://issues.apache.org/jira/browse/HADOOP-12953
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Uday Kale
>Assignee: Uday Kale
> Attachments: HADOOP-12953.001.patch
>
>
> Secure impersonation in HDFS needs users to create proxy users and work with 
> those. In libhdfs, the hdfsBuilder accepts a userName but calls 
> FileSystem.get() or FileSystem.newInstance() with the user name to connect as. 
> But both of these interfaces use getBestUGI() to get the UGI for the given 
> user. That does not fit services whose end-users do not access HDFS directly, 
> but go via the service: the end-user first gets authenticated with LDAP, and 
> the service owner then impersonates the end-user to eventually provide the 
> underlying data.
> For such services that authenticate end-users via LDAP, the end users are not 
> authenticated by Kerberos, so their authentication details won't be in the 
> Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this 
> either. 
> Hence the need for a new libhdfs API to get the FileSystem object as a proxy 
> user, following the 'secure impersonation' recommendations. This approach is 
> secure since HDFS authenticates the service owner and then validates the 
> service owner's right to impersonate the given user, as allowed by the 
> hadoop.proxyuser.* parameters of the HDFS config.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user

2016-03-22 Thread Uday Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uday Kale updated HADOOP-12953:
---
Attachment: HADOOP-12953.001.patch

> New API for libhdfs to get FileSystem object as a proxy user
> 
>
> Key: HADOOP-12953
> URL: https://issues.apache.org/jira/browse/HADOOP-12953
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Uday Kale
>Assignee: Uday Kale
> Attachments: HADOOP-12953.001.patch
>
>
> Secure impersonation in HDFS needs users to create proxy users and work with 
> those. In libhdfs, the hdfsBuilder accepts a userName but calls 
> FileSystem.get() or FileSystem.newInstance() with the user name to connect as. 
> But both of these interfaces use getBestUGI() to get the UGI for the given 
> user. That does not fit services whose end-users do not access HDFS directly, 
> but go via the service: the end-user first gets authenticated with LDAP, and 
> the service owner then impersonates the end-user to eventually provide the 
> underlying data.
> For such services that authenticate end-users via LDAP, the end users are not 
> authenticated by Kerberos, so their authentication details won't be in the 
> Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this 
> either. 
> Hence the need for a new libhdfs API to get the FileSystem object as a proxy 
> user, following the 'secure impersonation' recommendations. This approach is 
> secure since HDFS authenticates the service owner and then validates the 
> service owner's right to impersonate the given user, as allowed by the 
> hadoop.proxyuser.* parameters of the HDFS config.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-03-22 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207098#comment-15207098
 ] 

Mike Yoder commented on HADOOP-12942:
-

{quote}
I have heard reluctance from folks in the past for having commands prompt for 
passwords, and it would certainly break the scriptability of it. We would have 
to add a switch that enabled the prompting for a password - if we were to add 
it to the credential create subcommand.
{quote}

Agreed. Today, as you know, the credential create command prompts for a 
password, but there is an undocumented "-value" argument that can be used. I'd 
stick with the same scheme, where either a prompt or a command line argument 
is possible.

{quote}
This same password file is used in lots of scenarios though: KMS, JavaKeyStore 
providers for the key provider API, Oozie, signing secret providers, etc. I 
wonder whether a separate command for it would make sense.
{quote}
Conceptually, yes, but aren't config values different?  I'm aware of two:
* alias/AbstractJavaKeyStoreProvider: 
hadoop.security.credstore.java-keystore-provider.password-file
* key/JavaKeyStoreProvider: 
hadoop.security.keystore.java-keystore-provider.password-file

{quote}
Keep in mind that we would need to do a number of things for this.
1. prompt for the password
2. persist it
3. set appropriate permissions on the file
4. somehow determine the filename to use (probably based on the password file 
name configuration) which would need to be provided by the user as well
5. allow for use of the same password file for multiple keystores or scenarios
6. allow for random-ish generated password without prompt
{quote}
I think it's even more complicated. :-) The user could want to use the 
environment variable when the credential is consumed, and so would want to 
provide it to the command but would not want to deal with anything 
file-related. 

Also, it's conceivable that the user could have constructed the file 
themselves, although this doesn't seem particularly user-friendly. 

So we have scenarios for hadoop credential create|list|etc that look like
# Here is the credstore password from a prompt
# Here is the credstore password on the command line
# The credstore password is already in a file in the "expected" location (set 
up either by hand or via your new pwdfile command).

Making a command to manage the password file makes sense. I think that we 
shouldn't ask the user to give it the property name though: you could modify 
KeyShell and CredentialShell to have a new subcommand of 'pwdfile', thusly:
* hadoop credential pwdfile \[args\]
* hadoop key pwdfile \[args\]

And they could share an implementation. This way the user does not have to 
remember "hadoop.security.credstore.java-keystore-provider.password-file" or 
the like. This also means that the provider selected needs a new interface to 
create said file, if applicable.

I like the auto-generate-password option for the file. I think the default 
would be to still prompt for the password, though.  So yeah, adding a pwdfile 
command seems like a good idea.

The thing about the existing design that I'm going back and forth on is that 
the CredentialShell is high-level: it selects a provider and then simply 
passes information to the provider. The password is implied and not passed 
directly, so the CredentialShell has no notion of whether the underlying 
provider actually has a password.

So, for example, it would be daft of CredentialShell to accept a password on 
the command line if one is provided in a file, and even more daft if no 
password was specified on the command line and the password wasn't in the 
password file either. Furthermore, it would be silly to accept a password when 
the underlying provider does not need a password at all for proper operation 
(example: the UserProvider). There has to be some amount of communication 
between the CredentialShell and the provider in order to get the "is a 
password required" and "where precisely is the password" cases correct.

To make this even more interesting, in the various providers with a key store, 
the keyStore is either created or opened in the constructor, requiring that all 
the information be presented up front - without scope for the back and forth of 
"do you need a password and where" from the provider.

So... one way to deal with this is to move the keyStore.load() call out of the 
constructor and defer it until the first get/set/delete credential entry call. 
Then expose interfaces along the lines of "does this provider already have the 
password somehow?" and "set the password directly". We'd have to add default 
behavior in CredentialProvider (and KeyProvider) and then implement in the ones 
that matter.
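
A rough sketch of that deferred-load shape, with hypothetical names (this is 
not the actual CredentialProvider API):

{code}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.GeneralSecurityException;
import java.security.KeyStore;

public abstract class LazyKeyStoreProvider {
  private final Path storePath;
  private KeyStore keyStore; // deliberately not loaded in the constructor

  protected LazyKeyStoreProvider(Path storePath) {
    this.storePath = storePath;
  }

  /** Hypothetical hook: does this provider already have a password somehow? */
  public abstract boolean hasPassword();

  /** Hypothetical hook: lets the shell supply the password directly. */
  public abstract char[] getPassword() throws IOException;

  /** The first get/set/delete call triggers the load, after the password
      question has been settled between the shell and the provider. */
  protected synchronized KeyStore getKeyStore() throws IOException {
    if (keyStore == null) {
      try (InputStream in = Files.newInputStream(storePath)) {
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(in, getPassword());
        keyStore = ks;
      } catch (GeneralSecurityException e) {
        throw new IOException("Cannot load keystore " + storePath, e);
      }
    }
    return keyStore;
  }
}
{code}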

The downside to this approach is that we move around a few error conditions. 
However, everything can throw an IOException, so maybe this isn't a big deal. 
Seem 

[jira] [Updated] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user

2016-03-22 Thread Uday Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uday Kale updated HADOOP-12953:
---
Description: 
Secure impersonation in HDFS needs users to create proxy users and work with 
those. In libhdfs, the hdfsBuilder accepts a userName but calls FileSystem.get() 
or FileSystem.newInstance() with the user name to connect as. But both of these 
interfaces use getBestUGI() to get the UGI for the given user. That does not 
fit services whose end-users do not access HDFS directly, but go via the 
service: the end-user first gets authenticated with LDAP, and the service 
owner then impersonates the end-user to eventually provide the underlying 
data.

For such services that authenticate end-users via LDAP, the end users are not 
authenticated by Kerberos, so their authentication details won't be in the 
Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this 
either. 

Hence the need for a new libhdfs API to get the FileSystem object as a proxy 
user, following the 'secure impersonation' recommendations. This approach is 
secure since HDFS authenticates the service owner and then validates the 
service owner's right to impersonate the given user, as allowed by the 
hadoop.proxyuser.* parameters of the HDFS config.


  was:
Secure impersonation in HDFS needs users to create proxy users and work with 
those. In libhdfs, the hdfsBuilder accepts a userName but calls FileSystem.get() 
or FileSystem.newInstance() with the user name to connect as. But both of these 
interfaces use getBestUGI() to get the UGI for the given user. For services in 
Hadoop that authenticate end-users via LDAP, the end users are not 
authenticated by Kerberos, so their authentication details won't be in the 
Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this 
either. 

Hence the need for a new libhdfs API to get the FileSystem object as a proxy 
user, following the 'secure impersonation' recommendations.


> New API for libhdfs to get FileSystem object as a proxy user
> 
>
> Key: HADOOP-12953
> URL: https://issues.apache.org/jira/browse/HADOOP-12953
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Uday Kale
>Assignee: Uday Kale
>
> Secure impersonation in HDFS needs users to create proxy users and work with 
> those. In libhdfs, the hdfsBuilder accepts a userName but calls 
> FileSystem.get() or FileSystem.newInstance() with the user name to connect as. 
> But both of these interfaces use getBestUGI() to get the UGI for the given 
> user. That does not fit services whose end-users do not access HDFS directly, 
> but go via the service: the end-user first gets authenticated with LDAP, and 
> the service owner then impersonates the end-user to eventually provide the 
> underlying data.
> For such services that authenticate end-users via LDAP, the end users are not 
> authenticated by Kerberos, so their authentication details won't be in the 
> Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this 
> either. 
> Hence the need for a new libhdfs API to get the FileSystem object as a proxy 
> user, following the 'secure impersonation' recommendations. This approach is 
> secure since HDFS authenticates the service owner and then validates the 
> service owner's right to impersonate the given user, as allowed by the 
> hadoop.proxyuser.* parameters of the HDFS config.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-22 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12916:

Attachment: HADOOP-12916.03.patch

Attached a new patch that uses AtomicLongArray/AtomicDoubleArray for the 
previous window's response time/count. Also includes some minor cleanup for a 
checkstyle issue.

> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch
>
>
> Currently, the back off policy from HADOOP-10597 is hard coded to be based on 
> whether the call queue is full. This ticket is opened to allow flexible back 
> off policies, such as ones based on the moving average response time of RPC 
> calls at different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user

2016-03-22 Thread Uday Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uday Kale updated HADOOP-12953:
---
Description: 
Secure impersonation in HDFS needs users to create proxy users and work with 
those. In libhdfs, the hdfsBuilder accepts a userName but calls FileSystem.get() 
or FileSystem.newInstance() with the user name to connect as. But both of these 
interfaces use getBestUGI() to get the UGI for the given user. For services in 
Hadoop that authenticate end-users via LDAP, the end users are not 
authenticated by Kerberos, so their authentication details won't be in the 
Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this 
either. 

Hence the need for a new libhdfs API to get the FileSystem object as a proxy 
user, following the 'secure impersonation' recommendations.

> New API for libhdfs to get FileSystem object as a proxy user
> 
>
> Key: HADOOP-12953
> URL: https://issues.apache.org/jira/browse/HADOOP-12953
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Uday Kale
>Assignee: Uday Kale
>
> Secure impersonation in HDFS needs users to create proxy users and work with 
> those. In libhdfs, the hdfsBuilder accepts a userName but calls 
> FileSystem.get() or FileSystem.newInstance() with the user name to connect as. 
> But both of these interfaces use getBestUGI() to get the UGI for the given 
> user. For services in Hadoop that authenticate end-users via LDAP, the end 
> users are not authenticated by Kerberos, so their authentication details won't 
> be in the Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way 
> to get this either. 
> Hence the need for a new libhdfs API to get the FileSystem object as a proxy 
> user, following the 'secure impersonation' recommendations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user

2016-03-22 Thread Uday Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uday Kale updated HADOOP-12953:
---
Component/s: fs

> New API for libhdfs to get FileSystem object as a proxy user
> 
>
> Key: HADOOP-12953
> URL: https://issues.apache.org/jira/browse/HADOOP-12953
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Uday Kale
>Assignee: Uday Kale
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user

2016-03-22 Thread Uday Kale (JIRA)
Uday Kale created HADOOP-12953:
--

 Summary: New API for libhdfs to get FileSystem object as a proxy 
user
 Key: HADOOP-12953
 URL: https://issues.apache.org/jira/browse/HADOOP-12953
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Uday Kale
Assignee: Uday Kale






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12886) Exclude weak ciphers in SSLFactory through ssl-server.xml

2016-03-22 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206910#comment-15206910
 ] 

Wei-Chiu Chuang commented on HADOOP-12886:
--

The unit test failures are unrelated, and the ASF license warnings are 
unrelated too.

> Exclude weak ciphers in SSLFactory through ssl-server.xml
> -
>
> Key: HADOOP-12886
> URL: https://issues.apache.org/jira/browse/HADOOP-12886
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: Netty, datanode, security
> Attachments: HADOOP-12886.001.patch, HADOOP-12886.002.patch, 
> HADOOP-12886.003.patch
>
>
> HADOOP-12668 added support to exclude weak ciphers in HttpServer2, which is 
> good for name nodes. But the data node web UI is based on Netty, which uses 
> SSLFactory and does not read ssl-server.xml to exclude the ciphers.
> We should also add the same support for Netty, for consistency.
> I will attach a full patch later.
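
For illustration, the filtering involved is roughly the following; the 
comma-separated exclude list and the helper are assumptions about the patch, 
not its actual code:

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import javax.net.ssl.SSLEngine;

public class CipherExcluder {
  // Trim the enabled suites on an SSLEngine using an exclude list, e.g. one
  // read from ssl-server.xml.
  public static void excludeCiphers(SSLEngine engine, String excludeList) {
    Set<String> excluded =
        new HashSet<>(Arrays.asList(excludeList.split("\\s*,\\s*")));
    String[] kept = Arrays.stream(engine.getEnabledCipherSuites())
        .filter(suite -> !excluded.contains(suite))
        .toArray(String[]::new);
    engine.setEnabledCipherSuites(kept);
  }
}
{code}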



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12886) Exclude weak ciphers in SSLFactory through ssl-server.xml

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206892#comment-15206892
 ] 

Hadoop QA commented on HADOOP-12886:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
2s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 17s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 51s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 0s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 36s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.net.TestDNS |
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Failed junit tests | hadoop.net.TestDNS |
|   | hadoop.net.TestClusterTopology |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12794793/HADOOP-12886.003.patch
 |
| JIRA Issue | HADOOP-12886 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d722c9d5b32a 

[jira] [Commented] (HADOOP-8145) Automate testing of LdapGroupsMapping against ApacheDS

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206861#comment-15206861
 ] 

Hadoop QA commented on HADOOP-8145:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 8s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 6s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s 
{color} | {color:red} Patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 26s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.security.TestUGILoginFromKeytab |
|   | hadoop.security.token.delegation.web.TestWebDelegationToken |
|   | hadoop.security.TestKDiag |
|   | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.security.TestLDAPServer |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.security.ssl.TestReloadingX509TrustManager |
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Failed junit tests | hadoop.security.TestUGILoginFromKeytab |
|   | 

[jira] [Updated] (HADOOP-12886) Exclude weak ciphers in SSLFactory through ssl-server.xml

2016-03-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12886:
-
Attachment: HADOOP-12886.003.patch

Rev03: fixed the checkstyle warning.

> Exclude weak ciphers in SSLFactory through ssl-server.xml
> -
>
> Key: HADOOP-12886
> URL: https://issues.apache.org/jira/browse/HADOOP-12886
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: Netty, datanode, security
> Attachments: HADOOP-12886.001.patch, HADOOP-12886.002.patch, 
> HADOOP-12886.003.patch
>
>
> HADOOP-12668 added support to exclude weak ciphers in HttpServer2, which is 
> good for name nodes. But the data node web UI is based on Netty, which uses 
> SSLFactory and does not read ssl-server.xml to exclude the ciphers.
> We should also add the same support for Netty, for consistency.
> I will attach a full patch later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-22 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206754#comment-15206754
 ] 

Wei-Chiu Chuang commented on HADOOP-12862:
--

Would be nice to add test cases, if HADOOP-8145 gets committed before this one.

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch, 
> HADOOP-12862.003.patch, HADOOP-12862.004.patch, HADOOP-12862.005.patch, 
> HADOOP-12862.006.patch
>
>
> In a secure environment, SSL is used to encrypt LDAP requests for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For information, the Hadoop name node, as an LDAP client, talks to an LDAP 
> server to resolve the group mapping of a user. In the case of LDAP over SSL, 
> a typical scenario is to establish one-way authentication (the client 
> verifies that the server's certificate is real) by storing the server's 
> certificate in the client's truststore.
> A rarer scenario is to establish two-way authentication: in addition to the 
> client storing the server's certificate in its truststore to verify the 
> server, the server also verifies that the client's certificate is real, and 
> the client stores its own certificate in its keystore.
> However, the current implementation for LDAP over SSL does not seem to be 
> correct, in that it only configures the keystore but not the truststore (so 
> the LDAP server can verify Hadoop's certificate, but Hadoop may not be able 
> to verify the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and use those to configure the 
> system properties 
> {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}.
> I am a security layman so my words can be imprecise. But I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8145) Automate testing of LdapGroupsMapping against ApacheDS

2016-03-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-8145:

Attachment: HADOOP-8145.001.patch

Rev01: a basic test class built using the Apache Directory Service (ApacheDS) 
test framework.

The test class has two test cases: an anonymous login and an authenticated 
login, using posix nis semantics.

This patch introduces a dependency on the ADS test framework; however, an ADS 
dependency already existed, so it's not really an extra dependency. Moreover, 
using this test framework is much cleaner than introducing a full standalone 
Apache Directory Server.
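
For readers unfamiliar with the framework, a test class in this style looks 
roughly like the following; the annotation values and base-class details are 
recalled from the ApacheDS documentation and may differ from the actual patch:

{code}
import org.apache.directory.server.annotations.CreateLdapServer;
import org.apache.directory.server.annotations.CreateTransport;
import org.apache.directory.server.core.annotations.CreateDS;
import org.apache.directory.server.core.integ.AbstractLdapTestUnit;
import org.apache.directory.server.core.integ.FrameworkRunner;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(FrameworkRunner.class)
@CreateDS(allowAnonAccess = true) // embedded, in-process directory service
@CreateLdapServer(transports = {@CreateTransport(protocol = "LDAP")})
public class TestLdapGroupsMappingSketch extends AbstractLdapTestUnit {
  @Test
  public void anonymousLookup() throws Exception {
    // getLdapServer() comes from AbstractLdapTestUnit; point
    // LdapGroupsMapping at getLdapServer().getPort() and assert on the
    // groups resolved for a seeded posix user.
  }
}
{code}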

> Automate testing of LdapGroupsMapping against ApacheDS
> --
>
> Key: HADOOP-8145
> URL: https://issues.apache.org/jira/browse/HADOOP-8145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security, test
>Reporter: Jonathan Natkins
>Assignee: Wei-Chiu Chuang
>  Labels: test
> Attachments: HADOOP-8145.001.patch
>
>
> HADOOP-8078 introduced an ApacheDS system to the automated tests, and the 
> LdapGroupsMapping could benefit from automated testing against that DS 
> instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8145) Automate testing of LdapGroupsMapping against ApacheDS

2016-03-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-8145:

Labels: test  (was: )
Status: Patch Available  (was: Open)

> Automate testing of LdapGroupsMapping against ApacheDS
> --
>
> Key: HADOOP-8145
> URL: https://issues.apache.org/jira/browse/HADOOP-8145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security, test
>Reporter: Jonathan Natkins
>Assignee: Wei-Chiu Chuang
>  Labels: test
> Attachments: HADOOP-8145.001.patch
>
>
> HADOOP-8078 introduced an ApacheDS system to the automated tests, and the 
> LdapGroupsMapping could benefit from automated testing against that DS 
> instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8145) Automate testing of LdapGroupsMapping against ApacheDS

2016-03-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-8145:

Component/s: test

> Automate testing of LdapGroupsMapping against ApacheDS
> --
>
> Key: HADOOP-8145
> URL: https://issues.apache.org/jira/browse/HADOOP-8145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security, test
>Reporter: Jonathan Natkins
>Assignee: Wei-Chiu Chuang
>  Labels: test
> Attachments: HADOOP-8145.001.patch
>
>
> HADOOP-8078 introduced an ApacheDS system to the automated tests, and the 
> LdapGroupsMapping could benefit from automated testing against that DS 
> instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12952) /BUILDING example of zero-docs dist should skip javadocs

2016-03-22 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206706#comment-15206706
 ] 

Akira AJISAKA commented on HADOOP-12952:


+1, thanks Steve!

> /BUILDING example of zero-docs dist should skip javadocs
> 
>
> Key: HADOOP-12952
> URL: https://issues.apache.org/jira/browse/HADOOP-12952
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
> Attachments: HADOOP-12952-001.patch
>
>
> The examples for building distributions include how to create one without any 
> documentation. But it includes the javadoc stage in the build, which is very 
> slow.
> Adding {{-Dmaven.javadoc.skip=true}} skips that phase, and helps round out 
> the parameters to a build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-03-22 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206639#comment-15206639
 ] 

Allen Wittenauer commented on HADOOP-12892:
---

bq. So, is the fix specifying an independent m2 dir?

With all the wrappings that need to go into making that work effectively, yes.  
That at least solves the maven cache problem.

bq. This means we end up downloading the world

Let's not forget that this is for a *release*.  Downloading the world is a 
*good thing*.

BTW, don't forget that HADOOP-12893 still needs to be taken care of as well.

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12892.00.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10965) Incorrect error message by fs -copyFromLocal

2016-03-22 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-10965:

Status: In Progress  (was: Patch Available)

[~andrew.wang] and [~yzhangal], the most common and perplexing mistake a new 
HDFS user can make is forgetting to create the home directory. It can even 
take a seasoned user a little while to realize the mistake. How about 
triggering a new {{HomeDirectoryNotFoundException}} when a relative path is 
accessed and the home directory has not been created? The exception should 
print the home directory path, for 2 reasons: 1) some users may not know the 
format '/user/<username>'; 2) the admin might have changed the home directory 
path template (very unlikely though).

For example:
{code}
$ fs -put f1 f1
put: f1: Home directory '/user/jack' not found
{code}
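
A minimal sketch of where such a check could live; 
{{HomeDirectoryNotFoundException}} does not exist yet, so a plain IOException 
stands in for it here:

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HomeDirCheck {
  // Fail fast, printing the resolved home directory path, before resolving
  // any relative path against a home directory that does not exist.
  public static void checkHomeDirectory(FileSystem fs) throws IOException {
    Path home = fs.getHomeDirectory();
    if (!fs.exists(home)) {
      // The patch would throw a dedicated HomeDirectoryNotFoundException.
      throw new IOException("Home directory '" + home + "' not found");
    }
  }
}
{code}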

> Incorrect error message by fs -copyFromLocal
> 
>
> Key: HADOOP-10965
> URL: https://issues.apache.org/jira/browse/HADOOP-10965
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: André Kelpe
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-10965.001.patch
>
>
> Whenever I try to copy data from local to a cluster, but forget to create the 
> parent directory first, I get a very confusing error message:
> {code}
> $ whoami
> fs111
> $ hadoop fs -ls  /user
> Found 2 items
> drwxr-xr-x   - fs111   supergroup  0 2014-08-11 20:17 /user/hive
> drwxr-xr-x   - vagrant supergroup  0 2014-08-11 19:15 /user/vagrant
> $ hadoop fs -copyFromLocal data data
> copyFromLocal: `data': No such file or directory
> {code}
> From the error message, you would think that the local "data" directory does 
> not exist, but that is not the case. What is missing is the "/user/fs111" 
> directory on HDFS. After I created it, the copyFromLocal command worked fine.
> I believe the error message is confusing and should at least be fixed. Even 
> better would be if hadoop could restore the old 1.x behaviour, where 
> copyFromLocal would just create the directories if they are missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12952) /BUILDING example of zero-docs dist should skip javadocs

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206491#comment-15206491
 ] 

Hadoop QA commented on HADOOP-12952:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 1m 0s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12794778/HADOOP-12952-001.patch
 |
| JIRA Issue | HADOOP-12952 |
| Optional Tests |  asflicense  |
| uname | Linux ef0eeaa170b7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e7ed05e |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8892/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> /BUILDING example of zero-docs dist should skip javadocs
> 
>
> Key: HADOOP-12952
> URL: https://issues.apache.org/jira/browse/HADOOP-12952
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
> Attachments: HADOOP-12952-001.patch
>
>
> The examples for building distributions include how to create one without any 
> documentation. But it includes the javadoc stage in the build, which is very 
> slow.
> Adding {{-Dmaven.javadoc.skip=true}} skips that phase, and helps round out 
> the parameters to a build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12952) /BUILDING example of zero-docs dist should skip javadocs

2016-03-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12952:

Target Version/s: 2.9.0
  Status: Patch Available  (was: Open)

> /BUILDING example of zero-docs dist should skip javadocs
> 
>
> Key: HADOOP-12952
> URL: https://issues.apache.org/jira/browse/HADOOP-12952
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
> Attachments: HADOOP-12952-001.patch
>
>
> The examples for building distributions include how to create one without any 
> documentation. But it includes the javadoc stage in the build, which is very 
> slow.
> Adding {{-Dmaven.javadoc.skip=true}} skips that phase, and helps round out 
> the parameters to a build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12952) /BUILDING example of zero-docs dist should skip javadocs

2016-03-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12952:

Attachment: HADOOP-12952-001.patch

Patch 001. Tested by pasting into a command line

> /BUILDING example of zero-docs dist should skip javadocs
> 
>
> Key: HADOOP-12952
> URL: https://issues.apache.org/jira/browse/HADOOP-12952
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
> Attachments: HADOOP-12952-001.patch
>
>
> The examples for building distributions include how to create one without any 
> documentation. But it includes the javadoc stage in the build, which is very 
> slow.
> Adding {{-Dmaven.javadoc.skip=true}} skips that phase, and helps round out 
> the parameters to a build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12952) /BUILDING example of zero-docs dist should skip javadocs

2016-03-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12952:

Priority: Trivial  (was: Major)

> /BUILDING example of zero-docs dist should skip javadocs
> 
>
> Key: HADOOP-12952
> URL: https://issues.apache.org/jira/browse/HADOOP-12952
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
>
> The examples for building distributions include how to create one without any 
> documentation. But it includes the javadoc stage in the build, which is very 
> slow.
> Adding {{-Dmaven.javadoc.skip=true}} skips that phase, and helps round out 
> the parameters to a build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12952) /BUILDING example of zero-docs dist should skip javadocs

2016-03-22 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12952:
---

 Summary: /BUILDING example of zero-docs dist should skip javadocs
 Key: HADOOP-12952
 URL: https://issues.apache.org/jira/browse/HADOOP-12952
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, documentation
Affects Versions: 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran


The examples for building distributions include how to create one without any 
documentation. But it includes the javadoc stage in the build, which is very 
slow.

Adding {{-Dmaven.javadoc.skip=true}} skips that phase, and helps round out the 
parameters to a build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-03-22 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206390#comment-15206390
 ] 

Vishwajeet Dusane commented on HADOOP-12666:



Participants: [~cnauroth], [~fabbri], [~hanm], [~twu], [~chris.douglas], 
[~vishwajeet.dusane], [~shrikant].
 
*Meeting agenda* 
*   Discuss current comments and outstanding issues on HADOOP-12666
*   Approach of sub-classing webhdfs HDFS-9938
 
*Packaging:* 
* Given the three options that were discussed in HDFS-9938, Chris N mentioned 
he was okay with the comments posted on the Jira. The option to maintain the 
current approach seems to be the best way forward in the short term. This is 
based on the understanding that the current approach is temporary and the 
client will evolve to be independent of WebHDFS.  The Jira will be updated with 
this information.
* The initial approach to use WebHDFS was a good starting point, but the 
reviewers feel that it is good to evolve the ADL client independent of WebHDFS.
* With the current approach, changes in WebHDFS will impact the ADLS client.
* The recommendation was to publish a long term plan of having a solution 
independent of WebHDFS and plan to target 2.9 for the separate ADL client (Long 
term plan)
* Having such a plan would make the community more comfortable accepting the 
current solution.
 
*Create-Append Semantics:*
* Discussed overall create/append semantics; a concern was raised that this 
does not ensure single-writer semantics.
* Chris N did mention that this is a deviation from the HDFS semantics of 
enforcing a single writer, and also stated that this is an approach taken by 
other cloud storage systems as well, e.g. WASB and S3.
* There are some applications that do require this capability; typically these 
applications start writing to the same path on recovery (e.g. HBase). 
* File systems like WASB have made specific updates to address the needs of 
certain applications where handling multiple writers was an issue, e.g. HBase. 
WASB has implemented a specific lease implementation for HBase.
* The ADL client implementation also implements a similar lease semantic for 
HBase, and this is specifically done for createNonRecursive.
 ** It was clarified that the lease id is generated using a GUID, and there 
was agreement on this approach.
 ** This information will be included in a separate document to be uploaded 
to the Jira (HADOOP-12666)
* Chris N did mention that the general guideline for applications is to have 
each instance write data to its separate file and then commit by renaming it.
 
* All accepted comments have been included in the latest patch 
* Buffer Manager Race condition - has been fixed in the latest patch.
 
*Contract test cases for HDFS do not implement ACL-related test cases, since 
none of the file system extensions support them*
** Would need to create new contract tests for ACLs.
 
* Overall, across the reviewers on the call there were no further objections 
to the core patches; reviewers plan to complete one more review of the updated 
patches.
* HADOOP-12875 has been updated with a patch which includes the ability to run 
live tests using contract tests; new test cases have been added.
 
*Followup Action items*
* Share/upload a document that covers 
** information on read semantics, read-ahead buffering, and create/append 
semantics 
** lease id generation (to be included in the document) 
* Share an overall plan on the roadmap for the ADL client - essentially, what 
is the plan for removing the dependency on WebHDFS (a "+1" on the Jira will be 
contingent on publishing this plan). The next step is for reviewers to 
complete the review of the new patch (Aaron to help with Cloudera reviewers)
* Produce a page for alternative file systems
** Document the differences to HDFS; example: HADOOP-12635
* Attach detailed documentation on the file status cache (HADOOP-12876)

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: Create_Read_Hadoop_Adl_Store_Semantics.pdf, 
> HADOOP-12666-002.patch, HADOOP-12666-003.patch, HADOOP-12666-004.patch, 
> HADOOP-12666-005.patch, HADOOP-12666-006.patch, HADOOP-12666-007.patch, 
> HADOOP-12666-008.patch, HADOOP-12666-009.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such 

[jira] [Updated] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-03-22 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-12666:
---
Attachment: Create_Read_Hadoop_Adl_Store_Semantics.pdf

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: Create_Read_Hadoop_Adl_Store_Semantics.pdf, 
> HADOOP-12666-002.patch, HADOOP-12666-003.patch, HADOOP-12666-004.patch, 
> HADOOP-12666-005.patch, HADOOP-12666-006.patch, HADOOP-12666-007.patch, 
> HADOOP-12666-008.patch, HADOOP-12666-009.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc., to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10219) ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient requested shutdowns

2016-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206191#comment-15206191
 ] 

Hadoop QA commented on HADOOP-10219:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 36s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 33s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 49s 
{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 19s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 39s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 57s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Inconsistent synchronization of 
org.apache.hadoop.ipc.Client$Connection.connectingThread; locked 57% of time  
Unsynchronized access at Client.java:57% of time  Unsynchronized access at 
Client.java:[line 1183] |
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || 

[jira] [Commented] (HADOOP-12948) Maven profile startKdc is broken

2016-03-22 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206172#comment-15206172
 ] 

Wei-Chiu Chuang commented on HADOOP-12948:
--

Thanks [~steve_l] for the comment. Yes, it looks like Kerby is the way to go. 
I wasn't paying attention to the latest community trend. :) HADOOP-12911 is 
the related jira to replace MiniKDC with Kerby.

> Maven profile startKdc is broken
> 
>
> Key: HADOOP-12948
> URL: https://issues.apache.org/jira/browse/HADOOP-12948
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Mac OS
>Reporter: Wei-Chiu Chuang
>
> {noformat}
> mvn install -Dtest=TestUGIWithSecurityOn -DstartKdc=true
> main:
>  [exec] xargs: illegal option -- -
>  [exec] usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] 
> [-J replstr]
>  [exec]  [-L number] [-n number [-x]] [-P maxprocs] [-s size]
>  [exec]  [utility [argument ...]]
>  [exec] Result: 1
>   [get] Getting: 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>   [get] To: 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
>   [get] Error getting 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>  to 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 8.448 s
> [INFO] Finished at: 2016-03-21T10:00:56-07:00
> [INFO] Final Memory: 31M/439M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (kdc) on project 
> hadoop-common: An Ant BuildException has occured: 
> java.net.UnknownHostException: newverhost.com
> [ERROR] around Ant part ... dest="/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads"
>  skipexisting="true" verbose="true" 
> src="http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz"/>...
>  @ 7:244 in 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {noformat}
> I'm using a Mac, so part of the reason might be my operating system (even 
> though the pom.xml stated it supported Mac), but the major problem is that 
> it attempted to download apacheds from newverhost.com, which does not seem 
> to exist any more.
> These tests were implemented in HADOOP-8078, and must have -DstartKdc=true 
> in order to run them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12950) ShutdownHookManager should have a timeout for each of the Registered shutdown hook

2016-03-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206121#comment-15206121
 ] 

Steve Loughran commented on HADOOP-12950:
-

This would partially mitigate HADOOP-10219, which is an example of a 
filesystem shutdown hanging.

> ShutdownHookManager should have a timeout for each of the Registered shutdown 
> hook
> --
>
> Key: HADOOP-12950
> URL: https://issues.apache.org/jira/browse/HADOOP-12950
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> HADOOP-8325 added a ShutdownHookManager to be used by different components 
> instead of the JVM shutdown hook. For each of the shutdown hooks 
> registered, we currently don't have an upper bound on its execution time. 
> We have seen the namenode fail to shut down completely (waiting for a 
> shutdown hook to finish after failover) for a long period of time, which 
> breaks the namenode high availability scenarios. This ticket is opened to 
> allow specifying a timeout value for the registered shutdown hooks.
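Purely as an illustration of the proposed behaviour (this is not the 
ShutdownHookManager API; all names below are hypothetical), a hook's runtime 
can be bounded by running it on an executor and waiting with a timeout:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

/** Hypothetical sketch: run a shutdown hook with an upper bound on its runtime. */
public class TimedHookRunner {
  public static void runWithTimeout(Runnable hook, long timeoutSeconds) {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    Future<?> result = executor.submit(hook);
    try {
      result.get(timeoutSeconds, TimeUnit.SECONDS);
    } catch (TimeoutException e) {
      // Hook overran its budget: give up on it so shutdown can proceed.
      result.cancel(true);
      System.err.println("Shutdown hook timed out after "
          + timeoutSeconds + "s; continuing shutdown");
    } catch (Exception e) {
      System.err.println("Shutdown hook failed: " + e);
    } finally {
      executor.shutdownNow();
    }
  }
}
{code}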



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206117#comment-15206117
 ] 

Steve Loughran commented on HADOOP-12909:
-

Well, there's Google's track record on backwards compatibility of the 
protobuf and guava libs to consider there...

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch
>
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls: the calls share a connection, the call requests are sent using a 
> thread pool, and the responses can be out of order. Indeed, a synchronous 
> call is implemented by invoking wait() in the caller thread in order to 
> wait for the server response.
> In this JIRA, we change ipc.Client to support an asynchronous mode. In 
> asynchronous mode, it returns once the request has been sent out and does 
> not wait for the response from the server.
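The pattern being described can be sketched, purely illustratively (this is 
not the actual ipc.Client API; all names here are hypothetical), as a map of 
pending calls completed by the connection's response-reader thread:

{code:java}
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

/** Hypothetical sketch of the async-call pattern described above. */
public class AsyncRpcClientSketch {
  private final AtomicInteger nextCallId = new AtomicInteger();
  // Pending calls, keyed by call id; responses may arrive out of order.
  private final Map<Integer, CompletableFuture<byte[]>> pending =
      new ConcurrentHashMap<>();

  /** Async mode: send the request and return immediately with a future. */
  public CompletableFuture<byte[]> callAsync(byte[] request) {
    int id = nextCallId.incrementAndGet();
    CompletableFuture<byte[]> future = new CompletableFuture<>();
    pending.put(id, future);
    sendOverSharedConnection(id, request);  // returns once the bytes are out
    return future;
  }

  /** Sync mode is just async plus a blocking wait, as the description notes. */
  public byte[] callSync(byte[] request) throws Exception {
    return callAsync(request).get();
  }

  /** Invoked by the connection's reader thread for each response. */
  void onResponse(int callId, byte[] response) {
    CompletableFuture<byte[]> f = pending.remove(callId);
    if (f != null) {
      f.complete(response);
    }
  }

  private void sendOverSharedConnection(int id, byte[] request) {
    // Wire protocol elided in this sketch.
  }
}
{code}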



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12949) Add metrics and HTrace to the s3a connector

2016-03-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12949:

Component/s: fs/s3
Summary: Add metrics and HTrace to the s3a connector  (was: Add HTrace 
to the s3a connector)

> Add metrics and HTrace to the s3a connector
> ---
>
> Key: HADOOP-12949
> URL: https://issues.apache.org/jira/browse/HADOOP-12949
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Madhawa Gunasekara
>
> Hi All, 
> s3, GCS, WASB, and other cloud blob stores are becoming increasingly 
> important in Hadoop. But we don't have distributed tracing for these yet. It 
> would be interesting to add distributed tracing here. It would enable 
> collecting really interesting data like probability distributions of PUT and 
> GET requests to s3 and their impact on MR jobs, etc.
> I would like to implement this feature. Please shed some light on this 
> Thanks,
> Madhawa



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12949) Add HTrace to the s3a connector

2016-03-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206107#comment-15206107
 ] 

Steve Loughran commented on HADOOP-12949:
-

There's actually some metrics collection in openstack swift; look under 
{{org.apache.hadoop.fs.swift.util.DurationStats}}; it logs primarily to 
stdout, listing min, max, (moving) arithmetic mean, and stddev, by HTTP verb.
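A minimal sketch of that kind of per-verb duration tracking (the class and 
method names are illustrative, not the actual swift code; the running stddev 
uses Welford's online method):

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Illustrative per-HTTP-verb duration statistics (not the real DurationStats). */
public class VerbDurationStats {
  private long count;
  private long min = Long.MAX_VALUE;
  private long max = Long.MIN_VALUE;
  private double mean;
  private double m2;  // sum of squared deviations (Welford's method)

  public synchronized void add(long durationMs) {
    count++;
    min = Math.min(min, durationMs);
    max = Math.max(max, durationMs);
    double delta = durationMs - mean;
    mean += delta / count;
    m2 += delta * (durationMs - mean);
  }

  public synchronized double stddev() {
    return count > 1 ? Math.sqrt(m2 / (count - 1)) : 0.0;
  }

  @Override
  public synchronized String toString() {
    return String.format("n=%d min=%dms max=%dms mean=%.1fms stddev=%.1fms",
        count, min, max, mean, stddev());
  }

  public static void main(String[] args) {
    // One stats object per HTTP verb, logged to stdout on demand.
    Map<String, VerbDurationStats> byVerb = new HashMap<>();
    byVerb.computeIfAbsent("GET", v -> new VerbDurationStats()).add(42);
    byVerb.computeIfAbsent("GET", v -> new VerbDurationStats()).add(58);
    System.out.println("GET " + byVerb.get("GET"));
  }
}
{code}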

# It's pretty low cost to do this; even when HTrace sampling is inactive, the 
stats for an FS can be collected.
# The stats showed that rackspace UK throttles delete requests; the more 
files in a directory I was cleaning up on teardown, the longer it took, 
growing exponentially rather than linearly.
# I didn't hook the code up to the normal hadoop metrics; it's something I'd 
add as an option now, because it does become something you need to monitor 
now that we are shifting to longer-lived applications.
# I'd add more on the causes of operations, specifically: open(), seek(), 
duration of close(), delete() ... things where the fact that object stores 
are generally O(files*data) means they don't work as expected; finding that 
mismatch of expectations matters.

More and more object stores are coming in. While s3 is the main one, it'd be 
good to keep the core stuff store-neutral. The classes from hadoop-openstack 
can be moved if that helps; the per-verb stats are useful at the deep levels, 
while HTrace monitoring can track the cost of specific actions.



> Add HTrace to the s3a connector
> ---
>
> Key: HADOOP-12949
> URL: https://issues.apache.org/jira/browse/HADOOP-12949
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Madhawa Gunasekara
>
> Hi All, 
> s3, GCS, WASB, and other cloud blob stores are becoming increasingly 
> important in Hadoop. But we don't have distributed tracing for these yet. It 
> would be interesting to add distributed tracing here. It would enable 
> collecting really interesting data like probability distributions of PUT and 
> GET requests to s3 and their impact on MR jobs, etc.
> I would like to implement this feature. Please shed some light on this 
> Thanks,
> Madhawa



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12948) Maven profile startKdc is broken

2016-03-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206089#comment-15206089
 ] 

Steve Loughran commented on HADOOP-12948:
-

+1 for the move, but note that MiniKDC has its own problems; it doesn't 
really like >1 principal.

This is something which Kerby could help with.

> Maven profile startKdc is broken
> 
>
> Key: HADOOP-12948
> URL: https://issues.apache.org/jira/browse/HADOOP-12948
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Mac OS
>Reporter: Wei-Chiu Chuang
>
> {noformat}
> mvn install -Dtest=TestUGIWithSecurityOn -DstartKdc=true
> main:
>  [exec] xargs: illegal option -- -
>  [exec] usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] 
> [-J replstr]
>  [exec]  [-L number] [-n number [-x]] [-P maxprocs] [-s size]
>  [exec]  [utility [argument ...]]
>  [exec] Result: 1
>   [get] Getting: 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>   [get] To: 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
>   [get] Error getting 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>  to 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 8.448 s
> [INFO] Finished at: 2016-03-21T10:00:56-07:00
> [INFO] Final Memory: 31M/439M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (kdc) on project 
> hadoop-common: An Ant BuildException has occured: 
> java.net.UnknownHostException: newverhost.com
> [ERROR] around Ant part ... dest="/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads"
>  skipexisting="true" verbose="true" 
> src="http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz"/>...
>  @ 7:244 in 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {noformat}
> I'm using a Mac, so part of the reason might be my operating system (even 
> though the pom.xml stated it supported Mac), but the major problem is that 
> it attempted to download apacheds from newverhost.com, which does not seem 
> to exist any more.
> These tests were implemented in HADOOP-8078, and must have -DstartKdc=true 
> in order to run them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12924) Add default coder key for creating raw coders

2016-03-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205952#comment-15205952
 ] 

Kai Zheng commented on HADOOP-12924:


Ping [~andrew.wang], in case needed.

> Add default coder key for creating raw coders
> -
>
> Key: HADOOP-12924
> URL: https://issues.apache.org/jira/browse/HADOOP-12924
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Minor
> Attachments: HADOOP-12924.1.patch
>
>
> As suggested 
> [here|https://issues.apache.org/jira/browse/HADOOP-12826?focusedCommentId=15194402=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15194402].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12924) Add default coder key for creating raw coders

2016-03-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205947#comment-15205947
 ] 

Kai Zheng commented on HADOOP-12924:


OK, it won't be bad to have a policy for the legacy coder, since we would 
have one for the dummy coder. So what's needed for now would be a mapping 
between {{codec-name}} and {{raw-coder-factory-class}}. In the original 
design from HDFS-7337 this would be done by {{ErasureCodec}}, but for phase 1 
and right now, for simplicity, we can have a simple enum or hard-coded 
mapping somewhere. For the legacy and dummy policies, we just hard-code the 
raw-coder-factory class; for the default RS policy, in addition to the 
hard-coded default raw coder, we allow configuring the class just as the 
current code does, using the default coder key. Sounds good?
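A minimal sketch of such a hard-coded mapping (all class names and the config 
key below are hypothetical placeholders, not the actual Hadoop identifiers):

{code:java}
/** Hypothetical hard-coded mapping from codec name to raw coder factory class. */
public enum CodecCoderMapping {
  // Legacy and dummy policies: the factory class is fixed.
  LEGACY_XOR("xor-legacy", "org.example.coder.LegacyXORRawErasureCoderFactory"),
  DUMMY("dummy", "org.example.coder.DummyRawErasureCoderFactory"),
  // Default RS policy: the factory class may be overridden via a config key.
  RS_DEFAULT("rs-default", "org.example.coder.RSRawErasureCoderFactory");

  public static final String DEFAULT_CODER_KEY =
      "io.erasurecode.codec.rs-default.rawcoder";  // illustrative key name

  private final String codecName;
  private final String defaultFactoryClass;

  CodecCoderMapping(String codecName, String defaultFactoryClass) {
    this.codecName = codecName;
    this.defaultFactoryClass = defaultFactoryClass;
  }

  /** Resolve the factory class, honoring the config override for RS only. */
  public String resolveFactoryClass(org.apache.hadoop.conf.Configuration conf) {
    if (this == RS_DEFAULT) {
      return conf.get(DEFAULT_CODER_KEY, defaultFactoryClass);
    }
    return defaultFactoryClass;  // legacy/dummy are hard-coded
  }

  public static CodecCoderMapping fromCodecName(String name) {
    for (CodecCoderMapping m : values()) {
      if (m.codecName.equals(name)) {
        return m;
      }
    }
    throw new IllegalArgumentException("Unknown codec: " + name);
  }
}
{code}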

> Add default coder key for creating raw coders
> -
>
> Key: HADOOP-12924
> URL: https://issues.apache.org/jira/browse/HADOOP-12924
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Minor
> Attachments: HADOOP-12924.1.patch
>
>
> As suggested 
> [here|https://issues.apache.org/jira/browse/HADOOP-12826?focusedCommentId=15194402=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15194402].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12924) Add default coder key for creating raw coders

2016-03-22 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205942#comment-15205942
 ] 

Rui Li commented on HADOOP-12924:
-

Actually, we also have dummy raw coders, which I think should work with any 
EC policy. Maybe we need a more thorough discussion about the relationship 
between coders and policies.

> Add default coder key for creating raw coders
> -
>
> Key: HADOOP-12924
> URL: https://issues.apache.org/jira/browse/HADOOP-12924
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Minor
> Attachments: HADOOP-12924.1.patch
>
>
> As suggested 
> [here|https://issues.apache.org/jira/browse/HADOOP-12826?focusedCommentId=15194402=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15194402].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12924) Add default coder key for creating raw coders

2016-03-22 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205856#comment-15205856
 ] 

Zhe Zhang commented on HADOOP-12924:


Thanks for the thoughts, Kai. Simplifying the logic is a valid concern. 
However, if we have the legacy HDFS-RAID coder in the Hadoop codebase, 
there's always a possibility for some user to use it to encode files. Then, 
by looking at the file metadata, there's no way to determine whether the 
encoding was done with the legacy or the new Java coder.

I think we should either add the legacy coder as a policy, or take it out of 
the codebase and make it an external tool used only for migrating legacy 
HDFS-RAID data into HDFS-EC.

Actually, how could our current legacy coder be used to migrate legacy 
cluster data? IIUC, HDFS-RAID was developed in Facebook's private branch 
based on upstream version 0.21.0. Is it even possible to run Hadoop 3.0 
software on blocks created by 0.21.0? I think an easier way to migrate is to 
use the original HDFS-RAID code to read the data out and write it into 
HDFS-EC.

> Add default coder key for creating raw coders
> -
>
> Key: HADOOP-12924
> URL: https://issues.apache.org/jira/browse/HADOOP-12924
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Minor
> Attachments: HADOOP-12924.1.patch
>
>
> As suggested 
> [here|https://issues.apache.org/jira/browse/HADOOP-12826?focusedCommentId=15194402=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15194402].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)