[jira] [Commented] (HADOOP-9881) Some questions and possible improvement for MiniKdc/TestMiniKdc

2013-08-16 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742824#comment-13742824
 ] 

Kai Zheng commented on HADOOP-9881:
---

Thanks for your answers.

Let me repeat one question. Since MiniKdc can generate a keytab and then 
perform a keytab login, is it possible to do similar work to what kinit does 
and automatically generate a ticket cache for a specified principal? That 
would be useful for testing the USER_KERBEROS_OPTIONS path in UGI.

> Some questions and possible improvement for MiniKdc/TestMiniKdc
> ---
>
> Key: HADOOP-9881
> URL: https://issues.apache.org/jira/browse/HADOOP-9881
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Kai Zheng
>
> In org.apache.hadoop.minikdc.TestMiniKdc:
> # In testKeytabGen(), a comment says ??principals use \ instead of /??. Does 
> this mean a principal must use \ instead of / to use MiniKdc in test cases? 
> If so, should *HADOOP_SECURITY_AUTH_TO_LOCAL* account for this?
> # In testKerberosLogin(), what is the intended difference between client 
> login and server login? I see the isInitiator option is set to true or false 
> respectively, but I am not sure about that.
> # In both client login and server login, why is loginContext.login() called 
> again at the end? Perhaps it should be loginContext.logout().
> # It also considers the IBM JDK. Referring to the current UGI implementation, 
> it looks like it needs to set the KRB5CCNAME system property and the 
> useDefaultCcache option specifically.
> It is good to test login using a keytab, as the currently provided facility 
> and test do. Is it also possible to test login via a ticket cache, or to 
> automatically generate a ticket cache for a specified principal without 
> executing kinit? This is important for covering user Kerberos login (with 
> kinit), if possible, using MiniKdc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9881) Some questions and possible improvement for MiniKdc/TestMiniKdc

2013-08-16 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742805#comment-13742805
 ] 

Alejandro Abdelnur commented on HADOOP-9881:


On #1: no, you use "/" as usual. You need "\\" only if you are using the 
ApacheDS KeytabDecoder to read a keytab, which you normally will not do.

On #2: the test exercises both a client session and a server session; this is 
standard Java GSS.

On #3: you are correct, it should be login/logout then login/logout. Thanks 
for catching that.

On #4: it has not been tested with the IBM JDK; the code was just copied from 
hadoop-auth and may need to be corrected.

Regarding running kinit from the command line: that cannot be done with 
MiniKdc, because MiniKdc starts and stops with the test, so you cannot run 
kinit against a KDC that does not yet exist. However, from Hadoop's 
perspective, if you do a LoginContext login as the MiniKdc test cases do, you 
are not using the UGI keytab login but an existing session, which is 
equivalent to a command-line kinit; you just have to make sure you do a 
Subject.doAs(subject, PrivilegedAction) to test the Hadoop code.
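The Subject.doAs mechanics described above can be sketched with plain JDK 
classes. This is a minimal illustration only: the Subject here is built by 
hand with a hypothetical principal name, whereas in a real test it would come 
from loginContext.getSubject() after logging in against MiniKdc.

```java
import java.security.PrivilegedAction;
import java.util.Collections;
import javax.security.auth.Subject;
import javax.security.auth.kerberos.KerberosPrincipal;

public class DoAsSketch {

    // Hand-built Subject; in a real test this would be
    // loginContext.getSubject() after a login against MiniKdc.
    static Subject demoSubject() {
        return new Subject(
                true,
                Collections.singleton(new KerberosPrincipal("client@EXAMPLE.COM")),
                Collections.emptySet(),
                Collections.emptySet());
    }

    // Run an action as the given subject and report which principal it ran as.
    static String principalUnder(Subject subject) {
        return Subject.doAs(subject, (PrivilegedAction<String>) () ->
                subject.getPrincipals().iterator().next().getName());
    }

    public static void main(String[] args) {
        System.out.println(principalUnder(demoSubject()));  // client@EXAMPLE.COM
    }
}
```

In the Hadoop case, the PrivilegedAction body would invoke the client code 
under test so that it runs with the logged-in Kerberos identity.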

I think in this JIRA we can take care of #3 and #4.





[jira] [Commented] (HADOOP-9639) truly shared cache for jars (jobjar/libjar)

2013-08-16 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742703#comment-13742703
 ] 

Omkar Vinit Joshi commented on HADOOP-9639:
---

Something similar is tracked in YARN-1020.

> truly shared cache for jars (jobjar/libjar)
> ---
>
> Key: HADOOP-9639
> URL: https://issues.apache.org/jira/browse/HADOOP-9639
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: filecache
>Affects Versions: 2.0.4-alpha
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>
> Currently there is the distributed cache that enables you to cache jars and 
> files so that attempts from the same job can reuse them. However, sharing is 
> limited with the distributed cache because it is normally on a per-job basis. 
> On a large cluster, sometimes copying of jobjars and libjars becomes so 
> prevalent that it consumes a large portion of the network bandwidth, not to 
> speak of defeating the purpose of "bringing compute to where data is". This 
> is wasteful because in most cases code doesn't change much across many jobs.
> I'd like to propose and discuss feasibility of introducing a truly shared 
> cache so that multiple jobs from multiple users can share and cache jars. 
> This JIRA is to open the discussion.



[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and provide negotiation capabilities

2013-08-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742686#comment-13742686
 ] 

Hudson commented on HADOOP-9421:


SUCCESS: Integrated in Hadoop-trunk-Commit #4285 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4285/])
HADOOP-9880. SASL changes from HADOOP-9421 breaks Secure HA NN. Contributed by 
Daryn Sharp. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514913)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenSecretManager.java


> Convert SASL to use ProtoBuf and provide negotiation capabilities
> -
>
> Key: HADOOP-9421
> URL: https://issues.apache.org/jira/browse/HADOOP-9421
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.0.3-alpha
>Reporter: Sanjay Radia
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0-beta, 2.3.0
>
> Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
> HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
> HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch
>
>




[jira] [Commented] (HADOOP-9880) SASL changes from HADOOP-9421 breaks Secure HA NN

2013-08-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742687#comment-13742687
 ] 

Hudson commented on HADOOP-9880:


SUCCESS: Integrated in Hadoop-trunk-Commit #4285 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4285/])
HADOOP-9880. SASL changes from HADOOP-9421 breaks Secure HA NN. Contributed by 
Daryn Sharp. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514913)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenSecretManager.java


> SASL changes from HADOOP-9421 breaks Secure HA NN 
> --
>
> Key: HADOOP-9880
> URL: https://issues.apache.org/jira/browse/HADOOP-9880
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Kihwal Lee
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 2.1.1-beta
>
> Attachments: HADOOP-9880.patch
>
>
> buildSaslNegotiateResponse() will create a SaslRpcServer with TOKEN auth. 
> When create() is called against it, secretManager.checkAvailableForRead() is 
> called, which fails in HA standby. Thus HA standby nodes cannot be 
> transitioned to active.



[jira] [Updated] (HADOOP-9880) SASL changes from HADOOP-9421 breaks Secure HA NN

2013-08-16 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-9880:
--

   Resolution: Fixed
Fix Version/s: 2.1.1-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this to trunk, branch-2 and branch-2.1-beta.



[jira] [Commented] (HADOOP-9848) Create a MiniKDC for use with security testing

2013-08-16 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742673#comment-13742673
 ] 

Kai Zheng commented on HADOOP-9848:
---

HADOOP-9881 was opened to document some questions and possible improvements 
for MiniKdc/TestMiniKdc. Would you comment on that? Thanks.

> Create a MiniKDC for use with security testing
> --
>
> Key: HADOOP-9848
> URL: https://issues.apache.org/jira/browse/HADOOP-9848
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security, test
>Reporter: Wei Yan
>Assignee: Wei Yan
> Fix For: 2.3.0
>
> Attachments: HADOOP-9848.patch, HADOOP-9848.patch, HADOOP-9848.patch, 
> HADOOP-9848.patch, HADOOP-9848.patch
>
>
> Create a MiniKDC using Apache Directory Server. MiniKDC builds an embedded 
> KDC (key distribution center) and allows principals and keytabs to be 
> created on the fly. MiniKDC can be integrated into Hadoop security unit 
> testing.
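A rough sketch of how a test might use the facility described above, assuming 
the hadoop-minikdc artifact is on the classpath; the work directory, keytab 
file name, and principal names are illustrative, not prescribed:

```java
import java.io.File;
import java.util.Properties;
import org.apache.hadoop.minikdc.MiniKdc;

public class MiniKdcExample {
    public static void main(String[] args) throws Exception {
        // Working directory for the embedded KDC's files.
        File workDir = new File("target/minikdc-work");
        workDir.mkdirs();

        // Start an embedded KDC with the default configuration.
        Properties conf = MiniKdc.createConf();
        MiniKdc kdc = new MiniKdc(conf, workDir);
        kdc.start();
        try {
            // Create a keytab holding two principals on the fly.
            File keytab = new File(workDir, "test.keytab");
            kdc.createPrincipal(keytab, "client", "HTTP/localhost");
            System.out.println("KDC realm: " + kdc.getRealm());
            // ... run Kerberos login tests against the embedded KDC ...
        } finally {
            kdc.stop();  // the KDC lives and dies with the test
        }
    }
}
```

Because the KDC is started and stopped inside the test, no external Kerberos 
infrastructure is needed.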



[jira] [Commented] (HADOOP-9880) SASL changes from HADOOP-9421 breaks Secure HA NN

2013-08-16 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742666#comment-13742666
 ] 

Sanjay Radia commented on HADOOP-9880:
--

+1, I will commit in a few minutes; thanks Daryn.



[jira] [Created] (HADOOP-9881) Some questions and possible improvement for MiniKdc/TestMiniKdc

2013-08-16 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-9881:
-

 Summary: Some questions and possible improvement for 
MiniKdc/TestMiniKdc
 Key: HADOOP-9881
 URL: https://issues.apache.org/jira/browse/HADOOP-9881
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng


In org.apache.hadoop.minikdc.TestMiniKdc:
# In testKeytabGen(), a comment says ??principals use \ instead of /??. Does 
this mean a principal must use \ instead of / to use MiniKdc in test cases? 
If so, should *HADOOP_SECURITY_AUTH_TO_LOCAL* account for this?
# In testKerberosLogin(), what is the intended difference between client login 
and server login? I see the isInitiator option is set to true or false 
respectively, but I am not sure about that.
# In both client login and server login, why is loginContext.login() called 
again at the end? Perhaps it should be loginContext.logout().
# It also considers the IBM JDK. Referring to the current UGI implementation, 
it looks like it needs to set the KRB5CCNAME system property and the 
useDefaultCcache option specifically.

It is good to test login using a keytab, as the currently provided facility 
and test do. Is it also possible to test login via a ticket cache, or to 
automatically generate a ticket cache for a specified principal without 
executing kinit? This is important for covering user Kerberos login (with 
kinit), if possible, using MiniKdc.




[jira] [Commented] (HADOOP-9880) SASL changes from HADOOP-9421 breaks Secure HA NN

2013-08-16 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742651#comment-13742651
 ] 

Sanjay Radia commented on HADOOP-9880:
--

We also applied the patch and it works.
Yes, it is unwrapping the exception; I did not read it carefully last night.
We applied the test from HDFS-3083 and that test passes.



[jira] [Updated] (HADOOP-9880) SASL changes from HADOOP-9421 breaks Secure HA NN

2013-08-16 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-9880:
---

Target Version/s: 2.1.1-beta  (was: 2.1.0-beta)



[jira] [Commented] (HADOOP-9745) TestZKFailoverController test fails

2013-08-16 Thread Elizabeth Thomas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742639#comment-13742639
 ] 

Elizabeth Thomas commented on HADOOP-9745:
--

As described in http://wiki.apache.org/hadoop/HowToContribute, I would like 
to run all the unit tests before constructing my patch. Please help with your 
insights.

> TestZKFailoverController test fails
> ---
>
> Key: HADOOP-9745
> URL: https://issues.apache.org/jira/browse/HADOOP-9745
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jean-Baptiste Onofré
>
>   - 
> testGracefulFailoverFailBecomingActive(org.apache.hadoop.ha.TestZKFailoverController):
>  Did not fail to graceful failover when target failed to become active!
>   - 
> testGracefulFailoverFailBecomingStandby(org.apache.hadoop.ha.TestZKFailoverController):
>  expected:<1> but was:<0>
>   - 
> testGracefulFailoverFailBecomingStandbyAndFailFence(org.apache.hadoop.ha.TestZKFailoverController):
>  Failover should have failed when old node wont fence



[jira] [Commented] (HADOOP-9745) TestZKFailoverController test fails

2013-08-16 Thread Elizabeth Thomas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742636#comment-13742636
 ] 

Elizabeth Thomas commented on HADOOP-9745:
--

[~sureshms] and [~jbonofre]

I am attempting to build the Hadoop source code and run all the unit tests 
before I start work on the trivial Hadoop issues.

Could you please provide pointers on how to get "mvn test" or "mvn install" 
to run successfully for the source code checked out from trunk, ignoring 
these errors/failures?

Can these failures safely be ignored?
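For questions like the one above, a few standard Maven and Surefire options 
are commonly used; the module path below is illustrative:

```shell
# Build everything without running the tests at all.
mvn install -DskipTests

# Run the tests but keep going past failures, so one failing
# suite does not abort the whole build.
mvn test -Dmaven.test.failure.ignore=true

# Re-run just the failing suite inside its module.
cd hadoop-common-project/hadoop-common
mvn test -Dtest=TestZKFailoverController
```

These do not fix the underlying failures; they only let the rest of the build 
proceed while a flaky or broken test is investigated.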



[jira] [Comment Edited] (HADOOP-9745) TestZKFailoverController test fails

2013-08-16 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742627#comment-13742627
 ] 

Suresh Srinivas edited comment on HADOOP-9745 at 8/16/13 9:42 PM:
--

bq. You closed bunch of bugs as fixed. Is it because the issues you observed no 
longer happen?
Earlier my question was not answered. Now that I understand the reason for 
closing the issues, the resolution is incorrect; it should be "Not a Problem". 
I am updating the resolution.

If, based on the ongoing discussion, this issue is found to be a real 
problem, please reopen the JIRA.

  was (Author: sureshms):
bq. You closed bunch of bugs as fixed. Is it because the issues you 
observed no longer happen?
Earlier my question was not answered. Now that, I understand the reason for 
closing the issues, the resolution is incorrect. It should be not a problem. I 
am updating the resolution.

If this issue, based on discussions that is going on, is found to be a real 
issue, please re-open the jira again.
  


[jira] [Resolved] (HADOOP-9745) TestZKFailoverController test fails

2013-08-16 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-9745.
-

Resolution: Not A Problem



[jira] [Reopened] (HADOOP-9745) TestZKFailoverController test fails

2013-08-16 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas reopened HADOOP-9745:
-




[jira] [Commented] (HADOOP-9745) TestZKFailoverController test fails

2013-08-16 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742627#comment-13742627
 ] 

Suresh Srinivas commented on HADOOP-9745:
-

bq. You closed bunch of bugs as fixed. Is it because the issues you observed no 
longer happen?
Earlier my question was not answered. Now that I understand the reason for 
closing the issues, the resolution is incorrect; it should be "Not a Problem". 
I am updating the resolution.

If, based on the ongoing discussion, this issue is found to be a real 
problem, please reopen the JIRA.



[jira] [Commented] (HADOOP-9880) SASL changes from HADOOP-9421 breaks Secure HA NN

2013-08-16 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742613#comment-13742613
 ] 

Kihwal Lee commented on HADOOP-9880:


bq. However, the client-side does know how to deal with StandbyException (ie it 
tries on the other side). So we need to fix the client side to catch the 
InvalidToken unwrap the cause and then retry.

Doesn't this patch already unwrap the InvalidToken from the server and throw 
the cause if it is set? I thought clients would therefore get the 
StandbyException. Please correct me if I am wrong.

I've deployed a secure HA cluster with this patch and it appears to be working.
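The unwrap-and-check behavior being discussed might look roughly like the 
sketch below. The exception classes here are hypothetical stand-ins defined 
locally for illustration, not Hadoop's actual InvalidToken and 
StandbyException types:

```java
// Hypothetical stand-ins for the exception types in the discussion.
class InvalidTokenException extends Exception {
    InvalidTokenException(Throwable cause) { super(cause); }
}

class StandbyException extends Exception {}

public class UnwrapRetrySketch {

    // Decide whether a token failure should trigger failover to the other
    // NameNode: unwrap the InvalidToken and inspect its cause.
    static boolean shouldFailover(Exception e) {
        if (e instanceof InvalidTokenException) {
            return e.getCause() instanceof StandbyException;
        }
        return false;
    }

    public static void main(String[] args) {
        Exception fromStandby =
                new InvalidTokenException(new StandbyException());
        Exception realTokenError =
                new InvalidTokenException(new IllegalStateException("bad token"));
        System.out.println(shouldFailover(fromStandby));    // failover and retry
        System.out.println(shouldFailover(realTokenError)); // surface the error
    }
}
```

The point of the pattern is that a wrapped StandbyException means "try the 
other NameNode", while any other cause is a genuine token failure.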



[jira] [Commented] (HADOOP-9743) TestStaticMapping test fails

2013-08-16 Thread Elizabeth Thomas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742578#comment-13742578
 ] 

Elizabeth Thomas commented on HADOOP-9743:
--

[~jbonofre] I still see these errors while doing a "mvn install" on the 
source code checked out from trunk.
Could you please provide pointers on how to resolve them? Is there a patch to 
apply?

> TestStaticMapping test fails
> 
>
> Key: HADOOP-9743
> URL: https://issues.apache.org/jira/browse/HADOOP-9743
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jean-Baptiste Onofré
>
>   - testCachingRelaysResolveQueries(org.apache.hadoop.net.TestStaticMapping): 
> Expected two entries in the map Mapping: cached switch mapping relaying to 
> static mapping with single switch = false(..)
>   - 
> testCachingCachesNegativeEntries(org.apache.hadoop.net.TestStaticMapping): 
> Expected two entries in the map Mapping: cached switch mapping relaying to 
> static mapping with single switch = false(..)



[jira] [Updated] (HADOOP-9843) Backport TestDiskChecker to branch-1.

2013-08-16 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9843:
--

Status: Open  (was: Patch Available)

> Backport TestDiskChecker to branch-1.
> -
>
> Key: HADOOP-9843
> URL: https://issues.apache.org/jira/browse/HADOOP-9843
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test, util
>Affects Versions: 1.3.0
>Reporter: Chris Nauroth
>Assignee: Kousuke Saruta
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-9843.patch
>
>
> In trunk, we have the {{TestDiskChecker}} test suite to cover the code in 
> {{DiskChecker}}.  It would be good to backport this test suite to branch-1 
> and branch-1-win to get coverage of the code in those branches too.



[jira] [Updated] (HADOOP-9843) Backport TestDiskChecker to branch-1.

2013-08-16 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9843:
--

 Target Version/s: 1.3.0  (was: 1-win, 1.3.0)
Affects Version/s: (was: 1-win)

bq. I found TestDiskChecker have already been backported so the affects/target 
version may be only branch-1(1.3.0).

Yes, you're correct that it's already in branch-1-win.  I had been looking at 
an out-of-date fork.  I've changed the version to just 1.3.0.  Thanks!

bq. I have never seen the NullPointerException on Linux.

This is definitely throwing NPE on every run for me.  Can you try running the 
tests yourself again?  I know it appears that the patch passed Jenkins, but I'm 
really confused about what Jenkins did with this patch.  Jenkins generally only 
handles trunk patches, so I'm not sure how or why it replied with +1.


> Backport TestDiskChecker to branch-1.
> -
>
> Key: HADOOP-9843
> URL: https://issues.apache.org/jira/browse/HADOOP-9843
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test, util
>Affects Versions: 1.3.0
>Reporter: Chris Nauroth
>Assignee: Kousuke Saruta
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-9843.patch
>
>
> In trunk, we have the {{TestDiskChecker}} test suite to cover the code in 
> {{DiskChecker}}.  It would be good to backport this test suite to branch-1 
> and branch-1-win to get coverage of the code in those branches too.



[jira] [Commented] (HADOOP-9745) TestZKFailoverController test fails

2013-08-16 Thread Elizabeth Thomas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742562#comment-13742562
 ] 

Elizabeth Thomas commented on HADOOP-9745:
--

Oops, I tagged the wrong resolver. Tagging the correct resolver, [~jbonofre].

Jean, could you please provide pointers? I still see these failures in test cases 
when I do a "mvn install" on the source code.

> TestZKFailoverController test fails
> ---
>
> Key: HADOOP-9745
> URL: https://issues.apache.org/jira/browse/HADOOP-9745
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jean-Baptiste Onofré
>
>   - 
> testGracefulFailoverFailBecomingActive(org.apache.hadoop.ha.TestZKFailoverController):
>  Did not fail to graceful failover when target failed to become active!
>   - 
> testGracefulFailoverFailBecomingStandby(org.apache.hadoop.ha.TestZKFailoverController):
>  expected:<1> but was:<0>
>   - 
> testGracefulFailoverFailBecomingStandbyAndFailFence(org.apache.hadoop.ha.TestZKFailoverController):
>  Failover should have failed when old node wont fence



[jira] [Commented] (HADOOP-9742) TestTableMapping test fails

2013-08-16 Thread Elizabeth Thomas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742565#comment-13742565
 ] 

Elizabeth Thomas commented on HADOOP-9742:
--

[~jbonofre] I still see these errors while doing a "mvn install" on the source 
code checked out from trunk.

Could you please share how you resolved them? Does any service need to be running 
on Ubuntu for these test cases to pass?

> TestTableMapping test fails
> ---
>
> Key: HADOOP-9742
> URL: https://issues.apache.org/jira/browse/HADOOP-9742
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jean-Baptiste Onofré
>
>   - testResolve(org.apache.hadoop.net.TestTableMapping): expected: 
> but was:
>   - testTableCaching(org.apache.hadoop.net.TestTableMapping): 
> expected: but was:
>   - testClearingCachedMappings(org.apache.hadoop.net.TestTableMapping): 
> expected: but was:



[jira] [Commented] (HADOOP-9877) hadoop fsshell can not ls .snapshot dir after HADOOP-9817

2013-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742557#comment-13742557
 ] 

Hadoop QA commented on HADOOP-9877:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12598498/HADOOP-9877.v4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2995//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2995//console

This message is automatically generated.

> hadoop fsshell can not ls .snapshot dir after HADOOP-9817
> -
>
> Key: HADOOP-9877
> URL: https://issues.apache.org/jira/browse/HADOOP-9877
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HADOOP-9877.v1.patch, HADOOP-9877.v2.patch, 
> HADOOP-9877.v3.patch, HADOOP-9877.v4.patch
>
>
> {code}
> decster:~/hadoop> bin/hadoop fs -ls "/foo/.snapshot"
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
> ls: `/foo/.snapshot': No such file or directory
> {code}
> HADOOP-9817 refactored some globStatus code but forgot to handle the special 
> case that the .snapshot dir does not show up in listStatus yet still exists, so 
> we need to explicitly check path existence using getFileStatus rather than 
> depending on listStatus results.
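The fallback described above (probe the path directly when it is absent from the parent listing) can be sketched with simplified stand-ins; `SnapshotProbe` and its `listStatus`/`getFileStatus` methods below are hypothetical mocks for illustration, not Hadoop's actual FileSystem or Globber code.

```java
import java.io.FileNotFoundException;
import java.util.Arrays;
import java.util.List;

public class SnapshotProbe {
    // Stand-in for FileSystem.listStatus on "/foo": ".snapshot" is hidden.
    static List<String> listStatus(String dir) {
        return Arrays.asList("/foo/bar");
    }

    // Stand-in for FileSystem.getFileStatus: throws only if truly absent.
    static String getFileStatus(String path) throws FileNotFoundException {
        if (path.equals("/foo/.snapshot") || path.equals("/foo/bar")) {
            return path;
        }
        throw new FileNotFoundException(path);
    }

    // Existence check in the spirit of the fix: when the entry is missing
    // from the listing, fall back to getFileStatus before concluding that
    // the path does not exist.
    static boolean exists(String dir, String path) {
        if (listStatus(dir).contains(path)) {
            return true;
        }
        try {
            getFileStatus(path);
            return true;   // hidden from the listing but present
        } catch (FileNotFoundException e) {
            return false;  // truly absent
        }
    }

    public static void main(String[] args) {
        System.out.println(exists("/foo", "/foo/.snapshot")); // true
        System.out.println(exists("/foo", "/foo/none"));      // false
    }
}
```

Relying on `listStatus` alone is exactly what produced the `No such file or directory` error in the report above.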



[jira] [Updated] (HADOOP-9873) hadoop-env.sh got called multiple times

2013-08-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9873:
--

Affects Version/s: 3.0.0

> hadoop-env.sh got called multiple times
> ---
>
> Key: HADOOP-9873
> URL: https://issues.apache.org/jira/browse/HADOOP-9873
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Attachments: HADOOP-9873.patch
>
>
> As shown below, hadoop-env.sh gets called multiple times when running 
> something like 'hadoop-daemon.sh start namenode'.
> {noformat}
> [drankye@zkdev ~]$ cd $HADOOP_PREFIX
> [drankye@zkdev hadoop-3.0.0-SNAPSHOT]$ grep -r hadoop-env *
> libexec/hadoop-config.sh:if [ -e "${HADOOP_PREFIX}/conf/hadoop-env.sh" ]; then
> libexec/hadoop-config.sh:if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
> libexec/hadoop-config.sh:  . "${HADOOP_CONF_DIR}/hadoop-env.sh"
> sbin/hadoop-daemon.sh:if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
> sbin/hadoop-daemon.sh:  . "${HADOOP_CONF_DIR}/hadoop-env.sh"
> {noformat}
> Considering the following lines in hadoop-env.sh
> {code}
> # Command specific options appended to HADOOP_OPTS when specified
> export 
> HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS}
>  -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} 
> $HADOOP_NAMENODE_OPTS"
> {code}
> When called multiple times, it may end up with a redundant result like the one below.
> {noformat}
> HADOOP_NAMENODE_OPTS='-Dhadoop.security.logger=INFO,RFAS 
> -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS 
> -Dhdfs.audit.logger=INFO,NullAppender '
> {noformat}
> It's not a big issue for now, but it would be cleaner to avoid it, since the 
> repetition can make the final Java command line very lengthy and hard to read.
> A possible fix would be to add a flag variable like HADOOP_ENV_INITED in 
> hadoop-env.sh and check it at the beginning of the script; if the flag 
> evaluates to true, return immediately.
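The flag-based guard suggested above could be sketched as follows. HADOOP_ENV_INITED is the hypothetical flag name from the description; this snippet is an illustration, not the actual hadoop-env.sh.

```shell
# Hypothetical guard at the top of hadoop-env.sh: if this shell has already
# sourced the file once, skip re-evaluation so options are not appended twice.
if [ "${HADOOP_ENV_INITED:-false}" = "true" ]; then
  return 0 2>/dev/null || exit 0   # 'return' when sourced, 'exit' otherwise
fi
export HADOOP_ENV_INITED=true

# This append now runs only once per shell, even if the file is sourced again.
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} $HADOOP_NAMENODE_OPTS"
```

Sourcing the file a second time in the same shell then leaves HADOOP_NAMENODE_OPTS with a single copy of each option instead of the duplicated result shown above.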



[jira] [Commented] (HADOOP-9866) convert hadoop-auth testcases requiring kerberos to use minikdc

2013-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742485#comment-13742485
 ] 

Hadoop QA commented on HADOOP-9866:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12598504/HADOOP-9866.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 13 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2996//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2996//console

This message is automatically generated.

> convert hadoop-auth testcases requiring kerberos to use minikdc
> ---
>
> Key: HADOOP-9866
> URL: https://issues.apache.org/jira/browse/HADOOP-9866
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.3.0
>Reporter: Alejandro Abdelnur
>Assignee: Wei Yan
> Attachments: HADOOP-9866.patch, HADOOP-9866.patch
>
>




[jira] [Commented] (HADOOP-9866) convert hadoop-auth testcases requiring kerberos to use minikdc

2013-08-16 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742466#comment-13742466
 ] 

Wei Yan commented on HADOOP-9866:
-

Updated with a new patch according to [~tucu00]'s comments.

For TestKerberosName, we don't need krb5.conf; we only set the system properties 
(realm and host) in @Before and clear them in @After.

> convert hadoop-auth testcases requiring kerberos to use minikdc
> ---
>
> Key: HADOOP-9866
> URL: https://issues.apache.org/jira/browse/HADOOP-9866
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.3.0
>Reporter: Alejandro Abdelnur
>Assignee: Wei Yan
> Attachments: HADOOP-9866.patch, HADOOP-9866.patch
>
>




[jira] [Updated] (HADOOP-9866) convert hadoop-auth testcases requiring kerberos to use minikdc

2013-08-16 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated HADOOP-9866:


Attachment: HADOOP-9866.patch

> convert hadoop-auth testcases requiring kerberos to use minikdc
> ---
>
> Key: HADOOP-9866
> URL: https://issues.apache.org/jira/browse/HADOOP-9866
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.3.0
>Reporter: Alejandro Abdelnur
>Assignee: Wei Yan
> Attachments: HADOOP-9866.patch, HADOOP-9866.patch
>
>




[jira] [Commented] (HADOOP-9877) hadoop fsshell can not ls .snapshot dir after HADOOP-9817

2013-08-16 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742414#comment-13742414
 ] 

Binglin Chang commented on HADOOP-9877:
---

Uploaded the v4 patch, adding a mock getFileLinkStatus to 
TestFsShellReturnCode.RawLocalFileSystemExtn.

> hadoop fsshell can not ls .snapshot dir after HADOOP-9817
> -
>
> Key: HADOOP-9877
> URL: https://issues.apache.org/jira/browse/HADOOP-9877
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HADOOP-9877.v1.patch, HADOOP-9877.v2.patch, 
> HADOOP-9877.v3.patch, HADOOP-9877.v4.patch
>
>
> {code}
> decster:~/hadoop> bin/hadoop fs -ls "/foo/.snapshot"
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
> ls: `/foo/.snapshot': No such file or directory
> {code}
> HADOOP-9817 refactored some globStatus code but forgot to handle the special 
> case that the .snapshot dir does not show up in listStatus yet still exists, so 
> we need to explicitly check path existence using getFileStatus rather than 
> depending on listStatus results.



[jira] [Updated] (HADOOP-9877) hadoop fsshell can not ls .snapshot dir after HADOOP-9817

2013-08-16 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HADOOP-9877:
--

Attachment: HADOOP-9877.v4.patch

> hadoop fsshell can not ls .snapshot dir after HADOOP-9817
> -
>
> Key: HADOOP-9877
> URL: https://issues.apache.org/jira/browse/HADOOP-9877
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HADOOP-9877.v1.patch, HADOOP-9877.v2.patch, 
> HADOOP-9877.v3.patch, HADOOP-9877.v4.patch
>
>
> {code}
> decster:~/hadoop> bin/hadoop fs -ls "/foo/.snapshot"
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
> ls: `/foo/.snapshot': No such file or directory
> {code}
> HADOOP-9817 refactored some globStatus code but forgot to handle the special 
> case that the .snapshot dir does not show up in listStatus yet still exists, so 
> we need to explicitly check path existence using getFileStatus rather than 
> depending on listStatus results.



[jira] [Commented] (HADOOP-9745) TestZKFailoverController test fails

2013-08-16 Thread Elizabeth Thomas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742387#comment-13742387
 ] 

Elizabeth Thomas commented on HADOOP-9745:
--

[~j...@nanthrax.net] I checked out the source code from the trunk 
(http://svn.apache.org/repos/asf/hadoop/common/trunk and revision 1514608) 
which is the latest successful build from the link 
https://builds.apache.org/job/Hadoop-trunk-Commit/4280/.

On running "mvn install", the following tests fail, including the above 
three tests.
===
Results :

Failed tests:   
testGracefulFailoverFailBecomingActive(org.apache.hadoop.ha.TestZKFailoverController):
 Did not fail to graceful failover when target failed to become active!
  
testGracefulFailoverFailBecomingStandby(org.apache.hadoop.ha.TestZKFailoverController):
 expected:<1> but was:<0>
  
testGracefulFailoverFailBecomingStandbyAndFailFence(org.apache.hadoop.ha.TestZKFailoverController):
 Failover should have failed when old node wont fence
  testCachingRelaysResolveQueries(org.apache.hadoop.net.TestStaticMapping): 
Expected two entries in the map Mapping: cached switch mapping relaying to 
static mapping with single switch = false(..)
  testCachingCachesNegativeEntries(org.apache.hadoop.net.TestStaticMapping): 
Expected two entries in the map Mapping: cached switch mapping relaying to 
static mapping with single switch = false(..)
  testResolve(org.apache.hadoop.net.TestTableMapping): expected: but 
was:
  testTableCaching(org.apache.hadoop.net.TestTableMapping): expected: 
but was:
  testClearingCachedMappings(org.apache.hadoop.net.TestTableMapping): 
expected: but was:
  testNormalizeHostName(org.apache.hadoop.net.TestNetUtils): 
expected:<[67.215.65.145]> but was:<[UnknownHost123]>

Tests in error: 
  testGracefulFailover(org.apache.hadoop.ha.TestZKFailoverController): test 
timed out after 25000 milliseconds
  testChown(org.apache.hadoop.fs.TestFsShellReturnCode): test timed out after 
3 milliseconds
  testChgrp(org.apache.hadoop.fs.TestFsShellReturnCode): test timed out after 
3 milliseconds

Tests run: 2128, Failures: 9, Errors: 3, Skipped: 71
=
Should I check out the code from elsewhere so that I can run these unit tests 
successfully? Or is there a prerequisite/configuration for these tests (such as 
a Unix service or another Apache service, like ZooKeeper, that needs to be up) 
for them to run successfully on a Unix machine? I believe these unit tests 
should not have any dependencies.

Could you please provide pointers?

> TestZKFailoverController test fails
> ---
>
> Key: HADOOP-9745
> URL: https://issues.apache.org/jira/browse/HADOOP-9745
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jean-Baptiste Onofré
>
>   - 
> testGracefulFailoverFailBecomingActive(org.apache.hadoop.ha.TestZKFailoverController):
>  Did not fail to graceful failover when target failed to become active!
>   - 
> testGracefulFailoverFailBecomingStandby(org.apache.hadoop.ha.TestZKFailoverController):
>  expected:<1> but was:<0>
>   - 
> testGracefulFailoverFailBecomingStandbyAndFailFence(org.apache.hadoop.ha.TestZKFailoverController):
>  Failover should have failed when old node wont fence



[jira] [Commented] (HADOOP-9880) SASL changes from HADOOP-9421 breaks Secure HA NN

2013-08-16 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742357#comment-13742357
 ] 

Sanjay Radia commented on HADOOP-9880:
--

Daryn, your hack is slightly more appealing. However, the client side does know 
how to deal with StandbyException (i.e., it retries against the other node), so 
we need to fix the client side to catch the InvalidToken, unwrap the cause, and 
then retry.

BTW, HDFS-3083 has a test, and we need to run that test against this change to 
verify that we have not regressed.
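The unwrap-and-retry decision described above can be sketched with stand-in exception classes; the nested `StandbyException` and `InvalidTokenException` types below are simplified placeholders for the real Hadoop classes, and `shouldFailover` is a hypothetical helper, not the actual client code.

```java
public class UnwrapRetry {
    // Stand-in for org.apache.hadoop.ipc.StandbyException.
    static class StandbyException extends Exception {}

    // Stand-in for the InvalidToken thrown by the standby's SASL layer,
    // wrapping the real cause.
    static class InvalidTokenException extends Exception {
        InvalidTokenException(Throwable cause) { super(cause); }
    }

    // Returns true if the failure should trigger a failover retry against
    // the other NameNode instead of being treated as fatal.
    static boolean shouldFailover(Exception e) {
        if (e instanceof StandbyException) {
            return true;   // already handled by the client today
        }
        // The fix described above: unwrap InvalidToken and retry when the
        // underlying cause is actually a standby rejection.
        return e instanceof InvalidTokenException
            && e.getCause() instanceof StandbyException;
    }

    public static void main(String[] args) {
        System.out.println(shouldFailover(new StandbyException()));        // true
        System.out.println(shouldFailover(
            new InvalidTokenException(new StandbyException())));           // true
        System.out.println(shouldFailover(
            new InvalidTokenException(new RuntimeException())));           // false
    }
}
```

A genuine token failure (one whose cause is not a StandbyException) still propagates as fatal, which is the behavior the comment argues must be preserved.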



> SASL changes from HADOOP-9421 breaks Secure HA NN 
> --
>
> Key: HADOOP-9880
> URL: https://issues.apache.org/jira/browse/HADOOP-9880
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Kihwal Lee
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-9880.patch
>
>
> buildSaslNegotiateResponse() will create a SaslRpcServer with TOKEN auth. 
> When create() is called against it, secretManager.checkAvailableForRead() is 
> called, which fails in HA standby. Thus HA standby nodes cannot be 
> transitioned to active.



[jira] [Commented] (HADOOP-9868) Server must not advertise kerberos realm

2013-08-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742273#comment-13742273
 ] 

Hudson commented on HADOOP-9868:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1520 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1520/])
HADOOP-9868. Server must not advertise kerberos realm. Contributed by Daryn 
Sharp. (kihwal: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514448)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcServer.java


> Server must not advertise kerberos realm
> 
>
> Key: HADOOP-9868
> URL: https://issues.apache.org/jira/browse/HADOOP-9868
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 3.0.0, 2.1.1-beta
>
> Attachments: HADOOP-9868.patch
>
>
> HADOOP-9789 broke kerberos authentication by making the RPC server advertise 
> the kerberos service principal realm.  SASL clients and servers do not 
> support specifying a realm, so it must be removed from the advertisement.



[jira] [Commented] (HADOOP-9865) FileContext.globStatus() has a regression with respect to relative path

2013-08-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742279#comment-13742279
 ] 

Hudson commented on HADOOP-9865:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1520 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1520/])
HADOOP-9865.  FileContext#globStatus has a regression with respect to relative 
path.  (Contributed by Chaun Lin) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514531)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestGlobPaths.java


> FileContext.globStatus() has a regression with respect to relative path
> ---
>
> Key: HADOOP-9865
> URL: https://issues.apache.org/jira/browse/HADOOP-9865
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Chuan Liu
>Assignee: Chuan Liu
> Fix For: 2.3.0
>
> Attachments: HADOOP-9865-demo.patch, HADOOP-9865-trunk.2.patch, 
> HADOOP-9865-trunk.3.patch, HADOOP-9865-trunk.patch
>
>
> I discovered the problem when running unit test TestMRJobClient on Windows. 
> The cause is indirect in this case. In the unit test, we try to launch a job 
> and list its status. The job failed, and caused the list command get a result 
> of 0, which triggered the unit test assert. From the log and debug, the job 
> failed because we failed to create the Jar with classpath (see code around 
> {{FileUtil.createJarWithClassPath}}) in {{ContainerLaunch}}. This is a 
> Windows specific step right now; so the test still passes on Linux. This step 
> failed because we passed in a relative path to {{FileContext.globStatus()}} 
> in {{FileUtil.createJarWithClassPath}}. The relevant log looks like the 
> following.
> {noformat}
> 2013-08-12 16:12:05,937 WARN  [ContainersLauncher #0] 
> launcher.ContainerLaunch (ContainerLaunch.java:call(270)) - Failed to launch 
> container.
> org.apache.hadoop.HadoopIllegalArgumentException: Path is relative
>   at org.apache.hadoop.fs.Path.checkNotRelative(Path.java:74)
>   at org.apache.hadoop.fs.FileContext.getFSofPath(FileContext.java:304)
>   at org.apache.hadoop.fs.Globber.schemeFromPath(Globber.java:107)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:128)
>   at 
> org.apache.hadoop.fs.FileContext$Util.globStatus(FileContext.java:1908)
>   at 
> org.apache.hadoop.fs.FileUtil.createJarWithClassPath(FileUtil.java:1247)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.sanitizeEnv(ContainerLaunch.java:679)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:232)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:1)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}
> I think this is a regression from HADOOP-9817. I modified some code and the 
> unit test passed. (See the attached patch.) However, I think the impact is 
> larger. I will add some unit tests to verify the behavior, and work on a more 
> complete fix.



[jira] [Commented] (HADOOP-9868) Server must not advertise kerberos realm

2013-08-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742201#comment-13742201
 ] 

Hudson commented on HADOOP-9868:


SUCCESS: Integrated in Hadoop-Hdfs-trunk #1493 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1493/])
HADOOP-9868. Server must not advertise kerberos realm. Contributed by Daryn 
Sharp. (kihwal: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514448)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcServer.java


> Server must not advertise kerberos realm
> 
>
> Key: HADOOP-9868
> URL: https://issues.apache.org/jira/browse/HADOOP-9868
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 3.0.0, 2.1.1-beta
>
> Attachments: HADOOP-9868.patch
>
>
> HADOOP-9789 broke kerberos authentication by making the RPC server advertise 
> the kerberos service principal realm.  SASL clients and servers do not 
> support specifying a realm, so it must be removed from the advertisement.



[jira] [Commented] (HADOOP-9865) FileContext.globStatus() has a regression with respect to relative path

2013-08-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742207#comment-13742207
 ] 

Hudson commented on HADOOP-9865:


SUCCESS: Integrated in Hadoop-Hdfs-trunk #1493 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1493/])
HADOOP-9865.  FileContext#globStatus has a regression with respect to relative 
path.  (Contributed by Chaun Lin) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514531)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestGlobPaths.java


> FileContext.globStatus() has a regression with respect to relative path
> ---
>
> Key: HADOOP-9865
> URL: https://issues.apache.org/jira/browse/HADOOP-9865
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Chuan Liu
>Assignee: Chuan Liu
> Fix For: 2.3.0
>
> Attachments: HADOOP-9865-demo.patch, HADOOP-9865-trunk.2.patch, 
> HADOOP-9865-trunk.3.patch, HADOOP-9865-trunk.patch
>
>
> I discovered the problem when running unit test TestMRJobClient on Windows. 
> The cause is indirect in this case. In the unit test, we try to launch a job 
> and list its status. The job failed, and caused the list command get a result 
> of 0, which triggered the unit test assert. From the log and debug, the job 
> failed because we failed to create the Jar with classpath (see code around 
> {{FileUtil.createJarWithClassPath}}) in {{ContainerLaunch}}. This is a 
> Windows specific step right now; so the test still passes on Linux. This step 
> failed because we passed in a relative path to {{FileContext.globStatus()}} 
> in {{FileUtil.createJarWithClassPath}}. The relevant log looks like the 
> following.
> {noformat}
> 2013-08-12 16:12:05,937 WARN  [ContainersLauncher #0] 
> launcher.ContainerLaunch (ContainerLaunch.java:call(270)) - Failed to launch 
> container.
> org.apache.hadoop.HadoopIllegalArgumentException: Path is relative
>   at org.apache.hadoop.fs.Path.checkNotRelative(Path.java:74)
>   at org.apache.hadoop.fs.FileContext.getFSofPath(FileContext.java:304)
>   at org.apache.hadoop.fs.Globber.schemeFromPath(Globber.java:107)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:128)
>   at 
> org.apache.hadoop.fs.FileContext$Util.globStatus(FileContext.java:1908)
>   at 
> org.apache.hadoop.fs.FileUtil.createJarWithClassPath(FileUtil.java:1247)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.sanitizeEnv(ContainerLaunch.java:679)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:232)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:1)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}
> I think this is a regression from HADOOP-9817. I modified some code and the 
> unit test passed. (See the attached patch.) However, I think the impact is 
> larger. I will add some unit tests to verify the behavior, and work on a more 
> complete fix.



[jira] [Commented] (HADOOP-9868) Server must not advertise kerberos realm

2013-08-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742086#comment-13742086
 ] 

Hudson commented on HADOOP-9868:


SUCCESS: Integrated in Hadoop-Yarn-trunk #303 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/303/])
HADOOP-9868. Server must not advertise kerberos realm. Contributed by Daryn 
Sharp. (kihwal: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514448)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcServer.java


> Server must not advertise kerberos realm
> 
>
> Key: HADOOP-9868
> URL: https://issues.apache.org/jira/browse/HADOOP-9868
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 3.0.0, 2.1.1-beta
>
> Attachments: HADOOP-9868.patch
>
>
> HADOOP-9789 broke Kerberos authentication by making the RPC server advertise 
> the Kerberos service principal realm.  SASL clients and servers do not 
> support specifying a realm, so it must be removed from the advertisement.
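The fix direction can be illustrated outside Hadoop. A Kerberos service principal has the form service/host@REALM; since SASL GSSAPI peers resolve the realm themselves, only the service and host parts should be advertised. A minimal sketch (hypothetical helper, not the actual SaslRpcServer code):

```java
// Hypothetical sketch, not the actual SaslRpcServer code: split a Kerberos
// principal of the form "service/host@REALM" and drop the realm, since SASL
// clients and servers do not support specifying one.
public class PrincipalParts {
    static String[] splitWithoutRealm(String principal) {
        // Strip the "@REALM" suffix if present.
        int at = principal.indexOf('@');
        String withoutRealm = (at >= 0) ? principal.substring(0, at) : principal;
        // Split "service/host" into its two components.
        String[] parts = withoutRealm.split("/", 2);
        if (parts.length != 2) {
            throw new IllegalArgumentException(
                "Expected principal of the form service/host@REALM: " + principal);
        }
        return parts; // [service, host]
    }

    public static void main(String[] args) {
        String[] parts = splitWithoutRealm("nn/namenode.example.com@EXAMPLE.COM");
        System.out.println(parts[0] + " " + parts[1]);
    }
}
```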



[jira] [Commented] (HADOOP-9865) FileContext.globStatus() has a regression with respect to relative path

2013-08-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742092#comment-13742092
 ] 

Hudson commented on HADOOP-9865:


SUCCESS: Integrated in Hadoop-Yarn-trunk #303 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/303/])
HADOOP-9865.  FileContext#globStatus has a regression with respect to relative 
path.  (Contributed by Chuan Liu) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514531)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestGlobPaths.java


> FileContext.globStatus() has a regression with respect to relative path
> ---
>
> Key: HADOOP-9865
> URL: https://issues.apache.org/jira/browse/HADOOP-9865
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Chuan Liu
>Assignee: Chuan Liu
> Fix For: 2.3.0
>
> Attachments: HADOOP-9865-demo.patch, HADOOP-9865-trunk.2.patch, 
> HADOOP-9865-trunk.3.patch, HADOOP-9865-trunk.patch
>
>
> I discovered the problem when running the unit test TestMRJobClient on 
> Windows. The cause is indirect in this case. In the unit test, we try to 
> launch a job and list its status. The job failed, which caused the list 
> command to return a result of 0 and triggered the unit test assertion. From 
> the logs and debugging, the job failed because we failed to create the jar 
> with the classpath (see the code around {{FileUtil.createJarWithClassPath}}) 
> in {{ContainerLaunch}}. This is a Windows-specific step right now, so the 
> test still passes on Linux. The step failed because we passed a relative 
> path to {{FileContext.globStatus()}} in {{FileUtil.createJarWithClassPath}}. 
> The relevant log looks like the following.
> {noformat}
> 2013-08-12 16:12:05,937 WARN  [ContainersLauncher #0] 
> launcher.ContainerLaunch (ContainerLaunch.java:call(270)) - Failed to launch 
> container.
> org.apache.hadoop.HadoopIllegalArgumentException: Path is relative
>   at org.apache.hadoop.fs.Path.checkNotRelative(Path.java:74)
>   at org.apache.hadoop.fs.FileContext.getFSofPath(FileContext.java:304)
>   at org.apache.hadoop.fs.Globber.schemeFromPath(Globber.java:107)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:128)
>   at 
> org.apache.hadoop.fs.FileContext$Util.globStatus(FileContext.java:1908)
>   at 
> org.apache.hadoop.fs.FileUtil.createJarWithClassPath(FileUtil.java:1247)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.sanitizeEnv(ContainerLaunch.java:679)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:232)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:1)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}
> I think this is a regression from HADOOP-9817. I modified some code and the 
> unit test passed. (See the attached patch.) However, I think the impact is 
> larger. I will add some unit tests to verify the behavior, and work on a more 
> complete fix.
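The underlying problem generalizes: any glob implementation that must resolve a filesystem from a path should first qualify relative paths against a working directory, rather than rejecting them with "Path is relative". A Hadoop-free sketch of that qualification step (illustrative names, not the actual Globber fix):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustrative sketch, not the actual Hadoop Globber code: qualify a possibly
// relative path against a working directory before any step that requires an
// absolute path (as FileContext.getFSofPath does).
public class QualifyPath {
    static Path qualify(Path path, Path workingDir) {
        if (path.isAbsolute()) {
            return path.normalize();
        }
        // Resolve relative paths against the caller's working directory
        // instead of throwing "Path is relative".
        return workingDir.resolve(path).normalize();
    }

    public static void main(String[] args) {
        Path wd = Paths.get("/user/hadoop");
        System.out.println(qualify(Paths.get("classpath/app.jar"), wd));
        System.out.println(qualify(Paths.get("/tmp/app.jar"), wd));
    }
}
```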



[jira] [Commented] (HADOOP-9877) hadoop fsshell can not ls .snapshot dir after HADOOP-9817

2013-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13742082#comment-13742082
 ] 

Hadoop QA commented on HADOOP-9877:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12598391/HADOOP-9877.v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.TestFsShellReturnCode

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2994//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2994//console

This message is automatically generated.

> hadoop fsshell can not ls .snapshot dir after HADOOP-9817
> -
>
> Key: HADOOP-9877
> URL: https://issues.apache.org/jira/browse/HADOOP-9877
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HADOOP-9877.v1.patch, HADOOP-9877.v2.patch, 
> HADOOP-9877.v3.patch
>
>
> {code}
> decster:~/hadoop> bin/hadoop fs -ls "/foo/.snapshot"
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
> ls: `/foo/.snapshot': No such file or directory
> {code}
> HADOOP-9817 refactored some globStatus code but forgot to handle the special 
> case where the .snapshot dir exists but does not show up in listStatus, so we 
> need to explicitly check path existence using getFileStatus rather than 
> depending on listStatus results.
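The description above boils down to a lookup rule: when expanding a literal (non-wildcard) path component, the parent listing alone cannot prove non-existence, because entries like .snapshot exist without appearing in it, so the resolver must fall back to a direct stat. A simplified sketch with hypothetical interfaces (not Hadoop's Globber):

```java
import java.util.Arrays;
import java.util.List;

// Simplified illustration with hypothetical interfaces, not Hadoop's Globber:
// a literal path component missing from the parent listing must still be
// checked with a direct stat, because "hidden" entries such as /foo/.snapshot
// exist even though listStatus does not return them.
public class LiteralComponentLookup {
    interface Fs {
        List<String> listStatus(String dir); // may omit special entries
        boolean getFileStatus(String path);  // true if the path exists
    }

    static boolean exists(Fs fs, String dir, String component) {
        if (fs.listStatus(dir).contains(component)) {
            return true;
        }
        // Fall back to an explicit stat instead of reporting "No such file".
        return fs.getFileStatus(dir + "/" + component);
    }

    public static void main(String[] args) {
        Fs fs = new Fs() {
            public List<String> listStatus(String dir) { return Arrays.asList("bar"); }
            public boolean getFileStatus(String path) {
                return path.equals("/foo/.snapshot") || path.equals("/foo/bar");
            }
        };
        System.out.println(exists(fs, "/foo", ".snapshot")); // found via getFileStatus
        System.out.println(exists(fs, "/foo", "baz"));       // genuinely absent
    }
}
```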



[jira] [Updated] (HADOOP-9877) hadoop fsshell can not ls .snapshot dir after HADOOP-9817

2013-08-16 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HADOOP-9877:
--

Attachment: HADOOP-9877.v3.patch

Added code to explicitly set the path on the FileStatus, as the original code did.

> hadoop fsshell can not ls .snapshot dir after HADOOP-9817
> -
>
> Key: HADOOP-9877
> URL: https://issues.apache.org/jira/browse/HADOOP-9877
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HADOOP-9877.v1.patch, HADOOP-9877.v2.patch, 
> HADOOP-9877.v3.patch
>
>
> {code}
> decster:~/hadoop> bin/hadoop fs -ls "/foo/.snapshot"
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
> ls: `/foo/.snapshot': No such file or directory
> {code}
> HADOOP-9817 refactored some globStatus code but forgot to handle the special 
> case where the .snapshot dir exists but does not show up in listStatus, so we 
> need to explicitly check path existence using getFileStatus rather than 
> depending on listStatus results.



[jira] [Commented] (HADOOP-9877) hadoop fsshell can not ls .snapshot dir after HADOOP-9817

2013-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13741995#comment-13741995
 ] 

Hadoop QA commented on HADOOP-9877:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12598358/HADOOP-9877.v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.viewfs.TestFcMainOperationsLocalFs
  org.apache.hadoop.fs.TestFsShellReturnCode
  org.apache.hadoop.fs.TestGlobPaths

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2993//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2993//console

This message is automatically generated.

> hadoop fsshell can not ls .snapshot dir after HADOOP-9817
> -
>
> Key: HADOOP-9877
> URL: https://issues.apache.org/jira/browse/HADOOP-9877
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HADOOP-9877.v1.patch, HADOOP-9877.v2.patch
>
>
> {code}
> decster:~/hadoop> bin/hadoop fs -ls "/foo/.snapshot"
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
> 13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
> ls: `/foo/.snapshot': No such file or directory
> {code}
> HADOOP-9817 refactored some globStatus code but forgot to handle the special 
> case where the .snapshot dir exists but does not show up in listStatus, so we 
> need to explicitly check path existence using getFileStatus rather than 
> depending on listStatus results.
