[jira] [Commented] (HADOOP-8880) Missing jersey jars as dependency in the pom causes hive tests to fail

2012-10-04 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469175#comment-13469175
 ] 

Suresh Srinivas commented on HADOOP-8880:
-

+1 for the patch.

 Missing jersey jars as dependency in the pom causes hive tests to fail
 --

 Key: HADOOP-8880
 URL: https://issues.apache.org/jira/browse/HADOOP-8880
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Attachments: HADOOP-8880.patch


 ivy.xml has the dependency included, whereas the same dependency is not 
 updated in the pom template.
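
For illustration only, the kind of entry presumably missing from the pom template might look like the following; the exact jersey artifacts and version here are assumptions, since the patch itself is not shown:

```xml
<!-- Hypothetical sketch; the actual artifacts and version come from ivy.xml -->
<dependency>
  <groupId>com.sun.jersey</groupId>
  <artifactId>jersey-core</artifactId>
  <version>1.8</version>
</dependency>
```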

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8880) Missing jersey jars as dependency in the pom causes hive tests to fail

2012-10-04 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8880:


Fix Version/s: 1-win
 Hadoop Flags: Reviewed
   Status: Patch Available  (was: Open)

I committed the patch. Thank you Giri.

 Missing jersey jars as dependency in the pom causes hive tests to fail
 --

 Key: HADOOP-8880
 URL: https://issues.apache.org/jira/browse/HADOOP-8880
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Fix For: 1-win

 Attachments: HADOOP-8880.patch, HADOOP-8880.patch


 ivy.xml has the dependency included, whereas the same dependency is not 
 updated in the pom template.



[jira] [Commented] (HADOOP-8880) Missing jersey jars as dependency in the pom causes hive tests to fail

2012-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469187#comment-13469187
 ] 

Hadoop QA commented on HADOOP-8880:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12547687/HADOOP-8880.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1559//console

This message is automatically generated.

 Missing jersey jars as dependency in the pom causes hive tests to fail
 --

 Key: HADOOP-8880
 URL: https://issues.apache.org/jira/browse/HADOOP-8880
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Fix For: 1-win

 Attachments: HADOOP-8880.patch, HADOOP-8880.patch


 ivy.xml has the dependency included, whereas the same dependency is not 
 updated in the pom template.



[jira] [Commented] (HADOOP-8608) Add Configuration API for parsing time durations

2012-10-04 Thread Jianbin Wei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469209#comment-13469209
 ] 

Jianbin Wei commented on HADOOP-8608:
-

Comments as follows:

* I am somewhat concerned about the precision loss from the conversion.  That 
is why I propose to give that control back to the caller, to make the precision 
loss explicit.  If we go with this approach, I think we should at least document 
this precision loss.  Comments?
* The parsing of the value is a bit loose.  For example, it cannot handle 10S or 
10 s.  A strict format can reduce errors but may be inflexible, and the 
exception can be a little harsh.  Or we need to document that the expected 
format is 10s, not 10 s nor 10sec.
* Also, the parsing part relies on the enum order implicitly.  It works now, 
but it may bite us later.  A Pattern instead?
* It would be better to include the value of the time-duration property in the 
log message.
* Can you please change the test from testTime to testTimeDuration for 
consistency?

{code}
conf.setStrings("test.time.str", new String[]{"10S"});
assertEquals(1L, conf.getTimeDuration("test.time.str", 30, 
MILLISECONDS));
{code}

It logs

2012-10-04 00:29:37,172 WARN  conf.Configuration 
(Configuration.java:getTimeDuration(1212)) - No unit for test.time.str assuming 
MILLISECONDS

Better as "No unit for test.time.str (value: 10S), assuming MILLISECONDS" or 
something like that, which includes the property value.




 Add Configuration API for parsing time durations
 

 Key: HADOOP-8608
 URL: https://issues.apache.org/jira/browse/HADOOP-8608
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Todd Lipcon
 Attachments: 8608-0.patch


 Hadoop has a lot of configurations which specify durations or intervals of 
 time. Unfortunately these different configurations have little consistency in 
 units - eg some are in milliseconds, some in seconds, and some in minutes. 
 This makes it difficult for users to configure, since they have to always 
 refer back to docs to remember the unit for each property.
 The proposed solution is to add an API like {{Configuration.getTimeDuration}} 
 which allows the user to specify the units with a postfix. For example, 
 10ms, 10s, 10m, 10h, or even 10d. For backwards-compatibility, if 
 the user does not specify a unit, the API can specify the default unit, and 
 warn the user that they should specify an explicit unit instead.



[jira] [Updated] (HADOOP-8608) Add Configuration API for parsing time durations

2012-10-04 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-8608:
--

Attachment: 8608-1.patch

bq. I am somewhat concerned about the precision loss from the conversion. That 
is why I propose to give that control back to the caller, to make the precision 
loss explicit. If we go with this approach, I think we should at least document 
this precision loss. Comments?

The caller has exactly the same control in both models; the difference is 
whether the user should be notified because they misinterpreted the knob. A 
warning when a conversion loses some significant fraction of the value could be 
helpful, but is this common? When someone makes a typo and sets a timeout as 
10d instead of 10s, that seems more worthy of a warning, but the caller knows 
better than the config if the value is in range.

It's a matter of taste, but I'd prefer to leave it with the caller. That OK?

bq. The parsing of the value is a bit loose. For example, it cannot handle 10S or 
10 s. A strict format can reduce errors but may be inflexible, and the 
exception can be a little harsh. Or we need to document that the expected 
format is 10s, not 10 s nor 10sec.

The format is documented and I prefer strict units, but feel free to make the 
matching fuzzier.

bq. Also the parsing part relies on the enum order implicitly. It works now. 
But it may bite us later. A Pattern instead?

Bite how? The enum order is part of the spec. The parsing is dead-simple. This 
is a trivial feature; a complex implementation is unlikely to justify itself...
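
For context, a Pattern-based variant along the lines Jianbin suggests might look roughly like the following sketch. The class and method names are illustrative and not taken from either patch; the suffix set mirrors the units discussed in this thread:

```java
import java.util.concurrent.TimeUnit;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TimeDurationParser {

  // Number followed by one of the documented unit suffixes; longer
  // suffixes ("ns", "us", "ms") are listed before "s" and "m".
  private static final Pattern DURATION =
      Pattern.compile("(\\d+)(ns|us|ms|s|m|h|d)");

  /** Parse a strict "10s"-style duration string into milliseconds. */
  static long parseMillis(String value) {
    Matcher m = DURATION.matcher(value);
    if (!m.matches()) {
      // Strict format: "10 s", "10S", and "10sec" are all rejected.
      throw new IllegalArgumentException("Invalid duration: " + value);
    }
    long amount = Long.parseLong(m.group(1));
    TimeUnit unit;
    switch (m.group(2)) {
      case "ns": unit = TimeUnit.NANOSECONDS;  break;
      case "us": unit = TimeUnit.MICROSECONDS; break;
      case "ms": unit = TimeUnit.MILLISECONDS; break;
      case "s":  unit = TimeUnit.SECONDS;      break;
      case "m":  unit = TimeUnit.MINUTES;      break;
      case "h":  unit = TimeUnit.HOURS;        break;
      default:   unit = TimeUnit.DAYS;         break;
    }
    return TimeUnit.MILLISECONDS.convert(amount, unit);
  }

  public static void main(String[] args) {
    System.out.println(parseMillis("10s")); // 10000
  }
}
```

Whether this is worth the extra machinery over the enum-ordered suffix scan is exactly the trade-off being debated above.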

bq. It would be better to add the value of the timeduration property into LOG.
bq. Can you please change test from testTime to testTimeDuration for 
consistency?

Sure.

 Add Configuration API for parsing time durations
 

 Key: HADOOP-8608
 URL: https://issues.apache.org/jira/browse/HADOOP-8608
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Todd Lipcon
 Attachments: 8608-0.patch, 8608-1.patch


 Hadoop has a lot of configurations which specify durations or intervals of 
 time. Unfortunately these different configurations have little consistency in 
 units - eg some are in milliseconds, some in seconds, and some in minutes. 
 This makes it difficult for users to configure, since they have to always 
 refer back to docs to remember the unit for each property.
 The proposed solution is to add an API like {{Configuration.getTimeDuration}} 
 which allows the user to specify the units with a postfix. For example, 
 10ms, 10s, 10m, 10h, or even 10d. For backwards-compatibility, if 
 the user does not specify a unit, the API can specify the default unit, and 
 warn the user that they should specify an explicit unit instead.



[jira] [Commented] (HADOOP-8783) Improve RPC.Server's digest auth

2012-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469344#comment-13469344
 ] 

Hudson commented on HADOOP-8783:


Integrated in Hadoop-Hdfs-trunk #1185 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1185/])
HADOOP-8783. Improve RPC.Server's digest auth (daryn) (Revision 1393483)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1393483
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java


 Improve RPC.Server's digest auth
 

 Key: HADOOP-8783
 URL: https://issues.apache.org/jira/browse/HADOOP-8783
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8783.patch, HADOOP-8783.patch


 RPC.Server should always allow digest auth (tokens) if a secret manager is 
 present.



[jira] [Commented] (HADOOP-8783) Improve RPC.Server's digest auth

2012-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469386#comment-13469386
 ] 

Hudson commented on HADOOP-8783:


Integrated in Hadoop-Mapreduce-trunk #1216 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1216/])
HADOOP-8783. Improve RPC.Server's digest auth (daryn) (Revision 1393483)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1393483
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java


 Improve RPC.Server's digest auth
 

 Key: HADOOP-8783
 URL: https://issues.apache.org/jira/browse/HADOOP-8783
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8783.patch, HADOOP-8783.patch


 RPC.Server should always allow digest auth (tokens) if a secret manager is 
 present.



[jira] [Commented] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem to fail when security is on

2012-10-04 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469393#comment-13469393
 ] 

Daryn Sharp commented on HADOOP-8878:
-

+1 Although I'd suggest breaking out some of the code into a new method to 
allow a unit test to be written.

 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 to fail when security is on
 ---

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.



[jira] [Commented] (HADOOP-8852) DelegationTokenRenewer should be Singleton

2012-10-04 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469413#comment-13469413
 ] 

Daryn Sharp commented on HADOOP-8852:
-

You raise good points and I've been wary of this class for a while.  Before 
critiquing the patch, I now question the very existence of 
{{DelegationTokenRenewer}}. Here's why:

Like all hdfs filesystems, a valid tgt is required to obtain a token from the NN.  
This calls into question the entire need for these filesystems to implicitly 
obtain a token.  If you can get a token, you don't need a token!

The api of this class also cannot handle multi-token filesystems, so it's not 
generic enough to be of general utility.  Furthermore, token renewal is 
correctly handled within jobs so there's no need for a filesystem to internally 
renew.  One could make the case for a more generalized renewal outside of yarn, 
but that would be another jira.

My recommendation is to delete the class and all the logic in the filesystems 
that use it.

 DelegationTokenRenewer should be Singleton
 --

 Key: HADOOP-8852
 URL: https://issues.apache.org/jira/browse/HADOOP-8852
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Tom White
Assignee: Karthik Kambatla
 Attachments: hadoop-8852.patch, hadoop-8852.patch, 
 hadoop-8852-v1.patch


 Updated description:
 DelegationTokenRenewer should be Singleton - the instance and renewer threads 
 should be created/started lazily. The filesystems using the renewer shouldn't 
 need to explicity start/stop the renewer, and only register/de-register for 
 token renewal.
 Original issue:
 HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
 thread when they are closed. 



[jira] [Created] (HADOOP-8882) uppercase namenode host name causes fsck to fail when useKsslAuth is on

2012-10-04 Thread Arpit Gupta (JIRA)
Arpit Gupta created HADOOP-8882:
---

 Summary: uppercase namenode host name causes fsck to fail when 
useKsslAuth is on
 Key: HADOOP-8882
 URL: https://issues.apache.org/jira/browse/HADOOP-8882
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta


{code}
 public static void fetchServiceTicket(URL remoteHost) throws IOException {
if(!UserGroupInformation.isSecurityEnabled())
  return;

String serviceName = "host/" + remoteHost.getHost();
{code}

the hostname should be converted to lower case. Saw this in branch 1, will look 
at trunk and update the bug accordingly.
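
A minimal sketch of the suggested change, assuming the fix simply lower-cases the hostname before building the Kerberos service principal. The helper class and method shape here are illustrative, not the actual patch:

```java
import java.net.URL;
import java.util.Locale;

public class ServiceTicketHelper {

  // Lower-case the host so "NN.EXAMPLE.COM" and "nn.example.com"
  // map to the same Kerberos service principal.
  static String serviceName(URL remoteHost) {
    return "host/" + remoteHost.getHost().toLowerCase(Locale.US);
  }

  public static void main(String[] args) throws Exception {
    System.out.println(serviceName(new URL("http://NN.EXAMPLE.COM:50070/")));
    // prints "host/nn.example.com"
  }
}
```

Breaking this out into its own method is also what makes it easy to unit-test, as suggested on HADOOP-8878.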



[jira] [Updated] (HADOOP-8882) uppercase namenode host name causes fsck to fail when useKsslAuth is on

2012-10-04 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8882:


Affects Version/s: 1.1.0

 uppercase namenode host name causes fsck to fail when useKsslAuth is on
 ---

 Key: HADOOP-8882
 URL: https://issues.apache.org/jira/browse/HADOOP-8882
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.1.0, 1.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8882.branch-1.patch


 {code}
  public static void fetchServiceTicket(URL remoteHost) throws IOException {
 if(!UserGroupInformation.isSecurityEnabled())
   return;
 
 String serviceName = "host/" + remoteHost.getHost();
 {code}
 the hostname should be converted to lower case. Saw this in branch 1, will 
 look at trunk and update the bug accordingly.



[jira] [Updated] (HADOOP-8882) uppercase namenode host name causes fsck to fail when useKsslAuth is on

2012-10-04 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8882:


Affects Version/s: 1.0.3

 uppercase namenode host name causes fsck to fail when useKsslAuth is on
 ---

 Key: HADOOP-8882
 URL: https://issues.apache.org/jira/browse/HADOOP-8882
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8882.branch-1.patch


 {code}
  public static void fetchServiceTicket(URL remoteHost) throws IOException {
 if(!UserGroupInformation.isSecurityEnabled())
   return;
 
 String serviceName = "host/" + remoteHost.getHost();
 {code}
 the hostname should be converted to lower case. Saw this in branch 1, will 
 look at trunk and update the bug accordingly.



[jira] [Commented] (HADOOP-8882) uppercase namenode host name causes fsck to fail when useKsslAuth is on

2012-10-04 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469479#comment-13469479
 ] 

Arpit Gupta commented on HADOOP-8882:
-

In branch-1.0, fsck uses fetchServiceTicket and there is no useKsslAuth option.

 uppercase namenode host name causes fsck to fail when useKsslAuth is on
 ---

 Key: HADOOP-8882
 URL: https://issues.apache.org/jira/browse/HADOOP-8882
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8882.branch-1.patch


 {code}
  public static void fetchServiceTicket(URL remoteHost) throws IOException {
 if(!UserGroupInformation.isSecurityEnabled())
   return;
 
 String serviceName = "host/" + remoteHost.getHost();
 {code}
 the hostname should be converted to lower case. Saw this in branch 1, will 
 look at trunk and update the bug accordingly.



[jira] [Commented] (HADOOP-8882) uppercase namenode host name causes fsck to fail when useKsslAuth is on

2012-10-04 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469481#comment-13469481
 ] 

Arpit Gupta commented on HADOOP-8882:
-

In trunk, fsck uses SPNEGO and will be fixed by HADOOP-8878.

 uppercase namenode host name causes fsck to fail when useKsslAuth is on
 ---

 Key: HADOOP-8882
 URL: https://issues.apache.org/jira/browse/HADOOP-8882
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8882.branch-1.patch


 {code}
  public static void fetchServiceTicket(URL remoteHost) throws IOException {
 if(!UserGroupInformation.isSecurityEnabled())
   return;
 
 String serviceName = "host/" + remoteHost.getHost();
 {code}
 the hostname should be converted to lower case. Saw this in branch 1, will 
 look at trunk and update the bug accordingly.



[jira] [Updated] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-04 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8878:


Summary: uppercase namenode hostname causes hadoop dfs calls with webhdfs 
filesystem and fsck to fail when security is on  (was: uppercase namenode 
hostname causes hadoop dfs calls with webhdfs filesystem to fail when security 
is on)

 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.



[jira] [Commented] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-04 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469491#comment-13469491
 ] 

Arpit Gupta commented on HADOOP-8878:
-

This would also impact fsck as it goes through the same code path.

@Dayrn i will take a look and see what can be done to add the ability to write 
a test for it.

 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.



[jira] [Commented] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-04 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469492#comment-13469492
 ] 

Arpit Gupta commented on HADOOP-8878:
-

oops meant Daryn :)

 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.



[jira] [Commented] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-04 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469497#comment-13469497
 ] 

Owen O'Malley commented on HADOOP-8878:
---

Yeah, I agree that it would be good to make a separate function.

 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.



[jira] [Commented] (HADOOP-8882) uppercase namenode host name causes fsck to fail when useKsslAuth is on

2012-10-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469505#comment-13469505
 ] 

Steve Loughran commented on HADOOP-8882:


Arpit - you should really use {{toLowerCase(Locale.EN_US)}}; otherwise the case 
conversion will fail in places where the upper/lower case rules are different 
(example: Turkey).
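
A small self-contained demonstration of the locale pitfall Steve describes. Note an assumption here: the JDK's {{java.util.Locale}} has no {{EN_US}} constant, so the sketch uses {{Locale.ROOT}}, which presumably matches the intent:

```java
import java.util.Locale;

public class LocaleLowerCaseDemo {
  public static void main(String[] args) {
    String host = "NAMENODE-I";
    // In the Turkish locale, 'I' lower-cases to dotless 'ı' (U+0131),
    // which silently breaks ASCII hostname comparisons.
    Locale turkish = new Locale("tr", "TR");
    System.out.println(host.toLowerCase(turkish));      // namenode-ı
    // A locale-neutral conversion keeps the ASCII 'i'.
    System.out.println(host.toLowerCase(Locale.ROOT));  // namenode-i
  }
}
```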

 uppercase namenode host name causes fsck to fail when useKsslAuth is on
 ---

 Key: HADOOP-8882
 URL: https://issues.apache.org/jira/browse/HADOOP-8882
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8882.branch-1.patch


 {code}
  public static void fetchServiceTicket(URL remoteHost) throws IOException {
 if(!UserGroupInformation.isSecurityEnabled())
   return;
 
 String serviceName = "host/" + remoteHost.getHost();
 {code}
 the hostname should be converted to lower case. Saw this in branch 1, will 
 look at trunk and update the bug accordingly.



[jira] [Commented] (HADOOP-8608) Add Configuration API for parsing time durations

2012-10-04 Thread Jianbin Wei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469531#comment-13469531
 ] 

Jianbin Wei commented on HADOOP-8608:
-

I am looking for some documentation, like that of TimeUnit.convert, that 
includes examples and a warning - not a warning in the code.

{code}
/**
  * Return time duration in the given time unit. Valid units are encoded in
  * properties as suffixes: nanoseconds (ns), microseconds (us), milliseconds
  * (ms), seconds (s), minutes (m), hours (h), and days (d).
  * For example, the value can be 10ns, 10us, 10ms, 10s, etc.
  * Note that getting a time duration set in a finer granularity as a coarser
  * granularity can lose precision.
  * For example, if property example.duration is set to 999ms,
  * <tt>getTimeDuration("example.duration", 1L, TimeUnit.SECONDS)</tt> returns 0.
  */
{code}

Some other minor points:
* In {{ParsedTimeDuration.unitFor}} the {{return null;}} is never executed.
* In the same method, you used pdt. I guess you mean ptd.
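
The precision loss in question can be seen directly with TimeUnit itself: converting a finer-grained value to a coarser unit truncates toward zero.

```java
import java.util.concurrent.TimeUnit;

public class PrecisionLossDemo {
  public static void main(String[] args) {
    // 999 ms expressed in whole seconds truncates to 0.
    long seconds = TimeUnit.SECONDS.convert(999, TimeUnit.MILLISECONDS);
    System.out.println(seconds); // 0
  }
}
```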

 Add Configuration API for parsing time durations
 

 Key: HADOOP-8608
 URL: https://issues.apache.org/jira/browse/HADOOP-8608
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Todd Lipcon
 Attachments: 8608-0.patch, 8608-1.patch


 Hadoop has a lot of configurations which specify durations or intervals of 
 time. Unfortunately these different configurations have little consistency in 
 units - eg some are in milliseconds, some in seconds, and some in minutes. 
 This makes it difficult for users to configure, since they have to always 
 refer back to docs to remember the unit for each property.
 The proposed solution is to add an API like {{Configuration.getTimeDuration}} 
 which allows the user to specify the units with a postfix. For example, 
 10ms, 10s, 10m, 10h, or even 10d. For backwards-compatibility, if 
 the user does not specify a unit, the API can specify the default unit, and 
 warn the user that they should specify an explicit unit instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8881) FileBasedKeyStoresFactory initialization logging should be debug not info

2012-10-04 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8881:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

committed to trunk and branch-2

 FileBasedKeyStoresFactory initialization logging should be debug not info
 -

 Key: HADOOP-8881
 URL: https://issues.apache.org/jira/browse/HADOOP-8881
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8881.patch


 When hadoop.ssl.enabled is set to true hadoop client invocations get a log 
 message on the terminal with the initialization of the keystores, switching 
 to debug will disable this log message by default.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8881) FileBasedKeyStoresFactory initialization logging should be debug not info

2012-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469542#comment-13469542
 ] 

Hudson commented on HADOOP-8881:


Integrated in Hadoop-Common-trunk-Commit #2811 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2811/])
HADOOP-8881. FileBasedKeyStoresFactory initialization logging should be 
debug not info. (tucu) (Revision 1394165)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1394165
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java


 FileBasedKeyStoresFactory initialization logging should be debug not info
 -

 Key: HADOOP-8881
 URL: https://issues.apache.org/jira/browse/HADOOP-8881
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8881.patch


 When hadoop.ssl.enabled is set to true hadoop client invocations get a log 
 message on the terminal with the initialization of the keystores, switching 
 to debug will disable this log message by default.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8881) FileBasedKeyStoresFactory initialization logging should be debug not info

2012-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469546#comment-13469546
 ] 

Hudson commented on HADOOP-8881:


Integrated in Hadoop-Hdfs-trunk-Commit #2873 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2873/])
HADOOP-8881. FileBasedKeyStoresFactory initialization logging should be 
debug not info. (tucu) (Revision 1394165)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1394165
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java


 FileBasedKeyStoresFactory initialization logging should be debug not info
 -

 Key: HADOOP-8881
 URL: https://issues.apache.org/jira/browse/HADOOP-8881
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8881.patch


 When hadoop.ssl.enabled is set to true hadoop client invocations get a log 
 message on the terminal with the initialization of the keystores, switching 
 to debug will disable this log message by default.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8852) DelegationTokenRenewer should be Singleton

2012-10-04 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469569#comment-13469569
 ] 

Karthik Kambatla commented on HADOOP-8852:
--

Isn't it more efficient to renew a token than to get a new token every time? I 
agree we can get the first token lazily, instead of implicitly obtaining a 
token during {{FileSystem#init()}}.

For FileSystems, is there a penalty for not renewing the tokens within the 
renewal period? If there is no penalty, it might actually be detrimental to 
periodically renew even in the absence of any activity.

On the other hand, if there is a penalty, I see some merit to this class. In a 
world with several filesystems, MR jobs, and other external entities, this 
class can enable renewing tokens sequentially in the order the systems come up 
for renewal. 
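The singleton-with-lazy-threads shape this issue discusses can be sketched roughly as follows (a hypothetical illustration, not the actual Hadoop patch): filesystems only register and de-register renew actions, and the daemon thread is started on first use rather than in a filesystem's constructor.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;

public final class SingletonRenewer {
    private static volatile SingletonRenewer instance;
    private final DelayQueue<RenewAction> queue = new DelayQueue<>();
    private Thread renewerThread; // created lazily, guarded by "this"

    private SingletonRenewer() {}

    // Double-checked locking: one shared renewer per JVM.
    public static SingletonRenewer get() {
        if (instance == null) {
            synchronized (SingletonRenewer.class) {
                if (instance == null) instance = new SingletonRenewer();
            }
        }
        return instance;
    }

    public synchronized void addRenewAction(RenewAction action) {
        queue.add(action);
        if (renewerThread == null) {           // start the daemon on first use
            renewerThread = new Thread(() -> {
                try {
                    while (true) queue.take().renew();
                } catch (InterruptedException ie) { /* shutdown */ }
            }, "Token Renewer");
            renewerThread.setDaemon(true);     // never blocks JVM exit
            renewerThread.start();
        }
    }

    public void removeRenewAction(RenewAction action) {
        queue.remove(action);
    }

    // Actions come up for renewal in delay order, matching the sequential
    // renewal Karthik describes.
    public interface RenewAction extends Delayed {
        void renew();
    }
}
```

Because the thread is a daemon and starts only when a token is registered, filesystems no longer need explicit start/stop calls; they just call addRenewAction and removeRenewAction.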


 DelegationTokenRenewer should be Singleton
 --

 Key: HADOOP-8852
 URL: https://issues.apache.org/jira/browse/HADOOP-8852
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Tom White
Assignee: Karthik Kambatla
 Attachments: hadoop-8852.patch, hadoop-8852.patch, 
 hadoop-8852-v1.patch


 Updated description:
 DelegationTokenRenewer should be Singleton - the instance and renewer threads 
 should be created/started lazily. The filesystems using the renewer shouldn't 
 need to explicity start/stop the renewer, and only register/de-register for 
 token renewal.
 Original issue:
 HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
 thread when they are closed. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8881) FileBasedKeyStoresFactory initialization logging should be debug not info

2012-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469576#comment-13469576
 ] 

Hudson commented on HADOOP-8881:


Integrated in Hadoop-Mapreduce-trunk-Commit #2835 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2835/])
HADOOP-8881. FileBasedKeyStoresFactory initialization logging should be 
debug not info. (tucu) (Revision 1394165)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1394165
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java


 FileBasedKeyStoresFactory initialization logging should be debug not info
 -

 Key: HADOOP-8881
 URL: https://issues.apache.org/jira/browse/HADOOP-8881
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8881.patch


 When hadoop.ssl.enabled is set to true hadoop client invocations get a log 
 message on the terminal with the initialization of the keystores, switching 
 to debug will disable this log message by default.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8591) TestZKFailoverController tests time out

2012-10-04 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8591:


Labels: test-fail  (was: )

 TestZKFailoverController tests time out
 ---

 Key: HADOOP-8591
 URL: https://issues.apache.org/jira/browse/HADOOP-8591
 Project: Hadoop Common
  Issue Type: Bug
  Components: auto-failover, ha, test
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
  Labels: test-fail

 Looks like the TestZKFailoverController timeout needs to be bumped.
 {noformat}
 java.lang.Exception: test timed out after 30000 milliseconds
   at java.lang.Object.wait(Native Method)
   at 
 org.apache.hadoop.ha.ZKFailoverController.waitForActiveAttempt(ZKFailoverController.java:460)
   at 
 org.apache.hadoop.ha.ZKFailoverController.doGracefulFailover(ZKFailoverController.java:648)
   at 
 org.apache.hadoop.ha.ZKFailoverController.access$400(ZKFailoverController.java:58)
   at 
 org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:593)
   at 
 org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:590)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1334)
   at 
 org.apache.hadoop.ha.ZKFailoverController.gracefulFailoverToYou(ZKFailoverController.java:590)
   at 
 org.apache.hadoop.ha.TestZKFailoverController.testOneOfEverything(TestZKFailoverController.java:575)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-04 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8878:


Status: Open  (was: Patch Available)

 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.branch-1.patch, 
 HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-04 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8878:


Attachment: HADOOP-8878.branch-1.patch

branch-1 patch where I have created a method in KerberosUtil to get the service 
principal. Let me know if this approach looks good and I will do the same for trunk.

 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.branch-1.patch, 
 HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8883) Anonymous fallback in KerberosAuthenticator is broken

2012-10-04 Thread Robert Kanter (JIRA)
Robert Kanter created HADOOP-8883:
-

 Summary: Anonymous fallback in KerberosAuthenticator is broken
 Key: HADOOP-8883
 URL: https://issues.apache.org/jira/browse/HADOOP-8883
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Minor
 Fix For: 2.0.3-alpha


HADOOP-8855 changed KerberosAuthenticator to handle the case where the JDK has 
already performed the SPNEGO handshake; but this change broke using the fallback 
authenticator (PseudoAuthenticator) with an anonymous user (see OOZIE-1010).  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-8877) Rename o.a.h.security.token.Token.TrivialRenewer to UnmanagedRenewer for clarity

2012-10-04 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reassigned HADOOP-8877:


Assignee: Karthik Kambatla

 Rename o.a.h.security.token.Token.TrivialRenewer to UnmanagedRenewer for 
 clarity
 

 Key: HADOOP-8877
 URL: https://issues.apache.org/jira/browse/HADOOP-8877
 Project: Hadoop Common
  Issue Type: Wish
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Trivial

 While browsing through the code, I came across the TrivialRenewer. It would 
 definitely be easier to comprehend if we rename it to UnmanagedRenewer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8877) Rename o.a.h.security.token.Token.TrivialRenewer to UnmanagedRenewer for clarity

2012-10-04 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8877:
-

Attachment: hadoop-8877.patch

Trivial patch that renames TrivialRenewer to UnmanagedRenewer.

No tests, because the logic hasn't changed in any way.

 Rename o.a.h.security.token.Token.TrivialRenewer to UnmanagedRenewer for 
 clarity
 

 Key: HADOOP-8877
 URL: https://issues.apache.org/jira/browse/HADOOP-8877
 Project: Hadoop Common
  Issue Type: Wish
Affects Versions: 2.0.1-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Trivial
 Attachments: hadoop-8877.patch


 While browsing through the code, I came across the TrivialRenewer. It would 
 definitely be easier to comprehend if we rename it to UnmanagedRenewer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8877) Rename o.a.h.security.token.Token.TrivialRenewer to UnmanagedRenewer for clarity

2012-10-04 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8877:
-

Affects Version/s: 2.0.1-alpha
   Status: Patch Available  (was: Open)

 Rename o.a.h.security.token.Token.TrivialRenewer to UnmanagedRenewer for 
 clarity
 

 Key: HADOOP-8877
 URL: https://issues.apache.org/jira/browse/HADOOP-8877
 Project: Hadoop Common
  Issue Type: Wish
Affects Versions: 2.0.1-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Trivial
 Attachments: hadoop-8877.patch


 While browsing through the code, I came across the TrivialRenewer. It would 
 definitely be easier to comprehend if we rename it to UnmanagedRenewer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8608) Add Configuration API for parsing time durations

2012-10-04 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469747#comment-13469747
 ] 

Chris Douglas commented on HADOOP-8608:
---

The caller requests a unit and the function returns {{long}}. Seems pretty 
straightforward.

bq. In ParsedTimeDuration.unitFor the return null; is never executed.

It's required.

bq. In the same method, you used pdt. I guess you mean ptd.

...
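Chris's "It's required" refers to Java's definite-return rule: javac cannot prove that a loop over enum values always returns, so a final return is mandatory even if it is never reached at runtime. A stand-in sketch (hypothetical, modeled on the method under discussion, not the actual patch):

```java
import java.util.concurrent.TimeUnit;

public class UnitForDemo {
    enum ParsedTimeDuration {
        // Finer suffixes listed before "s" so "10ms" is not matched as seconds.
        NS(TimeUnit.NANOSECONDS, "ns"),
        MS(TimeUnit.MILLISECONDS, "ms"),
        S(TimeUnit.SECONDS, "s");

        final TimeUnit unit;
        final String suffix;
        ParsedTimeDuration(TimeUnit u, String s) { unit = u; suffix = s; }

        static TimeUnit unitFor(String value) {
            for (ParsedTimeDuration ptd : values()) {
                if (value.endsWith(ptd.suffix)) return ptd.unit;
            }
            // Deleting this line is a compile error ("missing return
            // statement"): the compiler cannot know every input carries a
            // recognized suffix, so this path must return something.
            return null;
        }
    }
}
```

So the `return null;` doubles as the "no recognized unit" result for inputs without a suffix, even if callers never exercise it.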

 Add Configuration API for parsing time durations
 

 Key: HADOOP-8608
 URL: https://issues.apache.org/jira/browse/HADOOP-8608
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Todd Lipcon
 Attachments: 8608-0.patch, 8608-1.patch


 Hadoop has a lot of configurations which specify durations or intervals of 
 time. Unfortunately these different configurations have little consistency in 
 units - eg some are in milliseconds, some in seconds, and some in minutes. 
 This makes it difficult for users to configure, since they have to always 
 refer back to docs to remember the unit for each property.
 The proposed solution is to add an API like {{Configuration.getTimeDuration}} 
 which allows the user to specify the units with a postfix. For example, 
 10ms, 10s, 10m, 10h, or even 10d. For backwards-compatibility, if 
 the user does not specify a unit, the API can specify the default unit, and 
 warn the user that they should specify an explicit unit instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-04 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469779#comment-13469779
 ] 

Arpit Gupta commented on HADOOP-8878:
-

Here is the output of test-patch for branch-1

{code}
[exec] -1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] +1 tests included.  The patch appears to include 2 new or 
modified tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] -1 findbugs.  The patch appears to introduce 9 new Findbugs 
(version 1.3.9) warnings.
 [exec] 
 [exec] 
{code}

Findbugs warnings are unrelated to this patch.

 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.branch-1.patch, 
 HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8877) Rename o.a.h.security.token.Token.TrivialRenewer to UnmanagedRenewer for clarity

2012-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469825#comment-13469825
 ] 

Hadoop QA commented on HADOOP-8877:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12547821/hadoop-8877.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1560//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1560//console

This message is automatically generated.

 Rename o.a.h.security.token.Token.TrivialRenewer to UnmanagedRenewer for 
 clarity
 

 Key: HADOOP-8877
 URL: https://issues.apache.org/jira/browse/HADOOP-8877
 Project: Hadoop Common
  Issue Type: Wish
Affects Versions: 2.0.1-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Trivial
 Attachments: hadoop-8877.patch


 While browsing through the code, I came across the TrivialRenewer. It would 
 definitely be easy to comprehend if we rename it to UnmanagedRenewer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8804) Improve Web UIs when the wildcard address is used

2012-10-04 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469921#comment-13469921
 ] 

Eli Collins commented on HADOOP-8804:
-

Looks good Senthil! If you address the following two nits I'll commit.  The 
test failure is unrelated.

- Need a space after "if" (i.e. "if (" not "if("), and conditionals should use braces
- New testSimpleHostName lines should wrap at 80 chars, eg the 2nd and 3rd args 
can wrap to the next line
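The two nits above, illustrated with hypothetical code (not the actual patch under review):

```java
public class StyleNits {
    // Space after "if" and braces around the body, even for one statement.
    // Before: "if(simple) return hostName;"
    static String pick(boolean simple, String hostName) {
        if (simple) {
            return hostName;
        }
        return hostName.toUpperCase();
    }

    // Long call sites wrap at 80 columns, e.g. the 2nd and 3rd arguments
    // move down to the next lines.
    static String wrapped() {
        return String.format("%s (%s)",
            "example-host",
            "active");
    }
}
```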




 Improve Web UIs when the wildcard address is used
 -

 Key: HADOOP-8804
 URL: https://issues.apache.org/jira/browse/HADOOP-8804
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Senthil V Kumar
Priority: Minor
  Labels: newbie
 Attachments: DisplayOptions.jpg, HADOOP-8804-1.0.patch, 
 HADOOP-8804-1.1.patch, HADOOP-8804-1.1.patch, HADOOP-8804-trunk.patch, 
 HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch


 When IPC addresses are bound to the wildcard (ie the default config) the NN, 
 JT (and probably RM etc) Web UIs are a little goofy. Eg "0 Hadoop Map/Reduce 
 Administration" and "NameNode '0.0.0.0:18021' (active)". Let's improve them.
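One way to improve a wildcard-bound UI title, sketched with hypothetical names (this is not the actual HADOOP-8804 patch): when the configured host is the wildcard address, display the machine's canonical hostname instead of 0.0.0.0.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class WildcardDisplay {
    // Substitute a human-meaningful hostname for the wildcard bind address
    // when rendering page titles like "NameNode 'host:port' (active)".
    public static String displayHost(String configuredHost) {
        if ("0.0.0.0".equals(configuredHost)) {
            try {
                return InetAddress.getLocalHost().getCanonicalHostName();
            } catch (UnknownHostException e) {
                return "localhost"; // fall back rather than show the wildcard
            }
        }
        return configuredHost; // non-wildcard addresses pass through unchanged
    }
}
```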

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8852) DelegationTokenRenewer should be Singleton

2012-10-04 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469929#comment-13469929
 ] 

Karthik Kambatla commented on HADOOP-8852:
--

Offline discussion with ATM has helped me understand this better.

Gist for others in my position (without a clear understanding of Kerberos):
# Initial authentication is indeed expensive and involves communicating with 
KDC, one gets Krb ticket upon authentication.
# Subsequent authentications with this Krb ticket are quick and don't need KDC 
communication.
# Hadoop tokens will not make these subsequent authentications any more 
efficient, because functionally this is same as the above authentication. 
Tokens are useful when one doesn't have tickets - e.g., MR tasks that execute 
on different nodes.

As per Daryn's suggestion, I'll go ahead and remove DelegationTokenRenewer and 
the relevant logic from these filesystems.

Thanks Daryn and ATM.

 DelegationTokenRenewer should be Singleton
 --

 Key: HADOOP-8852
 URL: https://issues.apache.org/jira/browse/HADOOP-8852
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Tom White
Assignee: Karthik Kambatla
 Attachments: hadoop-8852.patch, hadoop-8852.patch, 
 hadoop-8852-v1.patch


 Updated description:
 DelegationTokenRenewer should be Singleton - the instance and renewer threads 
 should be created/started lazily. The filesystems using the renewer shouldn't 
 need to explicity start/stop the renewer, and only register/de-register for 
 token renewal.
 Original issue:
 HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
 thread when they are closed. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8852) Remove DelegationTokenRenewer

2012-10-04 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8852:
-

Summary: Remove DelegationTokenRenewer  (was: DelegationTokenRenewer should 
be Singleton)

 Remove DelegationTokenRenewer
 -

 Key: HADOOP-8852
 URL: https://issues.apache.org/jira/browse/HADOOP-8852
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Tom White
Assignee: Karthik Kambatla
 Attachments: hadoop-8852.patch, hadoop-8852.patch, 
 hadoop-8852-v1.patch


 Updated description:
 DelegationTokenRenewer should be Singleton - the instance and renewer threads 
 should be created/started lazily. The filesystems using the renewer shouldn't 
 need to explicity start/stop the renewer, and only register/de-register for 
 token renewal.
 Original issue:
 HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
 thread when they are closed. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8852) Remove DelegationTokenRenewer

2012-10-04 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8852:
-

Description: 
Update 2:
DelegationTokenRenewer is not required. The filesystems that are using it 
already have Krb tickets and do not need tokens. Remove DelegationTokenRenewer 
and all the related logic from WebHdfs and Hftp filesystems.

Update1:
DelegationTokenRenewer should be Singleton - the instance and renewer threads 
should be created/started lazily. The filesystems using the renewer shouldn't 
need to explicity start/stop the renewer, and only register/de-register for 
token renewal.

Original issue:
HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
thread when they are closed. 

  was:
Updated description:
DelegationTokenRenewer should be Singleton - the instance and renewer threads 
should be created/started lazily. The filesystems using the renewer shouldn't 
need to explicity start/stop the renewer, and only register/de-register for 
token renewal.

Original issue:
HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
thread when they are closed. 


 Remove DelegationTokenRenewer
 -

 Key: HADOOP-8852
 URL: https://issues.apache.org/jira/browse/HADOOP-8852
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Tom White
Assignee: Karthik Kambatla
 Attachments: hadoop-8852.patch, hadoop-8852.patch, 
 hadoop-8852-v1.patch


 Update 2:
 DelegationTokenRenewer is not required. The filesystems that are using it 
 already have Krb tickets and do not need tokens. Remove 
 DelegationTokenRenewer and all the related logic from WebHdfs and Hftp 
 filesystems.
 Update1:
 DelegationTokenRenewer should be Singleton - the instance and renewer threads 
 should be created/started lazily. The filesystems using the renewer shouldn't 
 need to explicity start/stop the renewer, and only register/de-register for 
 token renewal.
 Original issue:
 HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
 thread when they are closed. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8804) Improve Web UIs when the wildcard address is used

2012-10-04 Thread Senthil V Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Senthil V Kumar updated HADOOP-8804:


Attachment: HADOOP-8804-trunk.patch
HADOOP-8804-1.1.patch

Incorporating Eli's comments

 Improve Web UIs when the wildcard address is used
 -

 Key: HADOOP-8804
 URL: https://issues.apache.org/jira/browse/HADOOP-8804
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Senthil V Kumar
Priority: Minor
  Labels: newbie
 Attachments: DisplayOptions.jpg, HADOOP-8804-1.0.patch, 
 HADOOP-8804-1.1.patch, HADOOP-8804-1.1.patch, HADOOP-8804-1.1.patch, 
 HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch, 
 HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch


 When IPC addresses are bound to the wildcard (ie the default config) the NN, 
 JT (and probably RM etc) Web UIs are a little goofy. Eg "0 Hadoop Map/Reduce 
 Administration" and "NameNode '0.0.0.0:18021' (active)". Let's improve them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira