[jira] [Updated] (HADOOP-10418) SaslRpcClient should not assume that remote principals are in the default_realm

2014-03-21 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10418:
---

Hadoop Flags: Reviewed

+1 for the patch.  Thanks, Aaron!

> SaslRpcClient should not assume that remote principals are in the 
> default_realm
> ---
>
> Key: HADOOP-10418
> URL: https://issues.apache.org/jira/browse/HADOOP-10418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HADOOP-10418.patch
>
>
> We 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10191) Missing executable permission on viewfs internal dirs

2014-03-21 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10191:
---

   Resolution: Fixed
Fix Version/s: 2.4.0
   3.0.0
   Status: Resolved  (was: Patch Available)

I committed this to trunk, branch-2 and branch-2.4.  Gera, thank you for 
contributing this patch.

> Missing executable permission on viewfs internal dirs
> -
>
> Key: HADOOP-10191
> URL: https://issues.apache.org/jira/browse/HADOOP-10191
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10191.v01.patch
>
>
> ViewFileSystem allows 1) unconditional listing of internal directories (mount 
> points) and 2) changing working directories.
> 1) requires read permission
> 2) requires executable permission
> However, the hardcoded PERMISSION_RRR == 444 for a FileStatus representing an 
> internal dir does not have the executable bit set.
> This confuses the YARN localizer for public resources on viewfs because it 
> requires executable permission for "other" on all of the ancestor directories 
> of the resource. 
> {code}
> java.io.IOException: Resource viewfs:/pubcache/cache.txt is not publicly 
> accessable and as such cannot be part of the public cache.
> at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:182)
> at 
> org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:51)
> at org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:279)
> at org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:277)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10191) Missing executable permission on viewfs internal dirs

2014-03-21 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10191:
---

Assignee: Gera Shegalov

> Missing executable permission on viewfs internal dirs
> -
>
> Key: HADOOP-10191
> URL: https://issues.apache.org/jira/browse/HADOOP-10191
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
> Attachments: HADOOP-10191.v01.patch
>
>
> ViewFileSystem allows 1) unconditional listing of internal directories (mount 
> points) and 2) changing working directories.
> 1) requires read permission
> 2) requires executable permission
> However, the hardcoded PERMISSION_RRR == 444 for a FileStatus representing an 
> internal dir does not have the executable bit set.
> This confuses the YARN localizer for public resources on viewfs because it 
> requires executable permission for "other" on all of the ancestor directories 
> of the resource. 
> {code}
> java.io.IOException: Resource viewfs:/pubcache/cache.txt is not publicly 
> accessable and as such cannot be part of the public cache.
> at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:182)
> at 
> org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:51)
> at org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:279)
> at org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:277)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10191) Missing executable permission on viewfs internal dirs

2014-03-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943916#comment-13943916
 ] 

Hudson commented on HADOOP-10191:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5382 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5382/])
HADOOP-10191. Missing executable permission on viewfs internal dirs. 
Contributed by Gera Shegalov. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1580170)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java


> Missing executable permission on viewfs internal dirs
> -
>
> Key: HADOOP-10191
> URL: https://issues.apache.org/jira/browse/HADOOP-10191
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Reporter: Gera Shegalov
>Priority: Blocker
> Attachments: HADOOP-10191.v01.patch
>
>
> ViewFileSystem allows 1) unconditional listing of internal directories (mount 
> points) and 2) changing working directories.
> 1) requires read permission
> 2) requires executable permission
> However, the hardcoded PERMISSION_RRR == 444 for a FileStatus representing an 
> internal dir does not have the executable bit set.
> This confuses the YARN localizer for public resources on viewfs because it 
> requires executable permission for "other" on all of the ancestor directories 
> of the resource. 
> {code}
> java.io.IOException: Resource viewfs:/pubcache/cache.txt is not publicly 
> accessable and as such cannot be part of the public cache.
> at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:182)
> at 
> org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:51)
> at org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:279)
> at org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:277)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10410) Support ioprio_set in NativeIO

2014-03-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943894#comment-13943894
 ] 

stack commented on HADOOP-10410:


bq. Why not get some experimental results, and then decide whether to commit it?

[~xieliang007] Do you have a hacked-up hbase patch that makes use of these 
nativeio additions?  Good on you.

> Support ioprio_set in NativeIO
> --
>
> Key: HADOOP-10410
> URL: https://issues.apache.org/jira/browse/HADOOP-10410
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: native
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HADOOP-10410.txt
>
>
> It would be better for the HBase application if the HDFS layer provided a 
> fine-grained IO request priority. Most modern kernels should support the 
> ioprio_set system call now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10416) If there is an expired token, PseudoAuthenticationHandler should renew it

2014-03-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943858#comment-13943858
 ] 

Hadoop QA commented on HADOOP-10416:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12636165/c10416_20140321.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-auth:

  
org.apache.hadoop.security.authentication.client.TestPseudoAuthenticator

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3694//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3694//console

This message is automatically generated.

> If there is an expired token, PseudoAuthenticationHandler should renew it
> -
>
> Key: HADOOP-10416
> URL: https://issues.apache.org/jira/browse/HADOOP-10416
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c10416_20140321.patch
>
>
> PseudoAuthenticationHandler currently only gets the username from the 
> "user.name" parameter.  It should also renew an expired auth token if one is 
> available in the cookies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10418) SaslRpcClient should not assume that remote principals are in the default_realm

2014-03-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943849#comment-13943849
 ] 

Hadoop QA commented on HADOOP-10418:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12636163/HADOOP-10418.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3693//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3693//console

This message is automatically generated.

> SaslRpcClient should not assume that remote principals are in the 
> default_realm
> ---
>
> Key: HADOOP-10418
> URL: https://issues.apache.org/jira/browse/HADOOP-10418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HADOOP-10418.patch
>
>
> We 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10416) If there is an expired token, PseudoAuthenticationHandler should renew it

2014-03-21 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-10416:
-

Status: Patch Available  (was: Open)

> If there is an expired token, PseudoAuthenticationHandler should renew it
> -
>
> Key: HADOOP-10416
> URL: https://issues.apache.org/jira/browse/HADOOP-10416
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c10416_20140321.patch
>
>
> PseudoAuthenticationHandler currently only gets the username from the 
> "user.name" parameter.  It should also renew an expired auth token if one is 
> available in the cookies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10416) If there is an expired token, PseudoAuthenticationHandler should renew it

2014-03-21 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-10416:
-

Attachment: c10416_20140321.patch

c10416_20140321.patch: renews expired tokens.

> If there is an expired token, PseudoAuthenticationHandler should renew it
> -
>
> Key: HADOOP-10416
> URL: https://issues.apache.org/jira/browse/HADOOP-10416
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c10416_20140321.patch
>
>
> PseudoAuthenticationHandler currently only gets the username from the 
> "user.name" parameter.  It should also renew an expired auth token if one is 
> available in the cookies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10418) SaslRpcClient should not assume that remote principals are in the default_realm

2014-03-21 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-10418:


Status: Patch Available  (was: Open)

> SaslRpcClient should not assume that remote principals are in the 
> default_realm
> ---
>
> Key: HADOOP-10418
> URL: https://issues.apache.org/jira/browse/HADOOP-10418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HADOOP-10418.patch
>
>
> We 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10418) SaslRpcClient should not assume that remote principals are in the default_realm

2014-03-21 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-10418:


Attachment: HADOOP-10418.patch

Straightforward patch attached to specify the correct name type for this 
principal, which will cause the JDK Kerberos library to use the domain_realm 
mapping to determine the correct realm.

No tests are included because of the difficulty of setting up an appropriate 
environment in the unit tests. I manually tested this and confirmed that it 
works as expected.
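
For readers unfamiliar with Kerberos name types, the mechanism the patch relies 
on can be illustrated with a minimal sketch (the service and host name below 
are hypothetical, and a krb5.conf with a [domain_realm] section is assumed; 
this is not the patch itself):

{code}
import javax.security.auth.kerberos.KerberosPrincipal;

public class RealmResolutionSketch {
  public static void main(String[] args) {
    // The realm is deliberately omitted. With the KRB_NT_SRV_HST name type,
    // the JDK Kerberos library maps the host through krb5.conf's
    // [domain_realm] section instead of blindly appending default_realm.
    KerberosPrincipal principal = new KerberosPrincipal(
        "nn/namenode.example.com", KerberosPrincipal.KRB_NT_SRV_HST);
    System.out.println(principal.getName());  // realm resolved via domain_realm
    System.out.println(principal.getRealm());
  }
}
{code}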

> SaslRpcClient should not assume that remote principals are in the 
> default_realm
> ---
>
> Key: HADOOP-10418
> URL: https://issues.apache.org/jira/browse/HADOOP-10418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HADOOP-10418.patch
>
>
> We 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10418) SaslRpcClient should not assume that remote principals are in the default_realm

2014-03-21 Thread Aaron T. Myers (JIRA)
Aaron T. Myers created HADOOP-10418:
---

 Summary: SaslRpcClient should not assume that remote principals 
are in the default_realm
 Key: HADOOP-10418
 URL: https://issues.apache.org/jira/browse/HADOOP-10418
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers


We 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10398) KerberosAuthenticator failed to fall back to PseudoAuthenticator after HADOOP-10078

2014-03-21 Thread Bowen Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943820#comment-13943820
 ] 

Bowen Zhang commented on HADOOP-10398:
--

Overall, I think it's a bad design for oozie to use KerberosAuthenticator in a 
non-secure environment and expect the hadoop client to fall back to 
PseudoAuthenticator.

> KerberosAuthenticator failed to fall back to PseudoAuthenticator after 
> HADOOP-10078
> ---
>
> Key: HADOOP-10398
> URL: https://issues.apache.org/jira/browse/HADOOP-10398
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: a.txt, c10398_20140310.patch
>
>
> {code}
> //KerberosAuthenticator.java
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
> LOG.debug("JDK performed authentication on our behalf.");
> // If the JDK already did the SPNEGO back-and-forth for
> // us, just pull out the token.
> AuthenticatedURL.extractToken(conn, token);
> return;
>   } else ...
> {code}
> The problem with the code above is that HTTP_OK does not imply that 
> authentication completed.  We should check whether the token can be extracted 
> successfully.
> This problem was reported by [~bowenzhangusa] in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-10078?focusedCommentId=13896823&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13896823]
>  earlier.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10417) There is no token for anonymous authentication

2014-03-21 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-10417:
-

Attachment: TestPseudoAuthenticator.patch

TestPseudoAuthenticator.patch: a unit test showing the problem.

> There is no token for anonymous authentication
> --
>
> Key: HADOOP-10417
> URL: https://issues.apache.org/jira/browse/HADOOP-10417
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
> Attachments: TestPseudoAuthenticator.patch
>
>
> According to [~tucu00], if ANONYMOUS is enabled, then there is a token 
> (cookie) and the response is 200.  However, the code below never sets a 
> cookie when the token is ANONYMOUS.
> {code}
> //AuthenticationFilter.doFilter(..)
>   if (newToken && !token.isExpired() && token != 
> AuthenticationToken.ANONYMOUS) {
> String signedToken = signer.sign(token.toString());
> createAuthCookie(httpResponse, signedToken, getCookieDomain(),
> getCookiePath(), token.getExpires(), isHttps);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10398) KerberosAuthenticator failed to fall back to PseudoAuthenticator after HADOOP-10078

2014-03-21 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HADOOP-10398.
--

Resolution: Invalid

Filed HADOOP-10416 and HADOOP-10417 for the server-side bugs.  Resolving this 
as invalid.

> KerberosAuthenticator failed to fall back to PseudoAuthenticator after 
> HADOOP-10078
> ---
>
> Key: HADOOP-10398
> URL: https://issues.apache.org/jira/browse/HADOOP-10398
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: a.txt, c10398_20140310.patch
>
>
> {code}
> //KerberosAuthenticator.java
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
> LOG.debug("JDK performed authentication on our behalf.");
> // If the JDK already did the SPNEGO back-and-forth for
> // us, just pull out the token.
> AuthenticatedURL.extractToken(conn, token);
> return;
>   } else ...
> {code}
> The problem with the code above is that HTTP_OK does not imply that 
> authentication completed.  We should check whether the token can be extracted 
> successfully.
> This problem was reported by [~bowenzhangusa] in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-10078?focusedCommentId=13896823&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13896823]
>  earlier.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10417) There is no token for anonymous authentication

2014-03-21 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-10417:
-

Component/s: security

> There is no token for anonymous authentication
> --
>
> Key: HADOOP-10417
> URL: https://issues.apache.org/jira/browse/HADOOP-10417
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>
> According to [~tucu00], if ANONYMOUS is enabled, then there is a token 
> (cookie) and the response is 200.  However, the code below never sets a 
> cookie when the token is ANONYMOUS.
> {code}
> //AuthenticationFilter.doFilter(..)
>   if (newToken && !token.isExpired() && token != 
> AuthenticationToken.ANONYMOUS) {
> String signedToken = signer.sign(token.toString());
> createAuthCookie(httpResponse, signedToken, getCookieDomain(),
> getCookiePath(), token.getExpires(), isHttps);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10417) There is no token for anonymous authentication

2014-03-21 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10417:


 Summary: There is no token for anonymous authentication
 Key: HADOOP-10417
 URL: https://issues.apache.org/jira/browse/HADOOP-10417
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsz Wo Nicholas Sze


According to [~tucu00], if ANONYMOUS is enabled, then there is a token (cookie) 
and the response is 200.  However, the code below never sets a cookie when the 
token is ANONYMOUS.

{code}
//AuthenticationFilter.doFilter(..)
  if (newToken && !token.isExpired() && token != 
AuthenticationToken.ANONYMOUS) {
String signedToken = signer.sign(token.toString());
createAuthCookie(httpResponse, signedToken, getCookieDomain(),
getCookiePath(), token.getExpires(), isHttps);
  }
{code}
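
A minimal sketch of one possible fix (an assumed shape, not a committed 
change): drop the ANONYMOUS exclusion so an anonymous token is also signed and 
returned as a cookie.

{code}
// Hypothetical variant of the condition above: treat the ANONYMOUS token
// like any other valid token, so the client receives a cookie on 200.
if (newToken && !token.isExpired()) {
  String signedToken = signer.sign(token.toString());
  createAuthCookie(httpResponse, signedToken, getCookieDomain(),
      getCookiePath(), token.getExpires(), isHttps);
}
{code}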



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10416) If there is an expired token, PseudoAuthenticationHandler should renew it

2014-03-21 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10416:


 Summary: If there is an expired token, PseudoAuthenticationHandler 
should renew it
 Key: HADOOP-10416
 URL: https://issues.apache.org/jira/browse/HADOOP-10416
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


PseudoAuthenticationHandler currently only gets the username from the 
"user.name" parameter.  It should also renew an expired auth token if one is 
available in the cookies.
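
A minimal sketch of the intended behavior (the names {{expiredToken}}, 
{{getUserName}}, and {{anonymousAllowed}} are assumptions for illustration; see 
the eventual patch for the real change):

{code}
// Hypothetical sketch inside PseudoAuthenticationHandler.authenticate(..):
// if "user.name" is absent but the request carried an expired token,
// renew the token under the same username instead of failing.
String userName = getUserName(request);      // from the user.name parameter
if (userName == null && expiredToken != null) {
  userName = expiredToken.getUserName();     // renew for the same user
}
if (userName == null) {
  return anonymousAllowed ? AuthenticationToken.ANONYMOUS : null;
}
return new AuthenticationToken(userName, userName, TYPE);
{code}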




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10398) KerberosAuthenticator failed to fall back to PseudoAuthenticator after HADOOP-10078

2014-03-21 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943733#comment-13943733
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-10398:
--

For a simple, anonymous server, if the request has an expired token with 
username foo but user.name is not set, what should the username in the response 
token be: foo or anonymous?

> KerberosAuthenticator failed to fall back to PseudoAuthenticator after 
> HADOOP-10078
> ---
>
> Key: HADOOP-10398
> URL: https://issues.apache.org/jira/browse/HADOOP-10398
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: a.txt, c10398_20140310.patch
>
>
> {code}
> //KerberosAuthenticator.java
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
> LOG.debug("JDK performed authentication on our behalf.");
> // If the JDK already did the SPNEGO back-and-forth for
> // us, just pull out the token.
> AuthenticatedURL.extractToken(conn, token);
> return;
>   } else ...
> {code}
> The problem with the code above is that HTTP_OK does not imply that 
> authentication completed.  We should check whether the token can be extracted 
> successfully.
> This problem was reported by [~bowenzhangusa] in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-10078?focusedCommentId=13896823&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13896823]
>  earlier.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10398) KerberosAuthenticator failed to fall back to PseudoAuthenticator after HADOOP-10078

2014-03-21 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943715#comment-13943715
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-10398:
--

In AuthenticationFilter, a cookie is never set when the token is ANONYMOUS.
{code}
//AuthenticationFilter.doFilter(..)
  if (newToken && !token.isExpired() && token != 
AuthenticationToken.ANONYMOUS) {
String signedToken = signer.sign(token.toString());
createAuthCookie(httpResponse, signedToken, getCookieDomain(),
getCookiePath(), token.getExpires(), isHttps);
  }
{code}

> KerberosAuthenticator failed to fall back to PseudoAuthenticator after 
> HADOOP-10078
> ---
>
> Key: HADOOP-10398
> URL: https://issues.apache.org/jira/browse/HADOOP-10398
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: a.txt, c10398_20140310.patch
>
>
> {code}
> //KerberosAuthenticator.java
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
> LOG.debug("JDK performed authentication on our behalf.");
> // If the JDK already did the SPNEGO back-and-forth for
> // us, just pull out the token.
> AuthenticatedURL.extractToken(conn, token);
> return;
>   } else ...
> {code}
> The problem with the code above is that HTTP_OK does not imply that 
> authentication completed.  We should check whether the token can be extracted 
> successfully.
> This problem was reported by [~bowenzhangusa] in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-10078?focusedCommentId=13896823&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13896823]
>  earlier.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10191) Missing executable permission on viewfs internal dirs

2014-03-21 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943707#comment-13943707
 ] 

Chris Nauroth commented on HADOOP-10191:


bq. are you ever able to see the file status of an unresolved mount link.

Theoretically, yes, this is possible via things like the 
{{FileSystem#getFileLinkStatus}} API.  I don't see this overridden in 
{{ViewFileSystem}} though, which is possibly a distinct bug.  {{ViewFs}} 
appears to do it correctly.

Anyway, I'm still +1 for this patch.  I'll plan to commit later, and we can 
revisit discussion on some of these less critical aspects later if needed.

Thanks [~jira.shegalov]!

> Missing executable permission on viewfs internal dirs
> -
>
> Key: HADOOP-10191
> URL: https://issues.apache.org/jira/browse/HADOOP-10191
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Reporter: Gera Shegalov
>Priority: Blocker
> Attachments: HADOOP-10191.v01.patch
>
>
> ViewFileSystem allows 1) unconditional listing of internal directories (mount 
> points) and 2) changing working directories.
> 1) requires read permission
> 2) requires executable permission
> However, the hardcoded PERMISSION_RRR == 444 for a FileStatus representing an 
> internal dir does not have the executable bit set.
> This confuses the YARN localizer for public resources on viewfs because it 
> requires executable permission for "other" on all of the ancestor directories 
> of the resource. 
> {code}
> java.io.IOException: Resource viewfs:/pubcache/cache.txt is not publicly 
> accessable and as such cannot be part of the public cache.
> at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:182)
> at 
> org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:51)
> at org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:279)
> at org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:277)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10191) Missing executable permission on viewfs internal dirs

2014-03-21 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943698#comment-13943698
 ] 

Gera Shegalov commented on HADOOP-10191:


Thanks for your comments, Chris, Sanjay!

[~sanjay.radia], I think [~cnauroth] points out that mount links per se are 
like symlinks: they are neither real files nor directories, and therefore 
always have 777. Unfortunately, I don't have much time right now to verify the 
usefulness of this modification, i.e., whether you are ever able to see the 
file status of an unresolved mount link. 

> Missing executable permission on viewfs internal dirs
> -
>
> Key: HADOOP-10191
> URL: https://issues.apache.org/jira/browse/HADOOP-10191
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Reporter: Gera Shegalov
>Priority: Blocker
> Attachments: HADOOP-10191.v01.patch
>
>
> ViewFileSystem allows 1) unconditional listing of internal directories (mount 
> points) and 2) changing working directories.
> 1) requires read permission
> 2) requires executable permission
> However, the hardcoded PERMISSION_RRR == 444 for a FileStatus representing an 
> internal dir does not have the executable bit set.
> This confuses the YARN localizer for public resources on viewfs because it 
> requires executable permission for "other" on all of the ancestor directories 
> of the resource. 
> {code}
> java.io.IOException: Resource viewfs:/pubcache/cache.txt is not publicly 
> accessable and as such cannot be part of the public cache.
> at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:182)
> at 
> org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:51)
> at org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:279)
> at org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:277)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10191) Missing executable permission on viewfs internal dirs

2014-03-21 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10191:
---

Hadoop Flags: Reviewed

That seems inconsistent with the existing 777 symlinks, but I also don't think 
it's important enough to hold up a blocker fix for further debate.

+1 for the current patch.  I'll get this committed later today.

> Missing executable permission on viewfs internal dirs
> -
>
> Key: HADOOP-10191
> URL: https://issues.apache.org/jira/browse/HADOOP-10191
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Reporter: Gera Shegalov
>Priority: Blocker
> Attachments: HADOOP-10191.v01.patch
>
>
> ViewFileSystem allows 1) unconditional listing of internal directories (mount 
> points) and 2) changing working directories.
> 1) requires read permission
> 2) requires executable permission
> However, the hardcoded PERMISSION_RRR == 444 for a FileStatus representing an 
> internal dir does not have the executable bit set.
> This confuses the YARN localizer for public resources on viewfs because it 
> requires executable permission for "other" on all of the ancestor directories 
> of the resource. 
> {code}
> java.io.IOException: Resource viewfs:/pubcache/cache.txt is not publicly 
> accessable and as such cannot be part of the public cache.
> at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:182)
> at 
> org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:51)
> at org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:279)
> at org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:277)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10191) Missing executable permission on viewfs internal dirs

2014-03-21 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943680#comment-13943680
 ] 

Sanjay Radia commented on HADOOP-10191:
---

Chris, 0555 is more correct since the mount links are not writable (like other 
internal dirs).
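
The difference between 0444 and 0555 is easy to see with a small sketch 
(illustration only, not part of the patch):

{code}
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

public class InternalDirPermissionSketch {
  public static void main(String[] args) {
    // The hardcoded PERMISSION_RRR (0444): readable, but "other" cannot traverse.
    FsPermission rrr = new FsPermission((short) 0444);
    // 0555 adds the execute bit that directory traversal (and the YARN
    // localizer's public-resource check) requires on every ancestor dir.
    FsPermission rxrx = new FsPermission((short) 0555);

    System.out.println("0444 other-execute: "
        + rrr.getOtherAction().implies(FsAction.EXECUTE));   // false
    System.out.println("0555 other-execute: "
        + rxrx.getOtherAction().implies(FsAction.EXECUTE));  // true
  }
}
{code}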

> Missing executable permission on viewfs internal dirs
> -
>
> Key: HADOOP-10191
> URL: https://issues.apache.org/jira/browse/HADOOP-10191
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Reporter: Gera Shegalov
>Priority: Blocker
> Attachments: HADOOP-10191.v01.patch
>
>
> ViewFileSystem allows 1) unconditional listing of internal directories (mount 
> points) and 2) changing working directories.
> 1) requires read permission
> 2) requires executable permission
> However, the hardcoded PERMISSION_RRR == 444 for a FileStatus representing an 
> internal dir does not have the executable bit set.
> This confuses the YARN localizer for public resources on viewfs because it 
> requires executable permission for "other" on all of the ancestor directories 
> of the resource. 
> {code}
> java.io.IOException: Resource viewfs:/pubcache/cache.txt is not publicly 
> accessable and as such cannot be part of the public cache.
> at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:182)
> at 
> org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:51)
> at org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:279)
> at org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:277)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10410) Support ioprio_set in NativeIO

2014-03-21 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943681#comment-13943681
 ] 

Todd Lipcon commented on HADOOP-10410:
--

Why not get some experimental results, and then decide whether to commit it?

In loaded systems I've seen OS-level queues get pretty long - not just the disk 
queues. And the disk queues can be tuned down.

Given that HBase usually does short-circuit IO, it seems highly likely to me 
that the combination of (a) HBase compactions setting themselves to low 
priority, (b) setting HBase "get" IO to be high priority, and (c) reducing the 
disk queue length should result in some noticeable improvements with very 
little code change. Doing our own IO scheduling, however, is a much larger 
project, which to be effective may also require other deep changes like 
switching to O_DIRECT IO, etc. The other issue with the QoS pool on the DN is 
that many performance-sensitive applications are now short-circuiting the DN 
entirely, meaning that _any_ QoS we do has to be on the OS level, not the DN 
level.
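
At the OS level, items (a) and (b) boil down to a call like the following 
minimal, hypothetical sketch (invoking {{ioprio_set(2)}} through libc's 
{{syscall(3)}} via JNA; the syscall number 251 is x86-64 Linux only, and this 
is not the attached NativeIO patch):

{code}
import com.sun.jna.Library;
import com.sun.jna.Native;

public class IoprioSketch {
  interface CLib extends Library {
    CLib INSTANCE = Native.load("c", CLib.class);
    int syscall(int number, Object... args);  // varargs binding to syscall(3)
  }

  static final int SYS_IOPRIO_SET = 251;    // x86-64 syscall number only
  static final int IOPRIO_WHO_PROCESS = 1;  // target a single process/thread
  static final int IOPRIO_CLASS_BE = 2;     // best-effort scheduling class
  static final int IOPRIO_CLASS_SHIFT = 13; // per linux/ioprio.h

  public static void main(String[] args) {
    // Drop the current process to the lowest best-effort level (7),
    // roughly what a compaction thread would do to deprioritize itself.
    int prio = (IOPRIO_CLASS_BE << IOPRIO_CLASS_SHIFT) | 7;
    int rc = CLib.INSTANCE.syscall(SYS_IOPRIO_SET, IOPRIO_WHO_PROCESS, 0, prio);
    System.out.println(rc == 0 ? "io priority lowered" : "ioprio_set failed");
  }
}
{code}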

> Support ioprio_set in NativeIO
> --
>
> Key: HADOOP-10410
> URL: https://issues.apache.org/jira/browse/HADOOP-10410
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: native
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HADOOP-10410.txt
>
>
> It would be better for the HBase application if the HDFS layer provided a 
> fine-grained IO request priority. Most modern kernels should support the 
> ioprio_set system call now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2014-03-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943584#comment-13943584
 ] 

Hadoop QA commented on HADOOP-9361:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12636110/HADOOP-9361-007.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 75 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3692//console

This message is automatically generated.

> Strictly define the expected behavior of filesystem APIs and write tests to 
> verify compliance
> -
>
> Key: HADOOP-9361
> URL: https://issues.apache.org/jira/browse/HADOOP-9361
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
> HADOOP-9361-003.patch, HADOOP-9361-004.patch, HADOOP-9361-005.patch, 
> HADOOP-9361-006.patch, HADOOP-9361-007.patch
>
>
> {{FileSystem}} and {{FileContract}} aren't tested rigorously enough -while 
> HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
> don't.
> The only tests that are common are those of {{FileSystemContractTestBase}}, 
> which HADOOP-9258 shows is incomplete.
> I propose 
> # writing more tests which clarify expected behavior
> # testing operations in the interface being in their own JUnit4 test classes, 
> instead of one big test suite. 
> # Having each FS declare via a properties file what behaviors they offer, 
> such as atomic-rename, atomic-delete, umask, immediate-consistency -test 
> methods can downgrade to skipped test cases if a feature is missing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2014-03-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9361:
---

Status: Patch Available  (was: Open)

This patch will fail the rename file-on-file operations on the local and 
rawlocal FS, as it expects those operations to fail -but for these two 
filesystems they don't.

> Strictly define the expected behavior of filesystem APIs and write tests to 
> verify compliance
> -
>
> Key: HADOOP-9361
> URL: https://issues.apache.org/jira/browse/HADOOP-9361
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 2.2.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
> HADOOP-9361-003.patch, HADOOP-9361-004.patch, HADOOP-9361-005.patch, 
> HADOOP-9361-006.patch, HADOOP-9361-007.patch
>
>
> {{FileSystem}} and {{FileContract}} aren't tested rigorously enough -while 
> HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
> don't.
> The only tests that are common are those of {{FileSystemContractTestBase}}, 
> which HADOOP-9258 shows is incomplete.
> I propose 
> # writing more tests which clarify expected behavior
> # testing operations in the interface being in their own JUnit4 test classes, 
> instead of one big test suite. 
> # Having each FS declare via a properties file what behaviors they offer, 
> such as atomic-rename, atomic-delete, umask, immediate-consistency -test 
> methods can downgrade to skipped test cases if a feature is missing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2014-03-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9361:
---

Attachment: HADOOP-9361-007.patch

Updated patch

# explicit option for an FS to not fail on a {{seek()}} on a closed file, only 
on the read that follows.
# add raw local tests that bypass the {{ChecksumFilesystem}} -mostly to see 
where quirks lie -and a couple that go down to Java's {{File}} class just to 
make sure.
# more tests on rename behaviour

h2. Outstanding "ambiguities"

h3. Major: rename file onto file

On OS/X, LocalFS (and RawLocal) let you rename a source file over a destination 
file; the source data becomes accessible at the destination path. This is what 
{{mv}} does at the command line, so presumably it's legitimate, even if HDFS 
and the blobstore APIs all reject this.

This is a major difference in semantics between HDFS and Posix filesystems 
-code I've written that assumes renaming a file fails if the destination exists 
works well to implement some implicit concurrency control on HDFS, but would 
not work on a Posix FS.

We have the option of making the rename operation stricter by explicitly adding 
a check into rawlocal, but there's still a race condition between any check and 
the OS-level rename action. 

What does that leave? It leaves documenting this somewhere for end-users.
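
The divergence is observable with a small probe (a sketch; the /tmp paths are 
illustrative only):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RenameOntoFileProbe {
  public static void main(String[] args) throws Exception {
    // On the local FS this rename typically succeeds (Posix rename
    // overwrites); HDFS and the blobstore bindings reject renaming onto
    // an existing file.
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path src = new Path("/tmp/probe-src.txt");
    Path dst = new Path("/tmp/probe-dst.txt");
    fs.create(src).close();
    fs.create(dst).close();
    System.out.println("rename over existing file -> " + fs.rename(src, dst));
  }
}
{code}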


h3. Minor:

RawLocal returns true if you attempt to delete a nonexistent path; everything 
else (including {{File.delete()}}) returns false. This behaviour comes from 
{{FileUtil.fullyDelete(f)}}, which does not check for the file's existence 
first, and when it gets false back from {{File.delete()}} it generates the 
return code {{!File.exists()}}. That is, the semantics of fullyDelete are 
"return true if there is no directory at the end of the operation".


More subtly, there is a small race condition where you could accidentally 
recursively delete a directory by attempting to delete a nonexistent file while 
another process is creating a directory of the same name.

This is because the checks for {{!isFile()}} and {{!recursive}} are false when 
the file does not exist, but by the time the delete operation starts, the 
rename has created a directory tree.

{code}
File f = pathToFile(p);
if (f.isFile()) {
  return f.delete();
} else if (!recursive && f.isDirectory() &&
    (FileUtil.listFiles(f).length != 0)) {
  throw new IOException("Directory " + f.toString() + " is not empty");
}
return FileUtil.fullyDelete(f);
{code}

Adding an existence check at the start of the sequence produces a consistent 
return code and eliminates this aspect of the race condition -this patch does 
exactly that.
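
In sketch form (the assumed shape of the check, not the exact patch hunk):

{code}
// Hypothetical early exit at the top of the delete(..) shown above: a
// missing path now yields a consistent false before the recursive-delete
// machinery -and its race window- is ever reached.
File f = pathToFile(p);
if (!f.exists()) {
  return false;
}
{code}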

There is still a minor race: adding an entry to a directory after the empty 
check and before the fullyDelete call. 

The recursive flag logic should really be moved into a {{FileUtil}} method 
itself.

> Strictly define the expected behavior of filesystem APIs and write tests to 
> verify compliance
> -
>
> Key: HADOOP-9361
> URL: https://issues.apache.org/jira/browse/HADOOP-9361
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
> HADOOP-9361-003.patch, HADOOP-9361-004.patch, HADOOP-9361-005.patch, 
> HADOOP-9361-006.patch, HADOOP-9361-007.patch
>
>
> {{FileSystem}} and {{FileContract}} aren't tested rigorously enough -while 
> HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
> don't.
> The only tests that are common are those of {{FileSystemContractTestBase}}, 
> which HADOOP-9258 shows is incomplete.
> I propose 
> # writing more tests which clarify expected behavior
> # testing operations in the interface being in their own JUnit4 test classes, 
> instead of one big test suite. 
> # Having each FS declare via a properties file what behaviors they offer, 
> such as atomic-rename, atomic-delete, umask, immediate-consistency -test 
> methods can downgrade to skipped test cases if a feature is missing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2014-03-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9361:
---

Target Version/s:   (was: )
  Status: Open  (was: Patch Available)

> Strictly define the expected behavior of filesystem APIs and write tests to 
> verify compliance
> -
>
> Key: HADOOP-9361
> URL: https://issues.apache.org/jira/browse/HADOOP-9361
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 2.2.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
> HADOOP-9361-003.patch, HADOOP-9361-004.patch, HADOOP-9361-005.patch, 
> HADOOP-9361-006.patch
>
>
> {{FileSystem}} and {{FileContract}} aren't tested rigorously enough -while 
> HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
> don't.
> The only tests that are common are those of {{FileSystemContractTestBase}}, 
> which HADOOP-9258 shows is incomplete.
> I propose 
> # writing more tests which clarify expected behavior
> # testing operations in the interface being in their own JUnit4 test classes, 
> instead of one big test suite. 
> # Having each FS declare via a properties file what behaviors they offer, 
> such as atomic-rename, atomic-delete, umask, immediate-consistency -test 
> methods can downgrade to skipped test cases if a feature is missing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10410) Support ioprio_set in NativeIO

2014-03-21 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943363#comment-13943363
 ] 

Haohui Mai commented on HADOOP-10410:
-

bq. If i want to let some online users read/write HBase more timely ...

That's exactly the point I'm making. The patch is for your particular use case 
on a specific platform in a specific setting (HBase on Linux). Though it is 
valuable to make that use case work, the HDFS APIs have to be carefully 
designed to support other use cases (e.g., MR) and other platforms.

I have raised my concerns that the design is yet to be completed. IMHO I don't 
think committing to any mechanisms like {{ioprio_*}} before going through the 
design is a good idea.

bq. Do you agree then that his suggestion of a QoS pool, HDFS-5727, is the 
right way to proceed?

Detailed designs are welcome.

> Support ioprio_set in NativeIO
> --
>
> Key: HADOOP-10410
> URL: https://issues.apache.org/jira/browse/HADOOP-10410
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: native
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HADOOP-10410.txt
>
>
> It would be better for the HBase application if the HDFS layer provided a 
> fine-grained IO request priority. Most modern kernels should support the 
> ioprio_set system call now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-7426) User Guide for how to use viewfs with federation

2014-03-21 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-7426:
--

Hadoop Flags: Reviewed

> User Guide for how to use viewfs with federation
> 
>
> Key: HADOOP-7426
> URL: https://issues.apache.org/jira/browse/HADOOP-7426
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, viewfs
>Affects Versions: 2.0.0-alpha
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
>Priority: Minor
> Attachments: Viewfs Guide.pdf, c7426_20111214.patch, 
> c7426_20111215.patch, c7426_20111215b.patch, c7426_20111218.patch, 
> c7426_20111220.patch, c7426_20111220_site.tar.gz, c7426_20140320.patch, 
> c7426_20140320b.patch, c7426_20140321.patch, c7426_20140321b.patch, 
> viewfs_TypicalMountTable.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-7426) User Guide for how to use viewfs with federation

2014-03-21 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-7426:
--

Attachment: c7426_20140321b.patch

Thanks for those additional fixes in the Federation document too.  The curly 
braces were just slightly off in the ViewFS link, so the hyperlink wasn't 
working correctly.  I'm uploading a patch to correct that.  I built the site 
with this version, and everything looked good.

I'm +1 for this version of the patch.


> User Guide for how to use viewfs with federation
> 
>
> Key: HADOOP-7426
> URL: https://issues.apache.org/jira/browse/HADOOP-7426
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, viewfs
>Affects Versions: 2.0.0-alpha
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
>Priority: Minor
> Attachments: Viewfs Guide.pdf, c7426_20111214.patch, 
> c7426_20111215.patch, c7426_20111215b.patch, c7426_20111218.patch, 
> c7426_20111220.patch, c7426_20111220_site.tar.gz, c7426_20140320.patch, 
> c7426_20140320b.patch, c7426_20140321.patch, c7426_20140321b.patch, 
> viewfs_TypicalMountTable.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10398) KerberosAuthenticator failed to fall back to PseudoAuthenticator after HADOOP-10078

2014-03-21 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943337#comment-13943337
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-10398:
--

If we could fix the bug, yes.  Otherwise, using the workaround may not be a 
bad idea.

[~bowenzhangusa], in your test, the web client is the Oozie CLI; what is the 
web server?  Is it an Oozie server or a NameNode?

> KerberosAuthenticator failed to fall back to PseudoAuthenticator after 
> HADOOP-10078
> ---
>
> Key: HADOOP-10398
> URL: https://issues.apache.org/jira/browse/HADOOP-10398
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: a.txt, c10398_20140310.patch
>
>
> {code}
> //KerberosAuthenticator.java
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
> LOG.debug("JDK performed authentication on our behalf.");
> // If the JDK already did the SPNEGO back-and-forth for
> // us, just pull out the token.
> AuthenticatedURL.extractToken(conn, token);
> return;
>   } else ...
> {code}
> The problem with the code above is that HTTP_OK does not imply that 
> authentication completed.  We should check whether the token can be extracted 
> successfully.
> This problem was reported by [~bowenzhangusa] in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-10078?focusedCommentId=13896823&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13896823]
>  earlier.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10410) Support ioprio_set in NativeIO

2014-03-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943307#comment-13943307
 ] 

stack commented on HADOOP-10410:


bq.  I hope we can just do technical discussion in the jira. 

We can.

[~xieliang007] I think [~wheat9] may not be up on how hbase uses hdfs.  Give 
him some slack.  Do you agree then that his suggestion of a QoS pool, 
HDFS-5727, is the right way to proceed?

Thanks.

> Support ioprio_set in NativeIO
> --
>
> Key: HADOOP-10410
> URL: https://issues.apache.org/jira/browse/HADOOP-10410
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: native
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HADOOP-10410.txt
>
>
> It would be better for the HBase application if the HDFS layer provided a 
> fine-grained IO request priority. Most modern kernels should support the 
> ioprio_set system call now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-7426) User Guide for how to use viewfs with federation

2014-03-21 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-7426:


Attachment: c7426_20140321.patch

Thanks Chris.  Here is a patch addressing your comments.

c7426_20140321.patch

> User Guide for how to use viewfs with federation
> 
>
> Key: HADOOP-7426
> URL: https://issues.apache.org/jira/browse/HADOOP-7426
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, viewfs
>Affects Versions: 2.0.0-alpha
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
>Priority: Minor
> Attachments: Viewfs Guide.pdf, c7426_20111214.patch, 
> c7426_20111215.patch, c7426_20111215b.patch, c7426_20111218.patch, 
> c7426_20111220.patch, c7426_20111220_site.tar.gz, c7426_20140320.patch, 
> c7426_20140320b.patch, c7426_20140321.patch, viewfs_TypicalMountTable.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-7426) User Guide for how to use viewfs with federation

2014-03-21 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-7426:
--

Component/s: documentation

> User Guide for how to use viewfs with federation
> 
>
> Key: HADOOP-7426
> URL: https://issues.apache.org/jira/browse/HADOOP-7426
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, viewfs
>Affects Versions: 2.0.0-alpha
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
>Priority: Minor
> Attachments: Viewfs Guide.pdf, c7426_20111214.patch, 
> c7426_20111215.patch, c7426_20111215b.patch, c7426_20111218.patch, 
> c7426_20111220.patch, c7426_20111220_site.tar.gz, c7426_20140320.patch, 
> c7426_20140320b.patch, viewfs_TypicalMountTable.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-7426) User Guide for how to use viewfs with federation

2014-03-21 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-7426:
--

Target Version/s:   (was: )
  Status: Patch Available  (was: Open)

> User Guide for how to use viewfs with federation
> 
>
> Key: HADOOP-7426
> URL: https://issues.apache.org/jira/browse/HADOOP-7426
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, viewfs
>Affects Versions: 2.0.0-alpha
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
>Priority: Minor
> Attachments: Viewfs Guide.pdf, c7426_20111214.patch, 
> c7426_20111215.patch, c7426_20111215b.patch, c7426_20111218.patch, 
> c7426_20111220.patch, c7426_20111220_site.tar.gz, c7426_20140320.patch, 
> c7426_20140320b.patch, viewfs_TypicalMountTable.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10398) KerberosAuthenticator failed to fall back to PseudoAuthenticator after HADOOP-10078

2014-03-21 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943245#comment-13943245
 ] 

Robert Kanter commented on HADOOP-10398:


{quote}But the special ANONYMOUS token is not found in Bowen's test. It is the 
bug.{quote}
Then shouldn't we fix this instead of doing a workaround?

> KerberosAuthenticator failed to fall back to PseudoAuthenticator after 
> HADOOP-10078
> ---
>
> Key: HADOOP-10398
> URL: https://issues.apache.org/jira/browse/HADOOP-10398
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: a.txt, c10398_20140310.patch
>
>
> {code}
> //KerberosAuthenticator.java
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
> LOG.debug("JDK performed authentication on our behalf.");
> // If the JDK already did the SPNEGO back-and-forth for
> // us, just pull out the token.
> AuthenticatedURL.extractToken(conn, token);
> return;
>   } else ...
> {code}
> The problem with the code above is that HTTP_OK does not imply that 
> authentication completed.  We should check whether the token can be extracted 
> successfully.
> This problem was reported by [~bowenzhangusa] in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-10078?focusedCommentId=13896823&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13896823]
>  earlier.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10398) KerberosAuthenticator failed to fall back to PseudoAuthenticator after HADOOP-10078

2014-03-21 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943238#comment-13943238
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-10398:
--

But the special ANONYMOUS token is not found in Bowen's test.  It is the bug.

This is the reason that my patch works (as a workaround) -- it checks whether 
a token was set in the response.  If there is no token, fall back to 
PseudoAuthenticator.
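A minimal sketch of that check (illustrative only, not the attached patch; it 
assumes the surrounding KerberosAuthenticator context quoted in the description 
below):
{code}
if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
  AuthenticatedURL.extractToken(conn, token);
  if (token.isSet()) {
    // A token was actually issued: the JDK really did SPNEGO for us.
    return;
  }
  // HTTP_OK but no token set: the request was served anonymously, so
  // fall back to PseudoAuthenticator instead of assuming SPNEGO succeeded.
}
{code}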

> KerberosAuthenticator failed to fall back to PseudoAuthenticator after 
> HADOOP-10078
> ---
>
> Key: HADOOP-10398
> URL: https://issues.apache.org/jira/browse/HADOOP-10398
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: a.txt, c10398_20140310.patch
>
>
> {code}
> //KerberosAuthenticator.java
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
> LOG.debug("JDK performed authentication on our behalf.");
> // If the JDK already did the SPNEGO back-and-forth for
> // us, just pull out the token.
> AuthenticatedURL.extractToken(conn, token);
> return;
>   } else ...
> {code}
> The problem with the code above is that HTTP_OK does not imply that 
> authentication completed.  We should check whether the token can be extracted 
> successfully.
> This problem was reported by [~bowenzhangusa] in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-10078?focusedCommentId=13896823&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13896823]
>  earlier.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-7426) User Guide for how to use viewfs with federation

2014-03-21 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943233#comment-13943233
 ] 

Chris Nauroth commented on HADOOP-7426:
---

Sanjay and Nicholas, this looks great!  A few comments:
# There is mention of "merge mounts (not implemented yet)" and a reference to 
the appendix for more details, but the appendix doesn't discuss merge mounts 
any further.  Shall we just remove mention of merge mounts from the 
documentation for now?
# In the FAQ, questions 7 and 8 don't have answers.
# There are hyperlinks to Appendix A that don't actually jump to Appendix A 
when I click them, so I think their anchors are wrong.
# The ViewFS page links to the Federation page.  Similarly, I'd like the 
Federation page to link to the ViewFS page, because the two features work well 
together.

> User Guide for how to use viewfs with federation
> 
>
> Key: HADOOP-7426
> URL: https://issues.apache.org/jira/browse/HADOOP-7426
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, viewfs
>Affects Versions: 2.0.0-alpha
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
>Priority: Minor
> Attachments: Viewfs Guide.pdf, c7426_20111214.patch, 
> c7426_20111215.patch, c7426_20111215b.patch, c7426_20111218.patch, 
> c7426_20111220.patch, c7426_20111220_site.tar.gz, c7426_20140320.patch, 
> c7426_20140320b.patch, viewfs_TypicalMountTable.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10398) KerberosAuthenticator failed to fall back to PseudoAuthenticator after HADOOP-10078

2014-03-21 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943227#comment-13943227
 ] 

Robert Kanter commented on HADOOP-10398:


{quote}What is the special token you are talking about?{quote}
If you look in the AuthenticationToken class, there's a token called 
{{ANONYMOUS}} and some special handling for it.  Also look at the 
AuthenticationFilter class.  
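For reference, the special handling is roughly this (paraphrased from 
AuthenticationFilter, not quoted verbatim): the signed auth cookie is only 
created for non-anonymous tokens, so an anonymous request can come back 
HTTP_OK with no token in the response.
{code}
if (newToken && !token.isExpired() && token != AuthenticationToken.ANONYMOUS) {
  // Only non-anonymous tokens are signed and sent back to the client.
  String signedToken = signer.sign(token.toString());
  httpResponse.addCookie(createCookie(signedToken));
}
{code}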

> KerberosAuthenticator failed to fall back to PseudoAuthenticator after 
> HADOOP-10078
> ---
>
> Key: HADOOP-10398
> URL: https://issues.apache.org/jira/browse/HADOOP-10398
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: a.txt, c10398_20140310.patch
>
>
> {code}
> //KerberosAuthenticator.java
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
> LOG.debug("JDK performed authentication on our behalf.");
> // If the JDK already did the SPNEGO back-and-forth for
> // us, just pull out the token.
> AuthenticatedURL.extractToken(conn, token);
> return;
>   } else ...
> {code}
> The problem with the code above is that HTTP_OK does not imply that 
> authentication completed.  We should check whether the token can be extracted 
> successfully.
> This problem was reported by [~bowenzhangusa] in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-10078?focusedCommentId=13896823&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13896823]
>  earlier.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10015) UserGroupInformation prints out excessive ERROR warnings

2014-03-21 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943221#comment-13943221
 ] 

Suresh Srinivas commented on HADOOP-10015:
--

Comment: an isDebugEnabled check is needed around the section that prints the 
debug log.
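For example, the usual guard pattern (the log message here is illustrative, 
not the exact line in the patch):
{code}
if (LOG.isDebugEnabled()) {
  // Build the potentially expensive message only when DEBUG is enabled.
  LOG.debug("PrivilegedAction as:" + this + " from:"
      + StringUtils.getStackTrace(new Throwable()));
}
{code}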

> UserGroupInformation prints out excessive ERROR warnings
> 
>
> Key: HADOOP-10015
> URL: https://issues.apache.org/jira/browse/HADOOP-10015
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Nicolas Liochon
> Fix For: 3.0.0, 2.3.0, 2.4.0
>
> Attachments: 10015.v3.patch, 10015.v4.patch, 10015.v5.patch, 
> HADOOP-10015.000.patch, HADOOP-10015.001.patch, HADOOP-10015.002.patch
>
>
> In UserGroupInformation::doAs(), it prints out a log at ERROR level whenever 
> it catches an exception.
> However, it prints benign warnings in the following paradigm:
> {noformat}
> try {
>   ugi.doAs(new PrivilegedExceptionAction<FileStatus>() {
>     @Override
>     public FileStatus run() throws Exception {
>       return fs.getFileStatus(nonExist);
>     }
>   });
> } catch (FileNotFoundException e) {
> }
> {noformat}
> For example, FileSystem#exists() follows this paradigm. Distcp uses this 
> paradigm too. The exception is expected, so there should be no ERROR logs 
> printed in the namenode logs.
> Currently, the user quickly finds that the namenode log fills with _benign_ 
> ERROR logs when he or she runs distcp in a secure setup. This behavior 
> confuses operators.
> This jira proposes to move the log to DEBUG level.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10398) KerberosAuthenticator failed to fall back to PseudoAuthenticator after HADOOP-10078

2014-03-21 Thread Bowen Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943219#comment-13943219
 ] 

Bowen Zhang commented on HADOOP-10398:
--

[~tucu00], when we disable anonymous requests, the code works since
{code}
if (conn.getResponseCode() == HttpURLConnection.HTTP_OK)
{code}
evaluates to false because we get a 401 back. When we allow anonymous access, 
the above if statement evaluates to true, but there is no token. What is the 
special token you are talking about?

> KerberosAuthenticator failed to fall back to PseudoAuthenticator after 
> HADOOP-10078
> ---
>
> Key: HADOOP-10398
> URL: https://issues.apache.org/jira/browse/HADOOP-10398
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: a.txt, c10398_20140310.patch
>
>
> {code}
> //KerberosAuthenticator.java
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
> LOG.debug("JDK performed authentication on our behalf.");
> // If the JDK already did the SPNEGO back-and-forth for
> // us, just pull out the token.
> AuthenticatedURL.extractToken(conn, token);
> return;
>   } else ...
> {code}
> The problem with the code above is that HTTP_OK does not imply that 
> authentication completed.  We should check whether the token can be extracted 
> successfully.
> This problem was reported by [~bowenzhangusa] in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-10078?focusedCommentId=13896823&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13896823]
>  earlier.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10015) UserGroupInformation prints out excessive ERROR warnings

2014-03-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943169#comment-13943169
 ] 

Hadoop QA commented on HADOOP-10015:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12636029/10015.v5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3688//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3688//console

This message is automatically generated.

> UserGroupInformation prints out excessive ERROR warnings
> 
>
> Key: HADOOP-10015
> URL: https://issues.apache.org/jira/browse/HADOOP-10015
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Nicolas Liochon
> Fix For: 3.0.0, 2.3.0, 2.4.0
>
> Attachments: 10015.v3.patch, 10015.v4.patch, 10015.v5.patch, 
> HADOOP-10015.000.patch, HADOOP-10015.001.patch, HADOOP-10015.002.patch
>
>
> In UserGroupInformation::doAs(), it prints out a log at ERROR level whenever 
> it catches an exception.
> However, it prints benign warnings in the following paradigm:
> {noformat}
> try {
>   ugi.doAs(new PrivilegedExceptionAction<FileStatus>() {
>     @Override
>     public FileStatus run() throws Exception {
>       return fs.getFileStatus(nonExist);
>     }
>   });
> } catch (FileNotFoundException e) {
> }
> {noformat}
> For example, FileSystem#exists() follows this paradigm. Distcp uses this 
> paradigm too. The exception is expected, so there should be no ERROR logs 
> printed in the namenode logs.
> Currently, the user quickly finds that the namenode log fills with _benign_ 
> ERROR logs when he or she runs distcp in a secure setup. This behavior 
> confuses operators.
> This jira proposes to move the log to DEBUG level.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10015) UserGroupInformation prints out excessive ERROR warnings

2014-03-21 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HADOOP-10015:
-

Fix Version/s: 2.3.0
   2.4.0
   3.0.0

> UserGroupInformation prints out excessive ERROR warnings
> 
>
> Key: HADOOP-10015
> URL: https://issues.apache.org/jira/browse/HADOOP-10015
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Nicolas Liochon
> Fix For: 3.0.0, 2.3.0, 2.4.0
>
> Attachments: 10015.v3.patch, 10015.v4.patch, 10015.v5.patch, 
> HADOOP-10015.000.patch, HADOOP-10015.001.patch, HADOOP-10015.002.patch
>
>
> In UserGroupInformation::doAs(), it prints out a log at ERROR level whenever 
> it catches an exception.
> However, it prints benign warnings in the following paradigm:
> {noformat}
> try {
>   ugi.doAs(new PrivilegedExceptionAction<FileStatus>() {
>     @Override
>     public FileStatus run() throws Exception {
>       return fs.getFileStatus(nonExist);
>     }
>   });
> } catch (FileNotFoundException e) {
> }
> {noformat}
> For example, FileSystem#exists() follows this paradigm. Distcp uses this 
> paradigm too. The exception is expected, so there should be no ERROR logs 
> printed in the namenode logs.
> Currently, the user quickly finds that the namenode log fills with _benign_ 
> ERROR logs when he or she runs distcp in a secure setup. This behavior 
> confuses operators.
> This jira proposes to move the log to DEBUG level.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10015) UserGroupInformation prints out excessive ERROR warnings

2014-03-21 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HADOOP-10015:
-

Affects Version/s: 3.0.0

> UserGroupInformation prints out excessive ERROR warnings
> 
>
> Key: HADOOP-10015
> URL: https://issues.apache.org/jira/browse/HADOOP-10015
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Nicolas Liochon
> Attachments: 10015.v3.patch, 10015.v4.patch, 10015.v5.patch, 
> HADOOP-10015.000.patch, HADOOP-10015.001.patch, HADOOP-10015.002.patch
>
>
> In UserGroupInformation::doAs(), it prints out a log at ERROR level whenever 
> it catches an exception.
> However, it prints benign warnings in the following paradigm:
> {noformat}
> try {
>   ugi.doAs(new PrivilegedExceptionAction<FileStatus>() {
>     @Override
>     public FileStatus run() throws Exception {
>       return fs.getFileStatus(nonExist);
>     }
>   });
> } catch (FileNotFoundException e) {
> }
> {noformat}
> For example, FileSystem#exists() follows this paradigm. Distcp uses this 
> paradigm too. The exception is expected, so there should be no ERROR logs 
> printed in the namenode logs.
> Currently, the user quickly finds that the namenode log fills with _benign_ 
> ERROR logs when he or she runs distcp in a secure setup. This behavior 
> confuses operators.
> This jira proposes to move the log to DEBUG level.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10015) UserGroupInformation prints out excessive ERROR warnings

2014-03-21 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13943116#comment-13943116
 ] 

Nicolas Liochon commented on HADOOP-10015:
--

So here is the v5, hopefully with a limited regression risk :-)

> UserGroupInformation prints out excessive ERROR warnings
> 
>
> Key: HADOOP-10015
> URL: https://issues.apache.org/jira/browse/HADOOP-10015
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Nicolas Liochon
> Attachments: 10015.v3.patch, 10015.v4.patch, 10015.v5.patch, 
> HADOOP-10015.000.patch, HADOOP-10015.001.patch, HADOOP-10015.002.patch
>
>
> In UserGroupInformation::doAs(), it prints out a log at ERROR level whenever 
> it catches an exception.
> However, it prints benign warnings in the following paradigm:
> {noformat}
> try {
>   ugi.doAs(new PrivilegedExceptionAction<FileStatus>() {
>     @Override
>     public FileStatus run() throws Exception {
>       return fs.getFileStatus(nonExist);
>     }
>   });
> } catch (FileNotFoundException e) {
> }
> {noformat}
> For example, FileSystem#exists() follows this paradigm. Distcp uses this 
> paradigm too. The exception is expected, so there should be no ERROR logs 
> printed in the namenode logs.
> Currently, the user quickly finds that the namenode log fills with _benign_ 
> ERROR logs when he or she runs distcp in a secure setup. This behavior 
> confuses operators.
> This jira proposes to move the log to DEBUG level.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10015) UserGroupInformation prints out excessive ERROR warnings

2014-03-21 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HADOOP-10015:
-

Status: Patch Available  (was: Open)

> UserGroupInformation prints out excessive ERROR warnings
> 
>
> Key: HADOOP-10015
> URL: https://issues.apache.org/jira/browse/HADOOP-10015
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Nicolas Liochon
> Attachments: 10015.v3.patch, 10015.v4.patch, 10015.v5.patch, 
> HADOOP-10015.000.patch, HADOOP-10015.001.patch, HADOOP-10015.002.patch
>
>
> In UserGroupInformation::doAs(), it prints out a log at ERROR level whenever 
> it catches an exception.
> However, it prints benign warnings in the following paradigm:
> {noformat}
> try {
>   ugi.doAs(new PrivilegedExceptionAction<FileStatus>() {
>     @Override
>     public FileStatus run() throws Exception {
>       return fs.getFileStatus(nonExist);
>     }
>   });
> } catch (FileNotFoundException e) {
> }
> {noformat}
> For example, FileSystem#exists() follows this paradigm. Distcp uses this 
> paradigm too. The exception is expected, so there should be no ERROR logs 
> printed in the namenode logs.
> Currently, the user quickly finds that the namenode log fills with _benign_ 
> ERROR logs when he or she runs distcp in a secure setup. This behavior 
> confuses operators.
> This jira proposes to move the log to DEBUG level.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10015) UserGroupInformation prints out excessive ERROR warnings

2014-03-21 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HADOOP-10015:
-

Status: Open  (was: Patch Available)

> UserGroupInformation prints out excessive ERROR warnings
> 
>
> Key: HADOOP-10015
> URL: https://issues.apache.org/jira/browse/HADOOP-10015
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Nicolas Liochon
> Attachments: 10015.v3.patch, 10015.v4.patch, HADOOP-10015.000.patch, 
> HADOOP-10015.001.patch, HADOOP-10015.002.patch
>
>
> In UserGroupInformation::doAs(), it prints out a log at ERROR level whenever 
> it catches an exception.
> However, it prints benign warnings in the following paradigm:
> {noformat}
> try {
>   ugi.doAs(new PrivilegedExceptionAction<FileStatus>() {
>     @Override
>     public FileStatus run() throws Exception {
>       return fs.getFileStatus(nonExist);
>     }
>   });
> } catch (FileNotFoundException e) {
> }
> {noformat}
> For example, FileSystem#exists() follows this paradigm. Distcp uses this 
> paradigm too. The exception is expected, so there should be no ERROR logs 
> printed in the namenode logs.
> Currently, the user quickly finds that the namenode log fills with _benign_ 
> ERROR logs when he or she runs distcp in a secure setup. This behavior 
> confuses operators.
> This jira proposes to move the log to DEBUG level.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10015) UserGroupInformation prints out excessive ERROR warnings

2014-03-21 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HADOOP-10015:
-

Attachment: 10015.v5.patch

> UserGroupInformation prints out excessive ERROR warnings
> 
>
> Key: HADOOP-10015
> URL: https://issues.apache.org/jira/browse/HADOOP-10015
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Nicolas Liochon
> Attachments: 10015.v3.patch, 10015.v4.patch, 10015.v5.patch, 
> HADOOP-10015.000.patch, HADOOP-10015.001.patch, HADOOP-10015.002.patch
>
>
> In UserGroupInformation::doAs(), it prints out a log at ERROR level whenever 
> it catches an exception.
> However, it prints benign warnings in the following paradigm:
> {noformat}
> try {
>   ugi.doAs(new PrivilegedExceptionAction<FileStatus>() {
>     @Override
>     public FileStatus run() throws Exception {
>       return fs.getFileStatus(nonExist);
>     }
>   });
> } catch (FileNotFoundException e) {
> }
> {noformat}
> For example, FileSystem#exists() follows this paradigm. Distcp uses this 
> paradigm too. The exception is expected, so there should be no ERROR logs 
> printed in the namenode logs.
> Currently, the user quickly finds that the namenode log fills with _benign_ 
> ERROR logs when he or she runs distcp in a secure setup. This behavior 
> confuses operators.
> This jira proposes to move the log to DEBUG level.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10150) Hadoop cryptographic file system

2014-03-21 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-10150:


Attachment: cfs.patch
extended information based on INode feature.patch
HADOOP cryptographic file system-V2.docx

The update includes two patches. 
We add “fs.encryption” and “fs.encryption.dirs” properties in core-site.xml. If 
“fs.encryption=true”, the filesystem is encrypted, and “fs.encryption.dirs” 
indicates which directories are configured as encrypted. The URL (fs.defaultFS) 
in core-site.xml is not modified, so CFS is transparent to upper-layer 
applications.
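For illustration, the new properties could be read along these lines (a sketch 
with assumed defaults; the accessors in the actual patch may differ):
{code}
Configuration conf = new Configuration();
// Whether CFS encryption is enabled at all (assumed default: false).
boolean encryptionEnabled = conf.getBoolean("fs.encryption", false);
// The list of directories configured as encrypted.
String[] encryptedDirs = conf.getStrings("fs.encryption.dirs");
{code}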

Each encrypted file has a separate IV, and each configured encryption directory 
has a data key. HDFS-2006 is expected to be used to save the IV and data key, 
but it’s not ready currently, so we implement extended information based on the 
INode feature and use it to store the data key and IV. In our case, only 
directories and files which are configured as encrypted need this feature; if 
there are 1,000,000 encrypted files, about 8MB of memory is required, so this 
information is stored in the NN’s memory and will be serialized to the edit log 
and finally to the FSImage.

For key management, we use the key provider API from HADOOP-10141. For key 
rotation, the data key will be decrypted using the original master key and then 
re-encrypted using the new master key.
For more information, please refer to the updated design doc.
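The rotation step described above can be pictured with standard JCE key 
wrapping (a self-contained sketch under assumed key sizes; the actual 
implementation presumably goes through the HADOOP-10141 key provider API):
{code}
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyRotationSketch {
  public static void main(String[] args) throws Exception {
    KeyGenerator kg = KeyGenerator.getInstance("AES");
    kg.init(128);
    SecretKey oldMaster = kg.generateKey();
    SecretKey newMaster = kg.generateKey();
    SecretKey dataKey = kg.generateKey();

    // Wrap (encrypt) the data key under the old master key.
    Cipher c = Cipher.getInstance("AESWrap");
    c.init(Cipher.WRAP_MODE, oldMaster);
    byte[] wrapped = c.wrap(dataKey);

    // Rotation: unwrap with the old master key, re-wrap with the new one.
    c.init(Cipher.UNWRAP_MODE, oldMaster);
    SecretKey recovered = (SecretKey) c.unwrap(wrapped, "AES", Cipher.SECRET_KEY);
    c.init(Cipher.WRAP_MODE, newMaster);
    byte[] rewrapped = c.wrap(recovered);
  }
}
{code}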
The first part of the patch is the “extended information” based on the INode 
feature, used to save the IV and data key. The second part is the CFS patch 
itself.

I’m splitting these patches into the sub-JIRAs.

> Hadoop cryptographic file system
> 
>
> Key: HADOOP-10150
> URL: https://issues.apache.org/jira/browse/HADOOP-10150
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Yi Liu
>  Labels: rhino
> Fix For: 3.0.0
>
> Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
> system-V2.docx, HADOOP cryptographic file system.pdf, cfs.patch, extended 
> information based on INode feature.patch
>
>
> There is an increasing need for securing data when Hadoop customers use 
> various upper-layer applications, such as Map-Reduce, Hive, Pig, HBase and so 
> on.
> HADOOP CFS (HADOOP Cryptographic File System) is used to secure data, based 
> on HADOOP “FilterFileSystem” decorating DFS or other file systems, and is 
> transparent to upper-layer applications. It’s configurable, scalable and fast.
> High level requirements:
> 1. Transparent to, and no modification required for, upper-layer 
> applications.
> 2. “Seek” and “PositionedReadable” are supported for the CFS input stream if 
> the wrapped file system supports them.
> 3. Very high performance for encryption and decryption; they will not 
> become a bottleneck.
> 4. Can decorate HDFS and all other file systems in Hadoop, and will not 
> modify the existing structure of the file system, such as the namenode and 
> datanode structure if the wrapped file system is HDFS.
> 5. Admins can configure encryption policies, such as which directories will 
> be encrypted.
> 6. A robust key management framework.
> 7. Support Pread and append operations if the wrapped file system supports 
> them.
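A minimal illustration of the decoration approach described above (the class 
name and pass-through body are assumptions for illustration, not the actual 
CFS patch):
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FilterFileSystem;
import org.apache.hadoop.fs.Path;

public class CryptoFileSystemSketch extends FilterFileSystem {
  @Override
  public FSDataInputStream open(Path f, int bufferSize) throws IOException {
    // A real CFS would wrap the underlying stream with a decrypting,
    // seekable stream here; the pass-through below just marks the spot.
    return super.open(f, bufferSize);
  }
}
{code}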



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10410) Support ioprio_set in NativeIO

2014-03-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13942854#comment-13942854
 ] 

Jing Zhao commented on HADOOP-10410:


bq. Oops, are u kidding ? :)
I think [~wheat9] was having a serious discussion here. I hope we can keep the 
discussion in this jira technical. These words make people like me 
uncomfortable, even though you added a ":)" there. But of course, maybe it's 
just because I'm also a non-native speaker.

> Support ioprio_set in NativeIO
> --
>
> Key: HADOOP-10410
> URL: https://issues.apache.org/jira/browse/HADOOP-10410
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: native
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HADOOP-10410.txt
>
>
> It would be beneficial to HBase applications if the HDFS layer provided 
> fine-grained IO request priority. Most modern kernels should support the 
> ioprio_set system call now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)