[jira] [Commented] (HADOOP-11771) Configuration::getClassByNameOrNull synchronizes on a static object

2015-03-27 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14385168#comment-14385168
 ] 

Gopal V commented on HADOOP-11771:
--

The cache is of dubious value for the default class-loader. Is there a way to 
disable the cache altogether?

> Configuration::getClassByNameOrNull synchronizes on a static object
> ---
>
> Key: HADOOP-11771
> URL: https://issues.apache.org/jira/browse/HADOOP-11771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf, io, ipc
>Reporter: Gopal V
> Attachments: configuration-cache-bt.png, configuration-sync-cache.png
>
>
> {code}
>   private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
>     CACHE_CLASSES = new WeakHashMap<ClassLoader, Map<String, WeakReference<Class<?>>>>();
> ...
>   synchronized (CACHE_CLASSES) {
>     map = CACHE_CLASSES.get(classLoader);
>     if (map == null) {
>       map = Collections.synchronizedMap(
>         new WeakHashMap<String, WeakReference<Class<?>>>());
>       CACHE_CLASSES.put(classLoader, map);
>     }
>   }
> {code}
> !configuration-sync-cache.png!
> !configuration-cache-bt.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11773) KMSClientProvider#addDelegationTokens should use ActualUgi for proxy user authentication

2015-03-27 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-11773:


This has been fixed by HADOOP-11482.

> KMSClientProvider#addDelegationTokens should use ActualUgi for proxy user 
> authentication
> 
>
> Key: HADOOP-11773
> URL: https://issues.apache.org/jira/browse/HADOOP-11773
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Blocker
>
> In a secure cluster with Kerberos, when 
> KMSClientProvider#addDelegationTokens is called in the context of a proxied 
> user, the SPNEGO authentication is currently made with the currentUser (the 
> proxied user) as the Principal, which does not have a Kerberos ticket and 
> fails as shown below. This should be done in the context of the real user 
> represented by actualUgi, introduced in HADOOP-11176. 
> The following stack was found from HiveServer2 queries in a secure cluster. I 
> will post a patch for this shortly.
> {code}
> Forwardable Ticket true
> Forwarded Ticket false
> at 
> org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:337)
> ... 25 more
> Caused by: java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:802)
> at 
> org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:86)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2046)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
> at 
> org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:205)
> at 
> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:442)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:588)
> ... 29 more
> 
> Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:306)
> at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:196)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:127)
> at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:284)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:165)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:371)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:348)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:794)
> ... 38 more
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
> at 
> sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
> at 
> sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
> at 
> sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
> at 
> sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
> at 
> org.apache.hadoop.security.authentication.cl

[jira] [Commented] (HADOOP-11771) Configuration::getClassByNameOrNull synchronizes on a static object

2015-03-27 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14385165#comment-14385165
 ] 

Arun Suresh commented on HADOOP-11771:
--

[~gopalv], looking at the code, it looks like there is a reason for 
CACHE_CLASSES to be a WeakHashMap (to be able to drop the cached classes when 
the classloader goes away), and there is a reason why we need it synchronized. 

I'd say you could fix this using a {{ConcurrentWeakHashMap}}, which we would 
probably have to write from scratch (it might not be too hard, but I don't 
expect it to be trivial). Also note that the actual concurrency level (at least 
in the case of {{ConcurrentHashMap}}) has to be specified in the constructor 
when it is created, else it will be a default value for the entirety of its 
lifetime.

Alternatively, is it possible to reuse a value obtained from an earlier call to 
this method by the client code/framework/utility (which, based on your stack 
trace, is the Hive planner)?  
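To make the trade-off concrete, here is a minimal, hypothetical sketch (not the actual Hadoop patch) of a lock-free lookup path using {{ConcurrentHashMap.computeIfAbsent}}. Note the cost of dropping the global lock this way: unlike the WeakHashMap in Configuration, this holds strong references to its ClassLoader keys, so entries are not reclaimed when a classloader goes away — which is exactly why a purpose-built concurrent weak map would be needed.

```java
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: per-classloader class cache without a global
// synchronized block. Trade-off: ClassLoader keys are held strongly here,
// so this is not a drop-in replacement for the WeakHashMap behavior.
public class ClassCacheSketch {
  private static final ConcurrentHashMap<ClassLoader,
      Map<String, WeakReference<Class<?>>>> CACHE = new ConcurrentHashMap<>();

  public static Class<?> getClassByNameOrNull(ClassLoader loader, String name) {
    // computeIfAbsent is atomic per key: no lock is taken on the hot path
    // once the per-loader map exists.
    Map<String, WeakReference<Class<?>>> map =
        CACHE.computeIfAbsent(loader, l -> new ConcurrentHashMap<>());
    WeakReference<Class<?>> ref = map.get(name);
    Class<?> clazz = (ref == null) ? null : ref.get();
    if (clazz == null) {
      try {
        clazz = Class.forName(name, true, loader);
      } catch (ClassNotFoundException e) {
        return null;
      }
      map.put(name, new WeakReference<Class<?>>(clazz));
    }
    return clazz;
  }
}
```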

> Configuration::getClassByNameOrNull synchronizes on a static object
> ---
>
> Key: HADOOP-11771
> URL: https://issues.apache.org/jira/browse/HADOOP-11771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf, io, ipc
>Reporter: Gopal V
> Attachments: configuration-cache-bt.png, configuration-sync-cache.png
>
>
> {code}
>   private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
>     CACHE_CLASSES = new WeakHashMap<ClassLoader, Map<String, WeakReference<Class<?>>>>();
> ...
>   synchronized (CACHE_CLASSES) {
>     map = CACHE_CLASSES.get(classLoader);
>     if (map == null) {
>       map = Collections.synchronizedMap(
>         new WeakHashMap<String, WeakReference<Class<?>>>());
>       CACHE_CLASSES.put(classLoader, map);
>     }
>   }
> {code}
> !configuration-sync-cache.png!
> !configuration-cache-bt.png!





[jira] [Resolved] (HADOOP-11773) KMSClientProvider#addDelegationTokens should use ActualUgi for proxy user authentication

2015-03-27 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HADOOP-11773.
-
Resolution: Duplicate

> KMSClientProvider#addDelegationTokens should use ActualUgi for proxy user 
> authentication
> 
>
> Key: HADOOP-11773
> URL: https://issues.apache.org/jira/browse/HADOOP-11773
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Blocker
>
> In a secure cluster with Kerberos, when 
> KMSClientProvider#addDelegationTokens is called in the context of a proxied 
> user, the SPNEGO authentication is currently made with the currentUser (the 
> proxied user) as the Principal, which does not have a Kerberos ticket and 
> fails as shown below. This should be done in the context of the real user 
> represented by actualUgi, introduced in HADOOP-11176. 
> The following stack was found from HiveServer2 queries in a secure cluster. I 
> will post a patch for this shortly.
> {code}
> Forwardable Ticket true
> Forwarded Ticket false
> at 
> org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:337)
> ... 25 more
> Caused by: java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:802)
> at 
> org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:86)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2046)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
> at 
> org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:205)
> at 
> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:442)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:588)
> ... 29 more
> 
> Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:306)
> at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:196)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:127)
> at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:284)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:165)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:371)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:348)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:794)
> ... 38 more
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
> at 
> sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
> at 
> sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
> at 
> sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
> at 
> sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
> at 
> org.apache.hadoop.security.authentication.client.Kerbe

[jira] [Created] (HADOOP-11773) KMSClientProvider#addDelegationTokens should use ActualUgi for proxy user authentication

2015-03-27 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-11773:
---

 Summary: KMSClientProvider#addDelegationTokens should use 
ActualUgi for proxy user authentication
 Key: HADOOP-11773
 URL: https://issues.apache.org/jira/browse/HADOOP-11773
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Blocker


In a secure cluster with Kerberos, when 
KMSClientProvider#addDelegationTokens is called in the context of a proxied 
user, the SPNEGO authentication is currently made with the currentUser (the 
proxied user) as the Principal, which does not have a Kerberos ticket and fails 
as shown below. This should be done in the context of the real user represented 
by actualUgi, introduced in HADOOP-11176. 

The following stack was found from HiveServer2 queries in a secure cluster. I 
will post a patch for this shortly.
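The underlying pattern of the fix can be sketched with plain JDK classes (the actual patch uses Hadoop's {{UserGroupInformation}}, and the helper name below is hypothetical): a privileged action such as the SPNEGO handshake must run under the Subject that actually holds the Kerberos TGT — the real user — not under the proxied user's context.

```java
import java.security.PrivilegedExceptionAction;
import javax.security.auth.Subject;

// JDK-only illustration of the proxy-user fix's shape: wrap the privileged
// operation (here, whatever the action computes; in KMSClientProvider it
// would be the delegation-token request) in a doAs on the real user's
// Subject, mirroring actualUgi.doAs(...) in the eventual patch.
public class RunAsRealUser {
  public static <T> T runAs(Subject realUser, PrivilegedExceptionAction<T> action)
      throws Exception {
    return Subject.doAs(realUser, action);
  }
}
```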

{code}
Forwardable Ticket true
Forwarded Ticket false
at 
org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:337)
... 25 more
Caused by: java.io.IOException: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:802)
at 
org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:86)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2046)
at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at 
org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:205)
at 
org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:442)
at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:588)
... 29 more


Caused by: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:306)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:196)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:127)
at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:284)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:165)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:371)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:348)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:794)
... 38 more
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed 
to find any Kerberos tgt)
at 
sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
at 
sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
at 
sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
at 
sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
at 
sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
at 
sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:285)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:261)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.d

[jira] [Created] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-03-27 Thread Gopal V (JIRA)
Gopal V created HADOOP-11772:


 Summary: RPC Invoker relies on static ClientCache which has 
synchronized(this) blocks
 Key: HADOOP-11772
 URL: https://issues.apache.org/jira/browse/HADOOP-11772
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Reporter: Gopal V
 Attachments: sync-client-bt.png, sync-client-threads.png

{code}
  private static ClientCache CLIENTS = new ClientCache();
...
    this.client = CLIENTS.getClient(conf, factory);
{code}

Meanwhile, in ClientCache:

{code}
  public synchronized Client getClient(Configuration conf,
      SocketFactory factory, Class<?> valueClass) {
...
    Client client = clients.get(factory);
    if (client == null) {
      client = new Client(valueClass, conf, factory);
      clients.put(factory, client);
    } else {
      client.incCount();
    }
{code}

All invokers end up calling these methods, resulting in IPC clients choking up.

!sync-client-threads.png!
!sync-client-bt.png!
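One possible direction, sketched here with hypothetical names (this is not the Hadoop patch, and {{Client}} is a stand-in for {{org.apache.hadoop.ipc.Client}}): replace the method-wide {{synchronized}} with {{ConcurrentHashMap.computeIfAbsent}}, so that invokers only contend briefly while a missing Client is created, not on every lookup. Unlike the original, this sketch bumps the reference count on every call, including the creating one.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a lock-striped ClientCache: computeIfAbsent is atomic per key,
// so concurrent invokers no longer serialize on one monitor.
public class ClientCacheSketch {
  static class Client {
    final AtomicInteger refCount = new AtomicInteger(0);
    void incCount() { refCount.incrementAndGet(); }
  }

  private final ConcurrentHashMap<Object, Client> clients =
      new ConcurrentHashMap<>();

  public Client getClient(Object socketFactory) {
    // Only creation of a missing Client briefly locks the key's bin.
    Client client = clients.computeIfAbsent(socketFactory, f -> new Client());
    client.incCount();
    return client;
  }
}
```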





[jira] [Updated] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-03-27 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11772:
-
Attachment: sync-client-threads.png
sync-client-bt.png

> RPC Invoker relies on static ClientCache which has synchronized(this) blocks
> 
>
> Key: HADOOP-11772
> URL: https://issues.apache.org/jira/browse/HADOOP-11772
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Gopal V
> Attachments: sync-client-bt.png, sync-client-threads.png
>
>
> {code}
>   private static ClientCache CLIENTS = new ClientCache();
> ...
>     this.client = CLIENTS.getClient(conf, factory);
> {code}
> Meanwhile, in ClientCache:
> {code}
>   public synchronized Client getClient(Configuration conf,
>       SocketFactory factory, Class<?> valueClass) {
> ...
>     Client client = clients.get(factory);
>     if (client == null) {
>       client = new Client(valueClass, conf, factory);
>       clients.put(factory, client);
>     } else {
>       client.incCount();
>     }
> {code}
> All invokers end up calling these methods, resulting in IPC clients choking 
> up.
> !sync-client-threads.png!
> !sync-client-bt.png!





[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14385115#comment-14385115
 ] 

Larry McCay commented on HADOOP-11717:
--

Hey [~drankye] - yeah, there is no reason to encrypt and decrypt here - the 
assumption is that they are to be protected with transport security.


> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch, HADOOP-11717-7.patch
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.





[jira] [Updated] (HADOOP-11771) Configuration::getClassByNameOrNull synchronizes on a static object

2015-03-27 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11771:
-
Description: 
{code}
  private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
    CACHE_CLASSES = new WeakHashMap<ClassLoader, Map<String, WeakReference<Class<?>>>>();

...
  synchronized (CACHE_CLASSES) {
    map = CACHE_CLASSES.get(classLoader);
    if (map == null) {
      map = Collections.synchronizedMap(
        new WeakHashMap<String, WeakReference<Class<?>>>());
      CACHE_CLASSES.put(classLoader, map);
    }
  }
{code}

!configuration-sync-cache.png!

!configuration-cache-bt.png!

  was:
{code}
  private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
    CACHE_CLASSES = new WeakHashMap<ClassLoader, Map<String, WeakReference<Class<?>>>>();

...
  synchronized (CACHE_CLASSES) {
    map = CACHE_CLASSES.get(classLoader);
    if (map == null) {
      map = Collections.synchronizedMap(
        new WeakHashMap<String, WeakReference<Class<?>>>());
      CACHE_CLASSES.put(classLoader, map);
    }
  }
{code}

!configuration-sync-cache.png!


> Configuration::getClassByNameOrNull synchronizes on a static object
> ---
>
> Key: HADOOP-11771
> URL: https://issues.apache.org/jira/browse/HADOOP-11771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf, io, ipc
>Reporter: Gopal V
> Attachments: configuration-cache-bt.png, configuration-sync-cache.png
>
>
> {code}
>   private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
>     CACHE_CLASSES = new WeakHashMap<ClassLoader, Map<String, WeakReference<Class<?>>>>();
> ...
>   synchronized (CACHE_CLASSES) {
>     map = CACHE_CLASSES.get(classLoader);
>     if (map == null) {
>       map = Collections.synchronizedMap(
>         new WeakHashMap<String, WeakReference<Class<?>>>());
>       CACHE_CLASSES.put(classLoader, map);
>     }
>   }
> {code}
> !configuration-sync-cache.png!
> !configuration-cache-bt.png!





[jira] [Updated] (HADOOP-11771) Configuration::getClassByNameOrNull synchronizes on a static object

2015-03-27 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11771:
-
Attachment: configuration-sync-cache.png
configuration-cache-bt.png

> Configuration::getClassByNameOrNull synchronizes on a static object
> ---
>
> Key: HADOOP-11771
> URL: https://issues.apache.org/jira/browse/HADOOP-11771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf, io, ipc
>Reporter: Gopal V
> Attachments: configuration-cache-bt.png, configuration-sync-cache.png
>
>
> {code}
>   private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
>     CACHE_CLASSES = new WeakHashMap<ClassLoader, Map<String, WeakReference<Class<?>>>>();
> ...
>   synchronized (CACHE_CLASSES) {
>     map = CACHE_CLASSES.get(classLoader);
>     if (map == null) {
>       map = Collections.synchronizedMap(
>         new WeakHashMap<String, WeakReference<Class<?>>>());
>       CACHE_CLASSES.put(classLoader, map);
>     }
>   }
> {code}
> !configuration-sync-cache.png!





[jira] [Created] (HADOOP-11771) Configuration::getClassByNameOrNull synchronizes on a static object

2015-03-27 Thread Gopal V (JIRA)
Gopal V created HADOOP-11771:


 Summary: Configuration::getClassByNameOrNull synchronizes on a 
static object
 Key: HADOOP-11771
 URL: https://issues.apache.org/jira/browse/HADOOP-11771
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Gopal V


{code}
  private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
    CACHE_CLASSES = new WeakHashMap<ClassLoader, Map<String, WeakReference<Class<?>>>>();

...
  synchronized (CACHE_CLASSES) {
    map = CACHE_CLASSES.get(classLoader);
    if (map == null) {
      map = Collections.synchronizedMap(
        new WeakHashMap<String, WeakReference<Class<?>>>());
      CACHE_CLASSES.put(classLoader, map);
    }
  }
{code}

!configuration-sync-cache.png!





[jira] [Updated] (HADOOP-11770) [Umbrella] locate static synchronized blocks in hadoop-common

2015-03-27 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11770:
-
Priority: Critical  (was: Major)

> [Umbrella] locate static synchronized blocks in hadoop-common
> -
>
> Key: HADOOP-11770
> URL: https://issues.apache.org/jira/browse/HADOOP-11770
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf, io, ipc
>Affects Versions: 2.7.0
>Reporter: Gopal V
>Priority: Critical
>
> There are several static synchronized blocks in the hadoop-common 
> functionality that hurt any multi-threaded processing system which uses the 
> common APIs.
> Identify the static synchronized blocks and locate them for potential fixes.





[jira] [Created] (HADOOP-11770) [Umbrella] locate static synchronized blocks in hadoop-common

2015-03-27 Thread Gopal V (JIRA)
Gopal V created HADOOP-11770:


 Summary: [Umbrella] locate static synchronized blocks in 
hadoop-common
 Key: HADOOP-11770
 URL: https://issues.apache.org/jira/browse/HADOOP-11770
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf, io, ipc
Affects Versions: 2.7.0
Reporter: Gopal V


There are several static synchronized blocks in the hadoop-common functionality 
that hurt any multi-threaded processing system which uses the common APIs.

Identify the static synchronized blocks and locate them for potential fixes.
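A first pass at the audit can be automated for synchronized *methods* (synchronized blocks inside method bodies do not appear in modifier flags and need bytecode or source inspection, e.g. with ASM or grep). The helper below is illustrative, not part of any Hadoop patch:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

// List static synchronized methods of a class via reflection; such methods
// lock the Class object itself, a process-wide monitor.
public class SyncAudit {
  public static List<String> staticSynchronizedMethods(Class<?> clazz) {
    List<String> hits = new ArrayList<>();
    for (Method m : clazz.getDeclaredMethods()) {
      int mods = m.getModifiers();
      if (Modifier.isStatic(mods) && Modifier.isSynchronized(mods)) {
        hits.add(clazz.getName() + "#" + m.getName());
      }
    }
    return hits;
  }

  // Tiny example target for the audit.
  static class Example {
    static synchronized void locked() {}
    static void free() {}
  }
}
```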





[jira] [Commented] (HADOOP-11744) Support OAuth2 in Hadoop

2015-03-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14385098#comment-14385098
 ] 

Kai Zheng commented on HADOOP-11744:


To make things easy and not so heavy, HADOOP-11766 was opened for generic 
token authentication support in Hadoop. OAuth2 support is one use case and can 
be implemented on top of the generic, common facilities provided there.

> Support OAuth2 in Hadoop
> 
>
> Key: HADOOP-11744
> URL: https://issues.apache.org/jira/browse/HADOOP-11744
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Haohui Mai
>  Labels: gsoc2015, mentor
>
> OAuth2 is a standardized mechanism for authentication and authorization. A 
> notable use case of OAuth2 is SSO -- it would be nice to integrate OAuth2 
> with Hadoop.





[jira] [Updated] (HADOOP-11766) Generic token authentication support for Hadoop

2015-03-27 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11766:
---
Assignee: (was: Kai Zheng)

> Generic token authentication support for Hadoop
> ---
>
> Key: HADOOP-11766
> URL: https://issues.apache.org/jira/browse/HADOOP-11766
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Kai Zheng
>
> As a major goal of the Rhino project, we proposed the *TokenAuth* effort in 
> HADOOP-9392, which provides a common token authentication framework to 
> integrate multiple authentication mechanisms by adding a new 
> {{AuthenticationMethod}} in lieu of {{KERBEROS}} and {{SIMPLE}}. To minimize 
> the required changes and risk, we thought of another approach to achieve the 
> general goals based on Kerberos, since Kerberos itself supports a 
> pre-authentication framework in both spec and implementation; this was 
> discussed in HADOOP-10959 as *TokenPreauth*. For both approaches, we built 
> working prototypes covering both the command-line console and the Hadoop 
> web UI. 
> As HADOOP-9392 is rather lengthy and heavy, and HADOOP-10959 is mostly 
> focused on the concrete implementation approach based on Kerberos, we open 
> this issue for more general and updated discussion of the requirements, use 
> cases, and concerns for generic token authentication support in Hadoop. We 
> distinguish this token from existing Hadoop tokens in that the token in this 
> discussion is mainly for the initial and primary authentication. We will 
> refine our existing code from HADOOP-9392 and HADOOP-10959 and break it down 
> into smaller patches based on the latest trunk. 





[jira] [Updated] (HADOOP-11769) Pluggable token encoder, decoder and validator

2015-03-27 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11769:
---
Description: This is to define common token encoder, decoder and 
validator interfaces, considering token serialization and deserialization, 
encryption and decryption, signing and verifying, expiration and audience 
checking, etc. With such APIs, pluggable and configurable token encoders, 
decoders and validators will be implemented in other issues.  (was: This is to 
define a common token encoder and decoder interface, considering token 
serialization and deserialization, encryption and decryption, signing and 
verifying, expiration and audience checking, etc. With such an API, a pluggable 
and configurable token encoder and decoder will be implemented in another 
issue.)
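A rough sketch of what such pluggable interfaces could look like, with purely illustrative names (none of this is from the eventual patch) and a trivial Base64-backed codec standing in for real encryption or signing:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical shape of the encoder/decoder/validator APIs described above.
public class TokenCodecSketch {
  interface TokenEncoder { String encode(String token); }
  interface TokenDecoder { String decode(String encoded); }
  interface TokenValidator {
    boolean validate(String token, long nowMillis, long expiryMillis);
  }

  // Demonstration codec: serialization only, no confidentiality or integrity.
  static class Base64Codec implements TokenEncoder, TokenDecoder {
    public String encode(String token) {
      return Base64.getEncoder()
          .encodeToString(token.getBytes(StandardCharsets.UTF_8));
    }
    public String decode(String encoded) {
      return new String(Base64.getDecoder().decode(encoded),
          StandardCharsets.UTF_8);
    }
  }

  // Demonstration validator: expiration check only.
  static class ExpiryValidator implements TokenValidator {
    public boolean validate(String token, long nowMillis, long expiryMillis) {
      return token != null && !token.isEmpty() && nowMillis < expiryMillis;
    }
  }
}
```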

> Pluggable token encoder, decoder and validator
> --
>
> Key: HADOOP-11769
> URL: https://issues.apache.org/jira/browse/HADOOP-11769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>
> This is to define common token encoder, decoder and validator interfaces, 
> considering token serialization and deserialization, encryption and 
> decryption, signing and verifying, expiration and audience checking, etc. 
> With such APIs, pluggable and configurable token encoders, decoders and 
> validators will be implemented in other issues.





[jira] [Updated] (HADOOP-11769) Pluggable token encoder, decoder and validator

2015-03-27 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11769:
---
Summary: Pluggable token encoder, decoder and validator  (was: Pluggable 
token encoder and decoder)

> Pluggable token encoder, decoder and validator
> --
>
> Key: HADOOP-11769
> URL: https://issues.apache.org/jira/browse/HADOOP-11769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>
> This is to define a common token encoder and decoder interface, considering 
> token serialization and deserialization, encryption and decryption, signing 
> and verifying, expiration and audience checking, etc. With such an API, a 
> pluggable and configurable token encoder and decoder will be implemented in 
> another issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14385091#comment-14385091
 ] 

Kai Zheng commented on HADOOP-11717:


[~lmccay], it looks like we don't consider token encryption and decryption, 
right?

It's good to get this in since it gets the job done well. As I mentioned 
earlier in this JIRA and discussed in HADOOP-11766, we also have a bunch of 
code related to this in the TokenAuth effort. We're refining our existing code 
and will break it down into smaller pieces. We would incorporate this part 
rather than duplicate it. As already widely discussed and agreed, we need more 
generic token APIs and common pluggable and configurable facilities such as 
token encoding, decoding and validation. We will refine our code plus this 
work in the HADOOP-11766 tasks. Thanks for the work!

> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch, HADOOP-11717-7.patch
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.
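As a rough illustration of what such a handler checks, a JWT is three
Base64URL segments (`header.payload.signature`); verification recomputes the
signature over `header.payload` and would also check the expiration claim.
This sketch uses a plain-JDK HMAC signature so it is self-contained; the
actual patch uses the nimbus-jose-jwt library, and SSO deployments typically
use RSA signatures rather than a shared secret:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JwtSketch {
    // Sign header.payload with HMAC-SHA256 and append the Base64URL signature.
    static String sign(String headerB64, String payloadB64, byte[] key) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] sig = mac.doFinal((headerB64 + "." + payloadB64).getBytes(StandardCharsets.US_ASCII));
        return headerB64 + "." + payloadB64 + "." +
               Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
    }

    // Verify by recomputing the signature; a real handler would additionally
    // parse the payload JSON and check the "exp" and "aud" claims.
    static boolean verify(String jwt, byte[] key) throws Exception {
        String[] parts = jwt.split("\\.");
        if (parts.length != 3) return false;
        return sign(parts[0], parts[1], key).equals(jwt);
    }

    public static void main(String[] args) throws Exception {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header = enc.encodeToString("{\"alg\":\"HS256\"}".getBytes(StandardCharsets.UTF_8));
        String payload = enc.encodeToString("{\"sub\":\"alice\"}".getBytes(StandardCharsets.UTF_8));
        byte[] key = "secret-signing-key".getBytes(StandardCharsets.UTF_8);
        String jwt = sign(header, payload, key);
        System.out.println(verify(jwt, key));        // true: intact token
        System.out.println(verify(jwt + "x", key));  // false: tampered token
    }
}
```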



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11769) Pluggable token encoder and decoder

2015-03-27 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-11769:
--

 Summary: Pluggable token encoder and decoder
 Key: HADOOP-11769
 URL: https://issues.apache.org/jira/browse/HADOOP-11769
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


This is to define a common token encoder and decoder interface, considering 
token serialization and deserialization, encryption and decryption, signing 
and verifying, and expiration and audience checking. With such an API, a 
pluggable and configurable token encoder and decoder will be implemented in 
another issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11768) A JWT token implementation of AuthToken API

2015-03-27 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-11768:
--

 Summary: A JWT token implementation of AuthToken API
 Key: HADOOP-11768
 URL: https://issues.apache.org/jira/browse/HADOOP-11768
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


This is to provide a JWT token implementation of {{AuthToken}} API utilizing 
some 3rd Java library for the support.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11767) Generic token API and representation

2015-03-27 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11767:
---
Description: This will abstract common token aspects and define a generic 
token interface and representation, named {{AuthToken}}. A JWT token 
implementation of this API will be provided separately in another issue.  
(was: This will abstract common token aspects and define a generic token 
interface and representation. A JWT token implementation of this API will be 
provided separately in another issue.)

> Generic token API and representation
> 
>
> Key: HADOOP-11767
> URL: https://issues.apache.org/jira/browse/HADOOP-11767
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>
> This will abstract common token aspects and define a generic token 
> interface and representation, named {{AuthToken}}. A JWT token 
> implementation of this API will be provided separately in another issue.
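A generic token abstraction of the kind described might look like the
following. The interface shape (subject, expiration, attribute map) and all
names are illustrative assumptions, not the actual HADOOP-11767 `AuthToken`
API:

```java
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

public class AuthTokenSketch {
    // Hypothetical generic token API; the real AuthToken interface may differ.
    interface AuthToken {
        String getSubject();                 // who the token authenticates
        Date getExpiration();                // when it stops being valid
        Map<String, Object> getAttributes(); // mechanism-specific claims
    }

    // A simple in-memory implementation; a JWT-backed one would parse these
    // fields out of the token's claims instead.
    static class SimpleAuthToken implements AuthToken {
        private final String subject;
        private final Date expiration;
        private final Map<String, Object> attributes = new HashMap<>();

        SimpleAuthToken(String subject, Date expiration) {
            this.subject = subject;
            this.expiration = expiration;
        }
        public String getSubject() { return subject; }
        public Date getExpiration() { return expiration; }
        public Map<String, Object> getAttributes() { return attributes; }
    }

    public static void main(String[] args) {
        AuthToken t = new SimpleAuthToken("alice",
            new Date(System.currentTimeMillis() + 3600_000L));
        System.out.println(t.getSubject());  // alice
    }
}
```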



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11767) Generic token API and representation

2015-03-27 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-11767:
--

 Summary: Generic token API and representation
 Key: HADOOP-11767
 URL: https://issues.apache.org/jira/browse/HADOOP-11767
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


This will abstract common token aspects and define a generic token interface 
and representation. A JWT token implementation of this API will be provided 
separately in another issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14385075#comment-14385075
 ] 

Hadoop QA commented on HADOOP-11754:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12707912/HADOOP-11754.002.patch
  against trunk revision 3836ad6.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.qjournal.TestSecureNNWithQJM
  
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6017//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6017//artifact/patchprocess/newPatchFindbugsWarningshadoop-auth.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6017//console

This message is automatically generated.

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch, HADOOP-11754.001.patch, HADOOP-11754.002.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.ja

[jira] [Created] (HADOOP-11766) Generic token authentication support for Hadoop

2015-03-27 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-11766:
--

 Summary: Generic token authentication support for Hadoop
 Key: HADOOP-11766
 URL: https://issues.apache.org/jira/browse/HADOOP-11766
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng


As a major goal of Rhino project, we proposed *TokenAuth* effort in 
HADOOP-9392, where it's to provide a common token authentication framework to 
integrate multiple authentication mechanisms, by adding a new 
{{AuthenticationMethod}} in lieu of {{KERBEROS}} and {{SIMPLE}}. To minimize 
the required changes and risk, we thought of another approach to achieve the 
general goals based on Kerberos as Kerberos itself supports a 
pre-authentication framework in both spec and implementation, which was 
discussed in HADOOP-10959 as *TokenPreauth*. In both approaches, we had 
performed workable prototypes covering both command line console and Hadoop web 
UI. 

As HADOOP-9392 is rather lengthy and heavy, and HADOOP-10959 is mostly focused 
on the concrete implementation approach based on Kerberos, we open this issue 
for more general and updated discussion of requirements, use cases, and 
concerns for generic token authentication support in Hadoop. We distinguish 
this token from existing Hadoop tokens in that the token discussed here is 
mainly for initial and primary authentication. We will refine our existing 
code in HADOOP-9392 and HADOOP-10959 and break it down into smaller patches 
based on latest trunk. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14385071#comment-14385071
 ] 

Hadoop QA commented on HADOOP-11731:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707952/HADOOP-11731-05.patch
  against trunk revision 3836ad6.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6019//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6019//console

This message is automatically generated.

> Rework the changelog and releasenotes
> -
>
> Key: HADOOP-11731
> URL: https://issues.apache.org/jira/browse/HADOOP-11731
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
> HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch
>
>
> The current way we generate these build artifacts is awful.  Plus they are 
> ugly and, in the case of release notes, very hard to pick out what is 
> important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11743) maven doesn't clean all the site files

2015-03-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14385058#comment-14385058
 ] 

Allen Wittenauer commented on HADOOP-11743:
---

I cleaned up some of this as part of HADOOP-11553.

> maven doesn't clean all the site files
> --
>
> Key: HADOOP-11743
> URL: https://issues.apache.org/jira/browse/HADOOP-11743
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Minor
>
> After building the site files, performing a mvn clean -Preleasedocs doesn't 
> actually clean everything up as git complains about untracked files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11743) maven doesn't clean all the site files

2015-03-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11743:
--
Description: After building the site files, performing a mvn clean 
-Preleasedocs doesn't actually clean everything up as git complains about 
untracked files.  (was: After building the site files, performing a mvn clean 
doesn't actually clean everything up as git complains about untracked files.)

> maven doesn't clean all the site files
> --
>
> Key: HADOOP-11743
> URL: https://issues.apache.org/jira/browse/HADOOP-11743
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Minor
>
> After building the site files, performing a mvn clean -Preleasedocs doesn't 
> actually clean everything up as git complains about untracked files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7781) Remove RecordIO

2015-03-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-7781:
-
Fix Version/s: (was: 0.24.0)

> Remove RecordIO
> ---
>
> Key: HADOOP-7781
> URL: https://issues.apache.org/jira/browse/HADOOP-7781
> Project: Hadoop Common
>  Issue Type: Task
>  Components: record
>Affects Versions: 0.24.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
>
> HADOOP-6155 deprecated RecordIO in 0.21. We should remove it from trunk, as 
> nothing uses it anymore and the tests are taking up resources.
> We should attempt to remove Record IO and also check for any references to 
> it within the MR and HDFS projects. Meanwhile, Avro has come up as a fine 
> replacement for it, and has been in use inside Hadoop for quite a while.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-6387) FsShell -getmerge source file pattern is broken

2015-03-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-6387:
-
Fix Version/s: (was: 0.23.2)
   (was: 0.24.0)

> FsShell -getmerge source file pattern is broken
> ---
>
> Key: HADOOP-6387
> URL: https://issues.apache.org/jira/browse/HADOOP-6387
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.23.0
>Reporter: Eli Collins
>Assignee: Daryn Sharp
>Priority: Minor
> Attachments: HADOOP-6387.patch
>
>
> The FsShell -getmerge command doesn't work if the "source file pattern" 
> matches files. See below. If the current behavior is intended then we should 
> update the help documentation and java docs to match, but it would be nice if 
> the user could specify a set of files in a directory rather than just 
> directories.
> {code}
> $ hadoop fs -help getmerge
> -getmerge <src> <localdst>:  Get all the files in the directories that 
>   match the source file pattern and merge and sort them to only
>   one file on local fs. <src> is kept.
> $ hadoop fs -ls
> Found 3 items
> -rw-r--r--   1 eli supergroup  2 2009-11-23 17:39 /user/eli/1.txt
> -rw-r--r--   1 eli supergroup  2 2009-11-23 17:39 /user/eli/2.txt
> -rw-r--r--   1 eli supergroup  2 2009-11-23 17:39 /user/eli/3.txt
> $ hadoop fs -getmerge /user/eli/*.txt sorted.txt
> $ cat sorted.txt
> cat: sorted.txt: No such file or directory
> $ hadoop fs -getmerge /user/eli/* sorted.txt
> $ cat sorted.txt
> cat: sorted.txt: No such file or directory
> $ hadoop fs -getmerge /user/* sorted.txt
> $ cat sorted.txt 
> 1
> 2
> 3
> {code}
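For illustration, the behavior the help text documents (match a pattern, then 
merge the matching files into one local file in sorted name order) can be 
sketched with plain JDK file APIs. This operates on the local filesystem only 
and is not the HDFS `FsShell` implementation:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class GetMergeSketch {
    // Concatenate files matching a glob into one destination file, visiting
    // them in sorted name order -- roughly the documented getmerge behavior.
    static void getMerge(Path dir, String glob, Path dst) throws IOException {
        List<Path> matches = new ArrayList<>();
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir, glob)) {
            ds.forEach(matches::add);
        }
        Collections.sort(matches);  // deterministic merge order by name
        try (OutputStream out = Files.newOutputStream(dst)) {
            for (Path p : matches) {
                Files.copy(p, out);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("getmerge");
        Files.write(dir.resolve("2.txt"), "2\n".getBytes());
        Files.write(dir.resolve("1.txt"), "1\n".getBytes());
        Path dst = dir.resolve("sorted.out");
        getMerge(dir, "*.txt", dst);
        System.out.print(new String(Files.readAllBytes(dst)));  // 1 then 2
    }
}
```

The bug report is about the pattern-matching step: the real shell command
accepted only directory sources, so a file glob like `/user/eli/*.txt`
produced no output.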



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8698) Do not call unneceseary setConf(null) in Configured constructor

2015-03-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8698:
-
Fix Version/s: (was: 3.0.0)
   (was: 0.24.0)

> Do not call unneceseary setConf(null) in Configured constructor
> ---
>
> Key: HADOOP-8698
> URL: https://issues.apache.org/jira/browse/HADOOP-8698
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 0.23.3, 3.0.0
>Reporter: Radim Kolar
>Assignee: Radim Kolar
>Priority: Minor
> Attachments: setconf-null.txt, setconf-null2.txt, setconf-null3.txt, 
> setconf-null4.txt
>
>
> The no-arg constructor of /org/apache/hadoop/conf/Configured calls 
> setConf(null). This is unnecessary, and it increases the complexity of the 
> setConf() code because you have to check for a non-null object reference 
> before using it. Under normal conditions setConf() is never called with a 
> null reference, so the null check is unnecessary.
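A minimal sketch of the pattern being complained about, simplified from
`org.apache.hadoop.conf.Configured` (`Conf` stands in for `Configuration`;
this is not the actual Hadoop source):

```java
public class ConfiguredSketch {
    static class Conf { /* stands in for org.apache.hadoop.conf.Configuration */ }

    static class Configured {
        private Conf conf;

        // The no-arg constructor calls setConf(null) -- the call HADOOP-8698
        // removes, since it forces every setConf override to null-check.
        Configured() { setConf(null); }
        Configured(Conf conf) { setConf(conf); }

        void setConf(Conf conf) {
            this.conf = conf;
            if (conf != null) {
                // A subclass may read settings here only after the null check,
                // purely because of the constructor call above.
            }
        }
        Conf getConf() { return conf; }
    }

    public static void main(String[] args) {
        Configured c = new Configured();          // triggers setConf(null)
        System.out.println(c.getConf() == null);  // true
    }
}
```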



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-11731) Rework the changelog and releasenotes

2015-03-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14385047#comment-14385047
 ] 

Allen Wittenauer edited comment on HADOOP-11731 at 3/28/15 1:46 AM:


-05:
* fix Colin's issues
* reverse sort the index so newer on top
* fix a few issues where some chars weren't properly escaped, which caused 
doxia to blow up on some of the older releases of Hadoop
* change clean -Preleasedocs to remove the entire directory not just the files
* rebase after HADOOP-11553 got committed


was (Author: aw):
-05:
* fix Colin's issues
* reverse sort the index so newer on top
* fix a few issues where some chars weren't properly escaped, which caused 
doxia to blow up on some of the older releases of Hadoop
* change clean -Preleasedocs to remove the entire directory not just the files
 

> Rework the changelog and releasenotes
> -
>
> Key: HADOOP-11731
> URL: https://issues.apache.org/jira/browse/HADOOP-11731
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
> HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch
>
>
> The current way we generate these build artifacts is awful.  Plus they are 
> ugly and, in the case of release notes, very hard to pick out what is 
> important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11731) Rework the changelog and releasenotes

2015-03-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11731:
--
Status: Open  (was: Patch Available)

> Rework the changelog and releasenotes
> -
>
> Key: HADOOP-11731
> URL: https://issues.apache.org/jira/browse/HADOOP-11731
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
> HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch
>
>
> The current way we generate these build artifacts is awful.  Plus they are 
> ugly and, in the case of release notes, very hard to pick out what is 
> important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11731) Rework the changelog and releasenotes

2015-03-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11731:
--
Status: Patch Available  (was: Open)

> Rework the changelog and releasenotes
> -
>
> Key: HADOOP-11731
> URL: https://issues.apache.org/jira/browse/HADOOP-11731
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
> HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch
>
>
> The current way we generate these build artifacts is awful.  Plus they are 
> ugly and, in the case of release notes, very hard to pick out what is 
> important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11731) Rework the changelog and releasenotes

2015-03-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11731:
--
Attachment: HADOOP-11731-05.patch

-05:
* fix Colin's issues
* reverse sort the index so newer on top
* fix a few issues where some chars weren't properly escaped, which caused 
doxia to blow up on some of the older releases of Hadoop
* change clean -Preleasedocs to remove the entire directory not just the files
 

> Rework the changelog and releasenotes
> -
>
> Key: HADOOP-11731
> URL: https://issues.apache.org/jira/browse/HADOOP-11731
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
> HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch
>
>
> The current way we generate these build artifacts is awful.  Plus they are 
> ugly and, in the case of release notes, very hard to pick out what is 
> important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8545) Filesystem Implementation for OpenStack Swift

2015-03-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8545:
-
Release Note: 
Added file system implementation for OpenStack Swift.
There are two implementations: block and native (similar to the Amazon S3 
integration).
The data locality issue is solved by a patch in Swift; the commit procedure to 
OpenStack is in progress.

To use the implementation, add the following to core-site.xml:

```xml
<property>
  <name>fs.swift.impl</name>
  <value>com.mirantis.fs.SwiftFileSystem</value>
</property>
<property>
  <name>fs.swift.block.impl</name>
  <value>com.mirantis.fs.block.SwiftBlockFileSystem</value>
</property>
```

In a MapReduce job, specify the following configuration for OpenStack Keystone 
authentication:

```java
conf.set("swift.auth.url", "http://172.18.66.117:5000/v2.0/tokens");
conf.set("swift.tenant", "superuser");
conf.set("swift.username", "admin1");
conf.set("swift.password", "password");
conf.setInt("swift.http.port", 8080);
conf.setInt("swift.https.port", 443);
```

Additional information is available on GitHub: 
https://github.com/DmitryMezhensky/Hadoop-and-Swift-integration

  was:
Added file system implementation for OpenStack Swift.
There are two implementation: block and native (similar to Amazon S3 
integration).
Data locality issue solved by patch in Swift, commit procedure to OpenStack is 
in progress.

To use implementation add to core-site.xml following:

```xml
<property>
  <name>fs.swift.impl</name>
  <value>com.mirantis.fs.SwiftFileSystem</value>
</property>
<property>
  <name>fs.swift.block.impl</name>
  <value>com.mirantis.fs.block.SwiftBlockFileSystem</value>
</property>
```

In MapReduce job specify following configs for OpenStack Keystone 
authentication:
```java
conf.set("swift.auth.url", "http://172.18.66.117:5000/v2.0/tokens");
conf.set("swift.tenant", "superuser");
conf.set("swift.username", "admin1");
conf.set("swift.password", "password");
conf.setInt("swift.http.port", 8080);
conf.setInt("swift.https.port", 443);
```

Additional information specified on github: 
https://github.com/DmitryMezhensky/Hadoop-and-Swift-integration


> Filesystem Implementation for OpenStack Swift
> -
>
> Key: HADOOP-8545
> URL: https://issues.apache.org/jira/browse/HADOOP-8545
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 1.2.0, 2.0.3-alpha
>Reporter: Tim Miller
>Assignee: Dmitry Mezhensky
>  Labels: hadoop, patch
> Fix For: 2.3.0
>
> Attachments: HADOOP-8545-026.patch, HADOOP-8545-027.patch, 
> HADOOP-8545-028.patch, HADOOP-8545-029.patch, HADOOP-8545-030.patch, 
> HADOOP-8545-031.patch, HADOOP-8545-032.patch, HADOOP-8545-033.patch, 
> HADOOP-8545-034.patch, HADOOP-8545-035.patch, HADOOP-8545-035.patch, 
> HADOOP-8545-036.patch, HADOOP-8545-037.patch, HADOOP-8545-1.patch, 
> HADOOP-8545-10.patch, HADOOP-8545-11.patch, HADOOP-8545-12.patch, 
> HADOOP-8545-13.patch, HADOOP-8545-14.patch, HADOOP-8545-15.patch, 
> HADOOP-8545-16.patch, HADOOP-8545-17.patch, HADOOP-8545-18.patch, 
> HADOOP-8545-19.patch, HADOOP-8545-2.patch, HADOOP-8545-20.patch, 
> HADOOP-8545-21.patch, HADOOP-8545-22.patch, HADOOP-8545-23.patch, 
> HADOOP-8545-24.patch, HADOOP-8545-25.patch, HADOOP-8545-3.patch, 
> HADOOP-8545-4.patch, HADOOP-8545-5.patch, HADOOP-8545-6.patch, 
> HADOOP-8545-7.patch, HADOOP-8545-8.patch, HADOOP-8545-9.patch, 
> HADOOP-8545-javaclouds-2.patch, HADOOP-8545.patch, HADOOP-8545.patch, 
> HADOOP-8545.suresh.patch
>
>
> Add a filesystem implementation for OpenStack Swift object store, similar to 
> the one which exists today for S3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14385004#comment-14385004
 ] 

Hadoop QA commented on HADOOP-11761:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12707936/HADOOP-11761-032715.patch
  against trunk revision 3836ad6.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6018//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6018//console

This message is automatically generated.

> Fix findbugs warnings in org.apache.hadoop.security.authentication
> --
>
> Key: HADOOP-11761
> URL: https://issues.apache.org/jira/browse/HADOOP-11761
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: findbugs
> Attachments: HADOOP-11761-032615.patch, HADOOP-11761-032715.patch
>
>
> As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
> org.apache.hadoop.security.authentication. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384996#comment-14384996
 ] 

Zhijie Shen commented on HADOOP-11754:
--

Haohui, thanks for the latest patch. It looks good to me. I applied the patch 
and tried the RM in insecure mode; it won't crash again. I tried the timeline 
server in secure mode; it falls back to using a random secret. [~vinodkv], do 
you want to take a second look?
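The fallback behavior described in the comment above can be sketched as
follows. This is a plain-JDK illustration of the idea (read the configured
signature secret file; if unavailable, use a random secret instead of failing
startup), not the actual `AuthenticationFilter` or secret-provider code, and
`loadSecret` is a hypothetical name:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.SecureRandom;

public class SecretProviderSketch {
    // Try the configured signature secret file; if it cannot be read, fall
    // back to a random in-memory secret rather than aborting startup --
    // the behavior described for the non-secure RM after the patch.
    static byte[] loadSecret(Path secretFile) {
        try {
            return Files.readAllBytes(secretFile);
        } catch (Exception e) {
            byte[] random = new byte[32];
            new SecureRandom().nextBytes(random);
            return random;
        }
    }

    public static void main(String[] args) {
        // A missing file (as in the reported crash) now yields a usable secret.
        byte[] secret =
            loadSecret(Paths.get("/nonexistent/hadoop-http-auth-signature-secret"));
        System.out.println(secret.length);  // 32
    }
}
```

Note the trade-off the comment alludes to: a random per-process secret means
signed cookies are not portable across daemons or restarts, which is
acceptable only in the non-secure case.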

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch, HADOOP-11754.001.patch, HADOOP-11754.002.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.

[jira] [Created] (HADOOP-11765) Signal congestion on the DataNode

2015-03-27 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-11765:
---

 Summary: Signal congestion on the DataNode
 Key: HADOOP-11765
 URL: https://issues.apache.org/jira/browse/HADOOP-11765
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Haohui Mai
Assignee: Haohui Mai


The DataNode should signal congestion (i.e. "I'm too busy") in the PipelineAck 
using the mechanism introduced in HDFS-7270.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384985#comment-14384985
 ] 

Haohui Mai commented on HADOOP-11754:
-

bq. I don't think that's consistent with pre-2.7 behavior though.

Can you elaborate on what the expected behavior is?

From what I'm aware of, this has been the behavior since HADOOP-8857. HADOOP-10868 
accidentally changed the behavior (which introduced a security vulnerability), 
and it was fixed in HADOOP-11748. You can check 
https://issues.apache.org/jira/browse/HADOOP-10670?focusedCommentId=14380372&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14380372
 for more details.

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch, HADOOP-11754.001.patch, HADOOP-11754.002.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.webapp.WebAppException: 

[jira] [Updated] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-27 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11761:
---
Attachment: HADOOP-11761-032715.patch

Thanks [~wheat9] for the review comments! I rechecked our solution to the (now 
test-only) StringSignerSecretProvider. Since we're exempting 
StringSignerSecretProvider from findbugs, I'm doing the same thing with 
FileSignerSecretProvider. 

> Fix findbugs warnings in org.apache.hadoop.security.authentication
> --
>
> Key: HADOOP-11761
> URL: https://issues.apache.org/jira/browse/HADOOP-11761
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: findbugs
> Attachments: HADOOP-11761-032615.patch, HADOOP-11761-032715.patch
>
>
> As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
> org.apache.hadoop.security.authentication. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-27 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384930#comment-14384930
 ] 

Haohui Mai commented on HADOOP-11761:
-

Cloning the bytes every time might be too expensive as {{getSecret()}} is 
called on every authentication request.  We can either disable the warnings or 
change the API to return a read-only {{ByteBuffer}}.
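
The read-only {{ByteBuffer}} alternative mentioned above could look roughly like the following sketch (the {{SecretHolder}} class name is illustrative, not Hadoop's actual API): the byte array is copied once at construction, and {{getSecret()}} hands out a read-only view instead of cloning on every authentication request.

```java
import java.nio.ByteBuffer;

// Illustrative sketch: expose a secret without per-call cloning.
public class SecretHolder {
    private final byte[] secret;

    public SecretHolder(byte[] secret) {
        this.secret = secret.clone(); // single defensive copy at construction
    }

    // Callers get a read-only view; the backing array cannot be mutated
    // through it, so no clone is needed per request.
    public ByteBuffer getSecret() {
        return ByteBuffer.wrap(secret).asReadOnlyBuffer();
    }

    public static void main(String[] args) {
        SecretHolder h = new SecretHolder("s3cr3t".getBytes());
        System.out.println(h.getSecret().isReadOnly()); // true
    }
}
```

This keeps findbugs happy (no mutable internal array is exposed) while avoiding an allocation and copy on the hot authentication path.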

> Fix findbugs warnings in org.apache.hadoop.security.authentication
> --
>
> Key: HADOOP-11761
> URL: https://issues.apache.org/jira/browse/HADOOP-11761
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: findbugs
> Attachments: HADOOP-11761-032615.patch
>
>
> As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
> org.apache.hadoop.security.authentication. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8545) Filesystem Implementation for OpenStack Swift

2015-03-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8545:
-
Release Note: 
Added a file system implementation for OpenStack Swift.
There are two implementations: block and native (similar to the Amazon S3 
integration).
The data locality issue is solved by a patch in Swift; the commit procedure to 
OpenStack is in progress.

To use the implementation, add the following to core-site.xml:

```xml
<property>
  <name>fs.swift.impl</name>
  <value>com.mirantis.fs.SwiftFileSystem</value>
</property>
<property>
  <name>fs.swift.block.impl</name>
  <value>com.mirantis.fs.block.SwiftBlockFileSystem</value>
</property>
```

In a MapReduce job, specify the following configs for OpenStack Keystone 
authentication:

```java
conf.set("swift.auth.url", "http://172.18.66.117:5000/v2.0/tokens");
conf.set("swift.tenant", "superuser");
conf.set("swift.username", "admin1");
conf.set("swift.password", "password");
conf.setInt("swift.http.port", 8080);
conf.setInt("swift.https.port", 443);
```

Additional information is available on GitHub: 
https://github.com/DmitryMezhensky/Hadoop-and-Swift-integration

  was:
Added file system implementation for OpenStack Swift.
There are two implementation: block and native (similar to Amazon S3 
integration).
Data locality issue solved by patch in Swift, commit procedure to OpenStack is 
in progress.

To use implementation add to core-site.xml following:
...

fs.swift.impl
com.mirantis.fs.SwiftFileSystem


fs.swift.block.impl
 com.mirantis.fs.block.SwiftBlockFileSystem

...

In MapReduce job specify following configs for OpenStack Keystone 
authentication:
conf.set("swift.auth.url", "http://172.18.66.117:5000/v2.0/tokens";);
conf.set("swift.tenant", "superuser");
conf.set("swift.username", "admin1");
conf.set("swift.password", "password");
conf.setInt("swift.http.port", 8080);
conf.setInt("swift.https.port", 443);

Additional information specified on github: 
https://github.com/DmitryMezhensky/Hadoop-and-Swift-integration


> Filesystem Implementation for OpenStack Swift
> -
>
> Key: HADOOP-8545
> URL: https://issues.apache.org/jira/browse/HADOOP-8545
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 1.2.0, 2.0.3-alpha
>Reporter: Tim Miller
>Assignee: Dmitry Mezhensky
>  Labels: hadoop, patch
> Fix For: 2.3.0
>
> Attachments: HADOOP-8545-026.patch, HADOOP-8545-027.patch, 
> HADOOP-8545-028.patch, HADOOP-8545-029.patch, HADOOP-8545-030.patch, 
> HADOOP-8545-031.patch, HADOOP-8545-032.patch, HADOOP-8545-033.patch, 
> HADOOP-8545-034.patch, HADOOP-8545-035.patch, HADOOP-8545-035.patch, 
> HADOOP-8545-036.patch, HADOOP-8545-037.patch, HADOOP-8545-1.patch, 
> HADOOP-8545-10.patch, HADOOP-8545-11.patch, HADOOP-8545-12.patch, 
> HADOOP-8545-13.patch, HADOOP-8545-14.patch, HADOOP-8545-15.patch, 
> HADOOP-8545-16.patch, HADOOP-8545-17.patch, HADOOP-8545-18.patch, 
> HADOOP-8545-19.patch, HADOOP-8545-2.patch, HADOOP-8545-20.patch, 
> HADOOP-8545-21.patch, HADOOP-8545-22.patch, HADOOP-8545-23.patch, 
> HADOOP-8545-24.patch, HADOOP-8545-25.patch, HADOOP-8545-3.patch, 
> HADOOP-8545-4.patch, HADOOP-8545-5.patch, HADOOP-8545-6.patch, 
> HADOOP-8545-7.patch, HADOOP-8545-8.patch, HADOOP-8545-9.patch, 
> HADOOP-8545-javaclouds-2.patch, HADOOP-8545.patch, HADOOP-8545.patch, 
> HADOOP-8545.suresh.patch
>
>
> Add a filesystem implementation for the OpenStack Swift object store, similar to 
> the one which exists today for S3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384867#comment-14384867
 ] 

Allen Wittenauer commented on HADOOP-11754:
---

bq.  All HDFS daemons will not fall back to random secret provider in secure 
mode, which is consistent with the existing behavior.

I don't think that's consistent with pre-2.7 behavior though.

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch, HADOOP-11754.001.patch, HADOOP-11754.002.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.servi

[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384864#comment-14384864
 ] 

Haohui Mai commented on HADOOP-11754:
-

The v2 patch moves the instance of {{SignerSecretProvider}} to {{HttpServer2}}, 
which allows HDFS / RM / AM / Timeline server to customize their needs. The 
default is to allow falling back to random secret if the provider fails to read 
the file. All HDFS daemons will not fall back to random secret provider in 
secure mode, which is consistent with the existing behavior.
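
The fallback policy described above can be sketched as follows. This is a simplified illustration, not the patch's actual code: the class and method names are hypothetical, and the real implementation works through Hadoop's {{SignerSecretProvider}} hierarchy rather than raw byte arrays.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.SecureRandom;

// Illustrative sketch of "prefer the secret file; fall back to a random
// secret only outside secure mode" (names are hypothetical).
public class SecretProviderChooser {
    public static byte[] chooseSecret(Path secretFile, boolean secureMode)
            throws IOException {
        try {
            // Preferred: the configured signature secret file.
            return Files.readAllBytes(secretFile);
        } catch (IOException e) {
            if (secureMode) {
                // Secure daemons must fail fast rather than silently
                // weaken authentication with a random secret.
                throw e;
            }
            // Non-secure mode: generate a per-process random secret.
            byte[] random = new byte[32];
            new SecureRandom().nextBytes(random);
            return random;
        }
    }
}
```

The key design point is that the fallback is a per-server policy decision, which is why moving the provider into {{HttpServer2}} lets each daemon choose its own behavior.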

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch, HADOOP-11754.001.patch, HADOOP-11754.002.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
>   at 
> org.apache.h

[jira] [Updated] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11754:

Attachment: HADOOP-11754.002.patch

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch, HADOOP-11754.001.patch, HADOOP-11754.002.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1

[jira] [Commented] (HADOOP-11764) Hadoop should have the option to use directory other than tmp for extracting and loading leveldbjni

2015-03-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384816#comment-14384816
 ] 

Allen Wittenauer commented on HADOOP-11764:
---

bq. I'm afraid we can't set it in config file, because config file is read by 
the daemon, but we need to start the daemon with this opt.

It just needs to be set as a system property prior to invoking the class.  
That's all putting it on the command line does, so why can't we do this in 
Configuration?

bq.  If the temporary native lib is redirected to another dir, we also need to 
add that dir to JAVA_LIBRARY_PATH.

This is sounding more and more like a complete mess, with no real thought as to 
how admins are supposed to deal with it.
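
Setting the property before the native library's class is first touched could look like the sketch below. Note the property name {{library.leveldbjni.path}} is an assumption about what leveldbjni consults for its extraction directory, and the directory path is a placeholder; this is not a confirmed Hadoop mechanism.

```java
// Hedged sketch: redirect leveldbjni's native-library extraction away
// from /tmp. The property name "library.leveldbjni.path" is an
// assumption, and /var/lib/hadoop/native-tmp is a placeholder path.
public class NativeLibDir {
    public static void configureExtractionDir(String dir) {
        // Must run before the leveldbjni class is first loaded, because
        // extraction happens in its static initializer.
        System.setProperty("library.leveldbjni.path", dir);
    }

    public static void main(String[] args) {
        configureExtractionDir("/var/lib/hadoop/native-tmp");
        System.out.println(System.getProperty("library.leveldbjni.path"));
    }
}
```

The ordering constraint (property set before first class load) is exactly why the discussion above questions whether a Configuration-file setting, which is read by the already-started daemon, can work here.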

> Hadoop should have the option to use directory other than tmp for extracting 
> and loading leveldbjni
> ---
>
> Key: HADOOP-11764
> URL: https://issues.apache.org/jira/browse/HADOOP-11764
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
> Attachments: YARN-3331.001.patch, YARN-3331.002.patch
>
>
> /tmp can be  required to be noexec in many environments. This causes a 
> problem when  nodemanager tries to load the leveldbjni library which can get 
> unpacked and executed from /tmp.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-11764) Hadoop should have the option to use directory other than tmp for extracting and loading leveldbjni

2015-03-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384816#comment-14384816
 ] 

Allen Wittenauer edited comment on HADOOP-11764 at 3/27/15 10:34 PM:
-

bq. I'm afraid we can't set it in config file, because config file is read by 
the daemon, but we need to start the daemon with this opt.

It just needs to be set as a system property prior to invoking the class.  
That's all putting it on the command line does, so why can't we do this in 
Configuration?

bq.  If the temporary native lib is redirected to another dir, we also need to 
add that dir to JAVA_LIBRARY_PATH.

This is sounding more and more like a complete mess, with no real thought as to 
how admins are supposed to deal with it.


was (Author: aw):
bq. I'm afraid we can't set it in config file, because config file is read by 
the daemon, but we need to start the daemon with this opt.

It just need to be set as a system property prior to invoking the class.  
That's all putting it on the command line does, so why can't we do this in 
Configuration?

bq.  If the temporal native lib is redirected to another dir, we also needs to 
add that dir to JAVA_LIBRARY_PATH.

This is sounding more and more like a complete mess, with no real thought as to 
how admins are supposed to deal with it.

> Hadoop should have the option to use directory other than tmp for extracting 
> and loading leveldbjni
> ---
>
> Key: HADOOP-11764
> URL: https://issues.apache.org/jira/browse/HADOOP-11764
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
> Attachments: YARN-3331.001.patch, YARN-3331.002.patch
>
>
> /tmp can be required to be noexec in many environments. This causes a
> problem when the nodemanager tries to load the leveldbjni library, which can
> get unpacked and executed from /tmp.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11639) Clean up Windows native code compilation warnings related to Windows Secure Container Executor.

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384803#comment-14384803
 ] 

Hudson commented on HADOOP-11639:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7449 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7449/])
HADOOP-11639. Clean up Windows native code compilation warnings related to 
Windows Secure Container Executor. Contributed by Remus Rusanu. (cnauroth: rev 
3836ad6c0b3331cf60286d134157c13985908230)
* hadoop-common-project/hadoop-common/src/main/winutils/client.c
* hadoop-common-project/hadoop-common/src/main/winutils/task.c
* hadoop-common-project/hadoop-common/src/main/winutils/systeminfo.c
* hadoop-common-project/hadoop-common/src/main/winutils/config.cpp
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/yarn/server/nodemanager/windows_secure_container_executor.c
* hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
* hadoop-common-project/hadoop-common/src/main/winutils/service.c
* hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
* hadoop-common-project/hadoop-common/CHANGES.txt


> Clean up Windows native code compilation warnings related to Windows Secure 
> Container Executor.
> ---
>
> Key: HADOOP-11639
> URL: https://issues.apache.org/jira/browse/HADOOP-11639
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Chris Nauroth
>Assignee: Remus Rusanu
> Fix For: 2.7.0
>
> Attachments: HADOOP-11639.00.patch, HADOOP-11639.01.patch, 
> HADOOP-11639.02.patch, HADOOP-11639.03.patch
>
>
> YARN-2198 introduced additional code in Hadoop Common to support the 
> NodeManager {{WindowsSecureContainerExecutor}}.  The patch introduced new 
> compilation warnings that we need to investigate and resolve.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11639) Clean up Windows native code compilation warnings related to Windows Secure Container Executor.

2015-03-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11639:
---
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

+1 for the patch.  I committed this to trunk, branch-2 and branch-2.7.  Remus, 
thank you for the contribution.  Kiran, thank you for helping with code review 
and testing.

> Clean up Windows native code compilation warnings related to Windows Secure 
> Container Executor.
> ---
>
> Key: HADOOP-11639
> URL: https://issues.apache.org/jira/browse/HADOOP-11639
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Chris Nauroth
>Assignee: Remus Rusanu
> Fix For: 2.7.0
>
> Attachments: HADOOP-11639.00.patch, HADOOP-11639.01.patch, 
> HADOOP-11639.02.patch, HADOOP-11639.03.patch
>
>
> YARN-2198 introduced additional code in Hadoop Common to support the 
> NodeManager {{WindowsSecureContainerExecutor}}.  The patch introduced new 
> compilation warnings that we need to investigate and resolve.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11664) Loading predefined EC schemas from configuration

2015-03-27 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HADOOP-11664.

Resolution: Fixed

> Loading predefined EC schemas from configuration
> 
>
> Key: HADOOP-11664
> URL: https://issues.apache.org/jira/browse/HADOOP-11664
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11664-v2.patch, HADOOP-11664-v3.patch, 
> HDFS-7371_v1.patch
>
>
> System administrator can configure multiple EC codecs in hdfs-site.xml file, 
> and codec instances or schemas in a new configuration file named 
> ec-schema.xml in the conf folder. A codec can be referenced by its instance 
> or schema using the codec name, and a schema can be utilized and specified by 
> the schema name for a folder or EC ZONE to enforce EC. Once a schema is used 
> to define an EC ZONE, then its associated parameter values will be stored as 
> xattributes and respected thereafter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11664) Loading predefined EC schemas from configuration

2015-03-27 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384755#comment-14384755
 ] 

Zhe Zhang commented on HADOOP-11664:


I agree. +1 on the patch; I committed it.

> Loading predefined EC schemas from configuration
> 
>
> Key: HADOOP-11664
> URL: https://issues.apache.org/jira/browse/HADOOP-11664
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11664-v2.patch, HADOOP-11664-v3.patch, 
> HDFS-7371_v1.patch
>
>
> System administrator can configure multiple EC codecs in hdfs-site.xml file, 
> and codec instances or schemas in a new configuration file named 
> ec-schema.xml in the conf folder. A codec can be referenced by its instance 
> or schema using the codec name, and a schema can be utilized and specified by 
> the schema name for a folder or EC ZONE to enforce EC. Once a schema is used 
> to define an EC ZONE, then its associated parameter values will be stored as 
> xattributes and respected thereafter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11664) Loading predefined EC schemas from configuration

2015-03-27 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-11664:
---
Fix Version/s: HDFS-7285

> Loading predefined EC schemas from configuration
> 
>
> Key: HADOOP-11664
> URL: https://issues.apache.org/jira/browse/HADOOP-11664
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11664-v2.patch, HADOOP-11664-v3.patch, 
> HDFS-7371_v1.patch
>
>
> System administrator can configure multiple EC codecs in hdfs-site.xml file, 
> and codec instances or schemas in a new configuration file named 
> ec-schema.xml in the conf folder. A codec can be referenced by its instance 
> or schema using the codec name, and a schema can be utilized and specified by 
> the schema name for a folder or EC ZONE to enforce EC. Once a schema is used 
> to define an EC ZONE, then its associated parameter values will be stored as 
> xattributes and respected thereafter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11764) Hadoop should have the option to use directory other than tmp for extracting and loading leveldbjni

2015-03-27 Thread Anubhav Dhoot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384735#comment-14384735
 ] 

Anubhav Dhoot commented on HADOOP-11764:


bq. If the temporal native lib is redirected to another dir, we also needs to 
add that dir to JAVA_LIBRARY_PATH. Otherwise, we may still end up with native 
lib not found.
Hi Zhijie, I am guessing this is not something that needs to be done in this 
jira, which tries to address the /tmp noexec problem, right?

> Hadoop should have the option to use directory other than tmp for extracting 
> and loading leveldbjni
> ---
>
> Key: HADOOP-11764
> URL: https://issues.apache.org/jira/browse/HADOOP-11764
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
> Attachments: YARN-3331.001.patch, YARN-3331.002.patch
>
>
> /tmp can be required to be noexec in many environments. This causes a
> problem when the nodemanager tries to load the leveldbjni library, which can
> get unpacked and executed from /tmp.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HADOOP-11764) Hadoop should have the option to use directory other than tmp for extracting and loading leveldbjni

2015-03-27 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen moved YARN-3331 to HADOOP-11764:


Target Version/s: 2.8.0  (was: 2.8.0)
 Key: HADOOP-11764  (was: YARN-3331)
 Project: Hadoop Common  (was: Hadoop YARN)

> Hadoop should have the option to use directory other than tmp for extracting 
> and loading leveldbjni
> ---
>
> Key: HADOOP-11764
> URL: https://issues.apache.org/jira/browse/HADOOP-11764
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
> Attachments: YARN-3331.001.patch, YARN-3331.002.patch
>
>
> /tmp can be required to be noexec in many environments. This causes a
> problem when the nodemanager tries to load the leveldbjni library, which can
> get unpacked and executed from /tmp.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384585#comment-14384585
 ] 

Haohui Mai commented on HADOOP-11754:
-

bq. AuthenticationFilter check it customized secret exists (no matter it comes 
from secret file or directly put in the configuration) or not to decide 
failback to random secret no matter AuthenticationFilter is used in secure mode 
(Kerberos handler) or in insecure mode (Pseudo handler).

bq. AuthenticationFilter no longer accepts secret that is put inside the 
configuration file. It may not be the best practice, but it's a valid scenario 
before. AuthenticationFilter also forces the user to have the secret file in 
secure mode, and it's not able to failback to random secret.

We never support this use case. It is a misunderstanding of the code. See 
https://issues.apache.org/jira/browse/HADOOP-10670?focusedCommentId=14380372&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14380372

For the Timeline / RM server, it looks like we have a lot of customized use 
cases here. The right fix appears to be moving some of the code into 
HttpServer2 and allowing customization.

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch, HADOOP-11754.001.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-s

[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384532#comment-14384532
 ] 

Zhijie Shen commented on HADOOP-11754:
--

Before 2.7:

* {{AuthenticationFilterInitializer}}, {{RMAuthenticationFilterInitializer}} 
and {{TimelineAuthenticationFilterInitializer}} all read the secret file, but 
behave a bit differently. {{FileSignerSecretProvider}} seems to adopt the 
behavior of {{RMAuthenticationFilterInitializer}}. However, unlike 
{{RMAuthenticationFilterInitializer}}, {{AuthenticationFilterInitializer}} 
doesn't allow a null secret file path, while 
{{TimelineAuthenticationFilterInitializer}} DOESN'T have a default secret file 
path.

* {{AuthenticationFilter}} checks whether a customized secret exists (whether 
it comes from the secret file or is put directly in the configuration) to 
decide whether to fall back to a random secret, regardless of whether 
{{AuthenticationFilter}} is used in secure mode (Kerberos handler) or insecure 
mode (Pseudo handler).

After these changes in 2.7:

* {{RMAuthenticationFilterInitializer}}'s behavior is chosen as the standard.

* {{AuthenticationFilter}} no longer accepts a secret that is put inside the 
configuration file. It may not be the best practice, but it was a valid 
scenario before. {{AuthenticationFilter}} also forces the user to have the 
secret file in secure mode, and it is no longer able to fall back to a random 
secret.

Talking about the timeline server specifically, when it is started in secure 
mode with the default secret config, the following happens:

1. It tries to read the secret file, but the file doesn't exist.
2. It checks and finds it's in secure mode, throws the exception, and 
consequently the timeline server fails to start.
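The pre-2.7 versus 2.7 decision flow discussed here can be sketched as follows. This is a simplified illustration, not the actual {{FileSignerSecretProvider}} code; the method name and the secure-mode flag are invented for clarity.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.SecureRandom;

public class SecretDecisionSketch {
    // Sketch of the contested behavior: use the file-based secret if the
    // file exists; otherwise, 2.7 fails hard in secure mode, whereas the
    // pre-2.7 filters would fall back to a random secret in either mode.
    static byte[] resolveSecret(String secretFile, boolean secureMode)
            throws IOException {
        if (secretFile != null) {
            Path p = Paths.get(secretFile);
            if (Files.exists(p)) {
                return Files.readAllBytes(p);  // customized secret wins
            }
        }
        if (secureMode) {
            // 2.7 behavior per the discussion: no fallback in secure mode
            throw new IOException(
                "Could not read signature secret file: " + secretFile);
        }
        byte[] random = new byte[32];
        new SecureRandom().nextBytes(random);  // random-secret fallback
        return random;
    }
}
```

The incompatibility complaint above corresponds to the `secureMode` branch: a deployment that previously reached the random fallback now throws instead.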

bq. I think it is a separate issue and we can look at it in a separate jira.

I'm afraid it's not a separate issue. This change is going to break the 
timeline server secure deployment.

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch, HADOOP-11754.001.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCy

[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384455#comment-14384455
 ] 

Haohui Mai commented on HADOOP-11754:
-

bq. I'm not sure why we want to prevent using the random secret in the secure 
mode. 

This is for fallback only. The behavior is consistent with the previous 
behavior. The authentication filter bails out when the secret is not found. 
This is true for both RM and other users of the authentication filters.

bq. As is mentioned above, it's an incompatible semantics change, which will 
break RM web interface and timeline server secure deployment. 

Can you be more specific? What are the behaviors before and after the changes?

bq. To be specific, timeline server never has a default secret file before. 
This patch will forces it to have one.

I'm confused. What does the timeline server have to do with 
{{RMFilterInitializer}}? I think it is a separate issue and we can look at it 
in a separate jira.

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch, HADOOP-11754.001.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationF

[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384423#comment-14384423
 ] 

Zhijie Shen commented on HADOOP-11754:
--

To be specific, the timeline server never had a default secret file before. 
This patch will force it to have one.

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch, HADOOP-11754.001.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java

[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384418#comment-14384418
 ] 

Larry McCay commented on HADOOP-11717:
--

Those findbug warnings are unrelated to this patch.

[~owen.omalley] - can you give this a review when you get a chance?

> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch, HADOOP-11717-7.patch
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.
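The expiration check listed among the benefits above can be illustrated with a minimal, stdlib-only sketch. This deliberately skips signature verification — a real deployment would verify the JWT's signature first (e.g. via nimbus-jose-jwt, as the issue proposes) before trusting any claim — and the naive string parsing of the "exp" claim is for illustration only.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtExpirySketch {
    // Decodes the middle (payload) segment of a JWT and extracts the "exp"
    // claim (seconds since the epoch) with naive string scanning.
    // Illustration only: real code must verify the signature first and use
    // a proper JSON parser.
    static long expirationSeconds(String jwt) {
        String payloadB64 = jwt.split("\\.")[1];
        String payload = new String(
            Base64.getUrlDecoder().decode(payloadB64), StandardCharsets.UTF_8);
        int i = payload.indexOf("\"exp\":") + 6;
        int j = i;
        while (j < payload.length() && Character.isDigit(payload.charAt(j))) {
            j++;
        }
        return Long.parseLong(payload.substring(i, j));
    }

    static boolean isExpired(String jwt, long nowSeconds) {
        return nowSeconds >= expirationSeconds(jwt);
    }
}
```

Rejecting a token whose "exp" has passed is what bounds the window in which a compromised token remains usable.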



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384413#comment-14384413
 ] 

Hadoop QA commented on HADOOP-11717:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707839/HADOOP-11717-7.patch
  against trunk revision 05499b1.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6016//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6016//artifact/patchprocess/newPatchFindbugsWarningshadoop-auth.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6016//console

This message is automatically generated.

> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch, HADOOP-11717-7.patch
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11758) Add options to filter out too much granular tracing spans

2015-03-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384371#comment-14384371
 ] 

Colin Patrick McCabe commented on HADOOP-11758:
---

I also wonder if we can do tracing at a level slightly above writeChunk.  
writeChunk operates at the level of 512-byte chunks, but writes are often 
larger than that.

If you look here:
{code}
  private void writeChecksumChunks(byte b[], int off, int len)
  throws IOException {
sum.calculateChunkedSums(b, off, len, checksum, 0);
for (int i = 0; i < len; i += sum.getBytesPerChecksum()) {
  int chunkLen = Math.min(sum.getBytesPerChecksum(), len - i);
  int ckOffset = i / sum.getBytesPerChecksum() * getChecksumSize();
  writeChunk(b, off + i, chunkLen, checksum, ckOffset, getChecksumSize());
}
  }
{code}
you can see that if we do a 4 kilobyte write, writeChunk will get called 8 
times.  But really it would be better just to have one span representing the 
entire 4k write.
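The span-count arithmetic above can be checked with a small standalone sketch that mirrors the loop in writeChecksumChunks (the class and method names here are illustrative, not Hadoop code):

```java
// Sketch: why per-chunk tracing is noisy. With 512-byte checksum chunks,
// a single application-level write of len bytes triggers
// ceil(len / bytesPerChecksum) writeChunk calls -- each of which would
// produce its own trace span if instrumented at that level.
public class ChunkSpanCount {
    static final int BYTES_PER_CHECKSUM = 512; // Hadoop default io.bytes.per.checksum

    // Number of writeChunk invocations for a write of 'len' bytes,
    // mirroring the loop in writeChecksumChunks.
    static int chunkCalls(int len) {
        int calls = 0;
        for (int i = 0; i < len; i += BYTES_PER_CHECKSUM) {
            calls++;
        }
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(chunkCalls(4096)); // 8 spans for one 4 KB write
    }
}
```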

> Add options to filter out too much granular tracing spans
> -
>
> Key: HADOOP-11758
> URL: https://issues.apache.org/jira/browse/HADOOP-11758
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tracing
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: testWriteTraceHooks.html
>
>
> in order to avoid queue in span receiver spills



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11758) Add options to filter out too much granular tracing spans

2015-03-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384334#comment-14384334
 ] 

Colin Patrick McCabe commented on HADOOP-11758:
---

Hmm.  The idea behind HTrace is not to trace every operation.  We should be 
tracing less than 1% of all operations.  At that point, we wouldn't really have 
a problem with too many trace spans.

The only time you would turn on tracing for every operation is when doing 
debugging.  In that case it's like turning log4j up to TRACE level-- you know 
you're going to get swamped.

So basically I would argue that we already do have an option to filter out too 
many trace spans-- setting the trace sampler to ProbabilitySampler.
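The sampling idea referenced above can be sketched in a few lines. This is a minimal illustration of probability-based sampling, not HTrace's actual ProbabilitySampler class: each top-level operation is traced only with some configured probability, which keeps span volume manageable even under load.

```java
import java.util.Random;

// Minimal sketch of probability-based span sampling (the idea behind a
// probability sampler, not the actual HTrace class): each top-level
// operation is traced only with probability 'fraction'.
public class FractionSampler {
    private final double fraction;
    private final Random rng;

    FractionSampler(double fraction, long seed) {
        this.fraction = fraction;
        this.rng = new Random(seed);
    }

    // Decide once per top-level operation whether to create a trace span.
    boolean next() {
        return rng.nextDouble() < fraction;
    }

    public static void main(String[] args) {
        FractionSampler s = new FractionSampler(0.01, 42L);
        int sampled = 0;
        for (int i = 0; i < 100_000; i++) {
            if (s.next()) sampled++;
        }
        System.out.println(sampled); // roughly 1% of 100,000 operations
    }
}
```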

> Add options to filter out too much granular tracing spans
> -
>
> Key: HADOOP-11758
> URL: https://issues.apache.org/jira/browse/HADOOP-11758
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tracing
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: testWriteTraceHooks.html
>
>
> in order to avoid queue in span receiver spills



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11257) Update "hadoop jar" documentation to warn against using it for launching yarn jars

2015-03-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384318#comment-14384318
 ] 

Colin Patrick McCabe commented on HADOOP-11257:
---

Thanks for your quick action on this, [~cnauroth] and [~iwasakims].  +1

> Update "hadoop jar" documentation to warn against using it for launching yarn 
> jars
> --
>
> Key: HADOOP-11257
> URL: https://issues.apache.org/jira/browse/HADOOP-11257
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.1.1-beta
>Reporter: Allen Wittenauer
>Assignee: Masatake Iwasaki
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: HADOOP-11257-branch-2.addendum.001.patch, 
> HADOOP-11257.1.patch, HADOOP-11257.1.patch, HADOOP-11257.2.patch, 
> HADOOP-11257.3.patch, HADOOP-11257.4.patch, HADOOP-11257.4.patch
>
>
> We should update the "hadoop jar" documentation to warn against using it for 
> launching yarn jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11717:
-
Status: Patch Available  (was: Open)

> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch, HADOOP-11717-7.patch
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11717:
-
Attachment: HADOOP-11717-7.patch

Removed extraneous imports in the test that were causing a build failure.

> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch, HADOOP-11717-7.patch
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11717:
-
Status: Open  (was: Patch Available)

> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11760) Fix typo of javadoc in DistCp

2015-03-27 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384266#comment-14384266
 ] 

Harsh J commented on HADOOP-11760:
--

[~airbots] - Thanks for these fixes. In the future, please feel free to combine 
all the typo issues you find into a single JIRA and patch, to save both ends 
the extra overhead that otherwise goes into committing trivial fixes.

> Fix typo of javadoc in DistCp
> -
>
> Key: HADOOP-11760
> URL: https://issues.apache.org/jira/browse/HADOOP-11760
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>  Labels: newbie++
> Fix For: 2.8.0
>
> Attachments: HADOOP-11760.patch
>
>
> /**
>* Create a default working folder for the job, under the
>* job staging directory
>*
>* @return Returns the working folder information
>* @throws Exception - EXception if any
>*/
>   private Path createMetaFolderPath() throws Exception {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11717:
-
Status: Patch Available  (was: Open)

Resubmitting the patch - shouldn't have caused the build to fail...

> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11717:
-
Status: Open  (was: Patch Available)

> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384199#comment-14384199
 ] 

Hadoop QA commented on HADOOP-11717:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707807/HADOOP-11717-6.patch
  against trunk revision 05499b1.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6015//console

This message is automatically generated.

> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-03-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384180#comment-14384180
 ] 

Allen Wittenauer commented on HADOOP-11731:
---

bq. Should we throw an exception if we can't parse the version?

Nope, otherwise {{releasedocmaker.py --version trunk-win}} would fail.

> Rework the changelog and releasenotes
> -
>
> Key: HADOOP-11731
> URL: https://issues.apache.org/jira/browse/HADOOP-11731
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
> HADOOP-11731-03.patch, HADOOP-11731-04.patch
>
>
> The current way we generate these build artifacts is awful.  Plus they are 
> ugly and, in the case of release notes, very hard to pick out what is 
> important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11717:
-
Attachment: HADOOP-11717-6.patch

Attaching new patch - it addresses:

* some minor changes to tests as requested by [~drankye]
* separation of certificate PEM parsing into CertificateUtil class
* more appropriate handling of token validation errors to reauthenticate rather 
than return a 403

I think that any additional refactoring can be done as a result of needing to 
leverage common code.



> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11717:
-
Status: Patch Available  (was: Open)

> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384151#comment-14384151
 ] 

Zhijie Shen commented on HADOOP-11754:
--

I'm not sure why we want to prevent using the random secret in the secure mode. 
As is mentioned above, it's an incompatible semantics change, which will break 
RM web interface and timeline server secure deployment. I don't think we have 
conveyed this secure setup requirement of secret file to the users (e.g., 
Ambari). [~vinodkv], any idea?
{code}
// Fallback to RandomSignerSecretProvider if the secret file is
// unspecified in insecure mode
if (!isSecurityEnabled && config.getProperty(SIGNATURE_SECRET_FILE) == null) {
  name = "random";
}
{code}

{code}
if (!isSecurityEnabled) {
  LOG.info("The signature secret of the authentication filter is " +
      "unspecified, falling back to use random secrets.");
  provider = new RandomSignerSecretProvider();
  provider.init(config, servletContext, validity);
} else {
  throw e;
}
{code}
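The fallback decision quoted above can be distilled into a small standalone function (a hypothetical simplification, not the actual AuthenticationFilter code): in insecure mode with no signature secret file configured, fall back to a random signer secret instead of failing filter init.

```java
// Hypothetical distillation of the provider-selection logic quoted above,
// not the actual AuthenticationFilter code.
public class SecretProviderChoice {
    static String choose(boolean isSecurityEnabled, String signatureSecretFile) {
        if (!isSecurityEnabled && signatureSecretFile == null) {
            return "random"; // RandomSignerSecretProvider
        }
        return "file";       // FileSignerSecretProvider (reads the secret file)
    }

    public static void main(String[] args) {
        System.out.println(choose(false, null)); // random
        System.out.println(choose(true, null));  // file -- the case under debate
    }
}
```

The debated incompatibility is visible in the second call: in secure mode the file provider is chosen even when no secret file was configured.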

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch, HADOOP-11754.001.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secre

[jira] [Commented] (HADOOP-11763) RM in insecure model get start failure after HADOOP-10670.

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384110#comment-14384110
 ] 

Hadoop QA commented on HADOOP-11763:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707784/HADOOP-11763.patch
  against trunk revision 05499b1.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ipc.TestRPCWaitForProxy

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6014//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6014//console

This message is automatically generated.

> RM in insecure model get start failure after HADOOP-10670.
> --
>
> Key: HADOOP-11763
> URL: https://issues.apache.org/jira/browse/HADOOP-11763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Junping Du
>Priority: Blocker
> Attachments: HADOOP-11763.patch
>
>
> TestDistributedShell failed due to RM start failure.
> The log exception:
> {code}
> 2015-03-27 14:43:17,190 WARN  [RM-0] mortbay.log (Slf4jLog.java:warn(89)) - 
> Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@2d2d0132{/,file:/Users/jdu/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-comm
> on/target/classes/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/jdu/hadoop-http-auth-signature-secret
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
> at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
> at 
> org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
> at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
> at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
> at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
> at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
> at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
> at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
> at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
> at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
> at org.mortbay.jetty.Server.doStart(Server.java:224)
> at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
> at 
> org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:989)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1089)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.hadoop.yarn.server.MiniYARNCl

[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384074#comment-14384074
 ] 

Kai Zheng commented on HADOOP-11754:


I checked the failures; they're caused by the patch. The cause is in 
{{TestKerberosAuthenticator}}: it runs in secure mode, but no signature file 
property is set. With the patch, secure mode selects the {{file}} type without 
checking whether the signature file property is set, so 
{{FileSignerSecretProvider}} is used anyway; but when no signature file 
property is set, no file read is attempted, so no exception is thrown. 
Therefore its {{getCurrentSecret()}} returns null even though its {{init()}} 
succeeds.
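The failure mode described above can be reproduced with a simplified stand-in (hypothetical, not the actual FileSignerSecretProvider): init() succeeds when no file is configured because no read is attempted, leaving the secret null.

```java
// Hypothetical, simplified stand-in for FileSignerSecretProvider showing
// the failure mode: init() succeeds with no file configured because
// nothing is read, so getCurrentSecret() later returns null.
public class FileSecretSketch {
    private byte[] secret;

    void init(String secretFilePath) throws Exception {
        if (secretFilePath != null) {
            // A bad path would throw here, matching the "Could not read
            // signature secret file" failures elsewhere in this thread.
            secret = java.nio.file.Files.readAllBytes(
                java.nio.file.Paths.get(secretFilePath));
        }
        // No file configured: nothing read, no exception, secret stays null.
    }

    byte[] getCurrentSecret() {
        return secret;
    }

    public static void main(String[] args) throws Exception {
        FileSecretSketch p = new FileSecretSketch();
        p.init(null);                                     // succeeds silently
        System.out.println(p.getCurrentSecret() == null); // true
    }
}
```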

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch, HADOOP-11754.001.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.webap

[jira] [Commented] (HADOOP-10670) Allow AuthenticationFilters to load secret from signature secret files

2015-03-27 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384072#comment-14384072
 ] 

Junping Du commented on HADOOP-10670:
-

Just found that HADOOP-11754 is already tracking this. Marked HADOOP-11763 as a 
duplicate.

> Allow AuthenticationFilters to load secret from signature secret files
> --
>
> Key: HADOOP-10670
> URL: https://issues.apache.org/jira/browse/HADOOP-10670
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-10670-v4.patch, HADOOP-10670-v5.patch, 
> HADOOP-10670-v6.patch, hadoop-10670-v2.patch, hadoop-10670-v3.patch, 
> hadoop-10670.patch
>
>
> In Hadoop web console, by using AuthenticationFilterInitializer, it's allowed 
> to configure AuthenticationFilter for the required signature secret by 
> specifying signature.secret.file property. This improvement would also allow 
> this when AuthenticationFilterInitializer isn't used in situations like 
> webhdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11763) RM in insecure model get start failure after HADOOP-10670.

2015-03-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-11763:

Resolution: Duplicate
  Assignee: (was: Junping Du)
Status: Resolved  (was: Patch Available)

> RM in insecure model get start failure after HADOOP-10670.
> --
>
> Key: HADOOP-11763
> URL: https://issues.apache.org/jira/browse/HADOOP-11763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Junping Du
>Priority: Blocker
> Attachments: HADOOP-11763.patch
>
>
> TestDistributedShell get failed due to RM start failure.
> The log exception:
> {code}
> 2015-03-27 14:43:17,190 WARN  [RM-0] mortbay.log (Slf4jLog.java:warn(89)) - 
> Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@2d2d0132{/,file:/Users/jdu/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-comm
> on/target/classes/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/jdu/hadoop-http-auth-signature-secret
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
> at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
> at 
> org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
> at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
> at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
> at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
> at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
> at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
> at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
> at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
> at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
> at org.mortbay.jetty.Server.doStart(Server.java:224)
> at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
> at 
> org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:989)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1089)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$2.run(MiniYARNCluster.java:312)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/jdu/hadoop-http-auth-signature-secret
> at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
> ... 23 more
> {code}





[jira] [Commented] (HADOOP-10670) Allow AuthenticationFilters to load secret from signature secret files

2015-03-27 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384015#comment-14384015
 ] 

Junping Du commented on HADOOP-10670:
-

Gentlemen, while tracing YARN test failures ({{TestDistributedShell}}), I found 
that this patch breaks RM startup in insecure mode, which is very risky for 
2.7. I filed HADOOP-11763 and delivered a quick patch to fix it (commenting out 
the default value of "hadoop.http.authentication.signature.secret.file").
I am not sure whether we can quickly find a better way (like the comment above 
suggests: "modify the RM to avoid binding the filter when it is not in the 
secure mode"). If not, let's go with the easy fix in HADOOP-11763, or revert 
the change here for the 2.7 release.
CC to [~vinodkv].
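For reference, explicitly configuring the secret file (rather than relying on a default) would look like the following core-site.xml fragment; the property name is taken from this thread, while the file path shown is only an assumed example:

```xml
<!-- Illustrative core-site.xml fragment: point the HTTP authentication
     filter at an existing, readable secret file. The path is an example. -->
<property>
  <name>hadoop.http.authentication.signature.secret.file</name>
  <value>/etc/hadoop/conf/http-auth-signature-secret</value>
</property>
```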

> Allow AuthenticationFilters to load secret from signature secret files
> --
>
> Key: HADOOP-10670
> URL: https://issues.apache.org/jira/browse/HADOOP-10670
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-10670-v4.patch, HADOOP-10670-v5.patch, 
> HADOOP-10670-v6.patch, hadoop-10670-v2.patch, hadoop-10670-v3.patch, 
> hadoop-10670.patch
>
>
> In Hadoop web console, by using AuthenticationFilterInitializer, it's allowed 
> to configure AuthenticationFilter for the required signature secret by 
> specifying signature.secret.file property. This improvement would also allow 
> this when AuthenticationFilterInitializer isn't used in situations like 
> webhdfs.





[jira] [Commented] (HADOOP-11760) Fix typo of javadoc in DistCp

2015-03-27 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384012#comment-14384012
 ] 

Brahma Reddy Battula commented on HADOOP-11760:
---

Thanks a lot [~ozawa]!!!

> Fix typo of javadoc in DistCp
> -
>
> Key: HADOOP-11760
> URL: https://issues.apache.org/jira/browse/HADOOP-11760
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>  Labels: newbie++
> Fix For: 2.8.0
>
> Attachments: HADOOP-11760.patch
>
>
> /**
>* Create a default working folder for the job, under the
>* job staging directory
>*
>* @return Returns the working folder information
>* @throws Exception - EXception if any
>*/
>   private Path createMetaFolderPath() throws Exception {





[jira] [Updated] (HADOOP-11763) RM in insecure model get start failure after HADOOP-10670.

2015-03-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-11763:

Priority: Blocker  (was: Major)

> RM in insecure model get start failure after HADOOP-10670.
> --
>
> Key: HADOOP-11763
> URL: https://issues.apache.org/jira/browse/HADOOP-11763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Attachments: HADOOP-11763.patch
>
>
> TestDistributedShell get failed due to RM start failure.





[jira] [Updated] (HADOOP-11763) RM in insecure model get start failure after HADOOP-10670.

2015-03-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-11763:

Status: Patch Available  (was: Open)

> RM in insecure model get start failure after HADOOP-10670.
> --
>
> Key: HADOOP-11763
> URL: https://issues.apache.org/jira/browse/HADOOP-11763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-11763.patch
>
>
> TestDistributedShell get failed due to RM start failure.





[jira] [Updated] (HADOOP-11763) RM in insecure model get start failure after HADOOP-10670.

2015-03-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-11763:

Attachment: HADOOP-11763.patch

> RM in insecure model get start failure after HADOOP-10670.
> --
>
> Key: HADOOP-11763
> URL: https://issues.apache.org/jira/browse/HADOOP-11763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-11763.patch
>
>
> TestDistributedShell get failed due to RM start failure.





[jira] [Updated] (HADOOP-11763) RM in insecure model get start failure after HADOOP-10670.

2015-03-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-11763:

Summary: RM in insecure model get start failure after HADOOP-10670.  (was: 
TestDistributedShell get failed due to RM start failure.)

> RM in insecure model get start failure after HADOOP-10670.
> --
>
> Key: HADOOP-11763
> URL: https://issues.apache.org/jira/browse/HADOOP-11763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Junping Du
>Assignee: Junping Du
>
> TestDistributedShell get failed due to RM start failure.





[jira] [Updated] (HADOOP-11763) TestDistributedShell get failed due to RM start failure.

2015-03-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-11763:

Description: 
TestDistributedShell get failed due to RM start failure.


[jira] [Moved] (HADOOP-11763) TestDistributedShell get failed due to RM start failure.

2015-03-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du moved YARN-3408 to HADOOP-11763:
---

 Component/s: (was: resourcemanager)
  security
Target Version/s: 2.7.0  (was: 2.8.0)
 Key: HADOOP-11763  (was: YARN-3408)
 Project: Hadoop Common  (was: Hadoop YARN)

> TestDistributedShell get failed due to RM start failure.
> 
>
> Key: HADOOP-11763
> URL: https://issues.apache.org/jira/browse/HADOOP-11763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Junping Du
>Assignee: Junping Du
>
> The log exception:
> {code}
> 2015-03-27 14:43:17,190 WARN  [RM-0] mortbay.log (Slf4jLog.java:warn(89)) - 
> Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@2d2d0132{/,file:/Users/jdu/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/jdu/hadoop-http-auth-signature-secret
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
> at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
> at 
> org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
> at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
> at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
> at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
> at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
> at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
> at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
> at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
> at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
> at org.mortbay.jetty.Server.doStart(Server.java:224)
> at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
> at 
> org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:989)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1089)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$2.run(MiniYARNCluster.java:312)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/jdu/hadoop-http-auth-signature-secret
> at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
> ... 23 more
> {code}
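The root cause in the trace above is simply a missing file: FileSignerSecretProvider refuses to initialize when the configured signature secret file is absent, and the whole RM web app fails to start. A hedged sketch of provisioning such a file up front (the class and helper names here are hypothetical illustrations, not the actual test fix):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch: pre-create the signature secret file that
// FileSignerSecretProvider expects, so the embedded RM's
// AuthenticationFilter can initialize instead of aborting startup.
public class SecretFileSetup {
    /** Writes the secret as the file's raw contents, which is what the provider reads. */
    public static Path writeSecret(Path file, String secret) throws IOException {
        return Files.write(file, secret.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("hadoop-http-auth-signature", ".secret");
        writeSecret(f, "test-secret");
        // Print the contents back to show the file is readable.
        System.out.println(new String(Files.readAllBytes(f), StandardCharsets.UTF_8));
    }
}
```

Presumably the test environment would then point the signature-secret-file property of the HTTP authentication filter at this path before MiniYARNCluster starts.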



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383985#comment-14383985
 ] 

Hudson commented on HADOOP-11691:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #136 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/136/])
HADOOP-11691. X86 build of libwinutils is broken. Contributed by Kiran Kumar M 
R. (cnauroth: rev af618f23a70508111f490a24d74fc90161cfc079)
* hadoop-common-project/hadoop-common/src/main/winutils/win8sdk.props
* hadoop-common-project/hadoop-common/CHANGES.txt


> X86 build of libwinutils is broken
> --
>
> Key: HADOOP-11691
> URL: https://issues.apache.org/jira/browse/HADOOP-11691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 2.7.0
>Reporter: Remus Rusanu
>Assignee: Kiran Kumar M R
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
> HADOOP-11691-003.patch
>
>
> Hadoop-9922 recently fixed x86 build. After YARN-2190 compiling x86 results 
> in error:
> {code}
> (Link target) ->
>   
> E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
>  : fatal error LNK1112: module machine type 'x64' conflicts with target 
> machine type 'X86' 
> [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
> {code}
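LNK1112 means an x64 object file was fed into an X86 link. A hedged illustration of the kind of MSBuild pattern involved (the actual change lives in win8sdk.props, which is not shown here): keep the linker's target machine conditional on the build platform so the two can never disagree.

```xml
<!-- Illustrative fragment only, not the actual HADOOP-11691 patch. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemDefinitionGroup Condition="'$(Platform)'=='Win32'">
    <Link>
      <TargetMachine>MachineX86</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Platform)'=='x64'">
    <Link>
      <TargetMachine>MachineX64</TargetMachine>
    </Link>
  </ItemDefinitionGroup>
</Project>
```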





[jira] [Commented] (HADOOP-11553) Formalize the shell API

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383987#comment-14383987
 ] 

Hudson commented on HADOOP-11553:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #136 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/136/])
HADOOP-11553. Foramlize the shell API (aw) (aw: rev 
b30ca8ce0e0d435327e179f0877bd58fa3896793)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-project/src/site/site.xml
* hadoop-common-project/hadoop-common/pom.xml
* hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* dev-support/shelldocs.py
HADOOP-11553 addendum fix the typo in the changes file (aw: rev 
5695c7a541c1a3092040523446f1ba689fb495e3)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Formalize the shell API
> ---
>
> Key: HADOOP-11553
> URL: https://issues.apache.org/jira/browse/HADOOP-11553
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: documentation, scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
> HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
> HADOOP-11553-05.patch, HADOOP-11553-06.patch
>
>
> After HADOOP-11485, we need to formally document functions and environment 
> variables that 3rd parties can expect to be able to exist/use.





[jira] [Commented] (HADOOP-11748) The secrets of auth cookies should not be specified in configuration in clear text

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383986#comment-14383986
 ] 

Hudson commented on HADOOP-11748:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #136 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/136/])
HADOOP-11748. The secrets of auth cookies should not be specified in 
configuration in clear text. Contributed by Li Lu and Haohui Mai. (wheat9: rev 
47782cbf4a66d49064fd3dd6d1d1a19cc42157fc)
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProviderCreator.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java


> The secrets of auth cookies should not be specified in configuration in clear 
> text
> --
>
> Key: HADOOP-11748
> URL: https://issues.apache.org/jira/browse/HADOOP-11748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch
>
>
> Based on the discussion on HADOOP-10670, this jira proposes to remove 
> {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
> and security vulnerabilities.
> {quote}
> My understanding is that the use case of inlining the secret is never 
> supported. The property is used to pass the secret internally. The way it 
> works before HADOOP-10868 is the following:
> * Users specify the initializer of the authentication filter in the 
> configuration.
> * AuthenticationFilterInitializer reads the secret file. The server will not 
> start if the secret file does not exist. The initializer will set the 
> property if it reads the file correctly.
> * There is no way to specify the secret in the configuration out-of-the-box – 
> the secret is always overwritten by AuthenticationFilterInitializer.
> {quote}
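The flow described in the quote can be sketched as follows. The class and property names are illustrative simplifications, not the real AuthenticationFilter API; the point is that the file's contents always win over anything placed inline in the configuration.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class FilterInitSketch {
    // Illustrative property name, standing in for the real signature-secret key.
    static final String SECRET_PROP = "signature.secret";

    /** Mirrors the described flow: the secret file's contents overwrite inline config. */
    static Map<String, String> initFilterConfig(Path secretFile,
                                                Map<String, String> conf) throws IOException {
        Map<String, String> filterConfig = new HashMap<>(conf);
        // Throws (i.e. the server does not start) if the file is missing.
        String secret = new String(Files.readAllBytes(secretFile),
                                   StandardCharsets.UTF_8).trim();
        filterConfig.put(SECRET_PROP, secret);  // overwrites any inline value
        return filterConfig;
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("signer", ".secret");
        Files.write(f, "file-secret".getBytes(StandardCharsets.UTF_8));
        Map<String, String> conf = new HashMap<>();
        conf.put(SECRET_PROP, "inline-secret");  // never honored
        System.out.println(initFilterConfig(f, conf).get(SECRET_PROP));  // prints "file-secret"
    }
}
```

This is why removing StringSecretProvider loses nothing: an inline secret was never a supported input in the first place.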





[jira] [Updated] (HADOOP-11759) Remove an extra parameter described in Javadoc of TockenCache

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-11759:

Summary: Remove an extra parameter described in Javadoc of TockenCache  
(was: Javadoc of TockenCache has an extra parameter)

> Remove an extra parameter described in Javadoc of TockenCache
> -
>
> Key: HADOOP-11759
> URL: https://issues.apache.org/jira/browse/HADOOP-11759
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.6.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>  Labels: newbie++
> Attachments: HADOOP-11759.patch
>
>
> /**
>* get delegation token for a specific FS
>* @param fs
>* @param credentials
>* @param p
>* @param conf
>* @throws IOException
>*/
>   static void obtainTokensForNamenodesInternal(FileSystem fs, 
>   Credentials credentials, Configuration conf) throws IOException {
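For reference, the fix is simply to drop the stale `@param p` tag, since the method no longer takes a path argument; a cleaned-up Javadoc might read something like the sketch below (the parameter descriptions are suggested wording, not the committed patch):

```java
/**
 * Get a delegation token for a specific FileSystem.
 * @param fs the FileSystem to obtain the token from
 * @param credentials the Credentials object the token is added to
 * @param conf the Configuration to use
 * @throws IOException if token retrieval fails
 */
static void obtainTokensForNamenodesInternal(FileSystem fs,
    Credentials credentials, Configuration conf) throws IOException {
```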





[jira] [Commented] (HADOOP-11759) Javadoc of TockenCache has an extra parameter

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383958#comment-14383958
 ] 

Tsuyoshi Ozawa commented on HADOOP-11759:
-

The test failure is unrelated, since the patch only changes Javadoc. For the 
same reason, no new test is needed. Committing this shortly.

> Javadoc of TockenCache has an extra parameter
> -
>
> Key: HADOOP-11759
> URL: https://issues.apache.org/jira/browse/HADOOP-11759
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.6.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>  Labels: newbie++
> Attachments: HADOOP-11759.patch
>
>
> /**
>* get delegation token for a specific FS
>* @param fs
>* @param credentials
>* @param p
>* @param conf
>* @throws IOException
>*/
>   static void obtainTokensForNamenodesInternal(FileSystem fs, 
>   Credentials credentials, Configuration conf) throws IOException {





[jira] [Commented] (HADOOP-11759) Javadoc of TockenCache has an extra parameter

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383955#comment-14383955
 ] 

Hadoop QA commented on HADOOP-11759:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707740/HADOOP-11759.patch
  against trunk revision af618f2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager
org.apache.hadoop.hdfs.TestDatanodeDeath

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6013//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6013//console

This message is automatically generated.

> Javadoc of TockenCache has an extra parameter
> -
>
> Key: HADOOP-11759
> URL: https://issues.apache.org/jira/browse/HADOOP-11759
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.6.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>  Labels: newbie++
> Attachments: HADOOP-11759.patch
>
>
> /**
>* get delegation token for a specific FS
>* @param fs
>* @param credentials
>* @param p
>* @param conf
>* @throws IOException
>*/
>   static void obtainTokensForNamenodesInternal(FileSystem fs, 
>   Credentials credentials, Configuration conf) throws IOException {





[jira] [Commented] (HADOOP-11748) The secrets of auth cookies should not be specified in configuration in clear text

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383941#comment-14383941
 ] 

Hudson commented on HADOOP-11748:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2077 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2077/])
HADOOP-11748. The secrets of auth cookies should not be specified in 
configuration in clear text. Contributed by Li Lu and Haohui Mai. (wheat9: rev 
47782cbf4a66d49064fd3dd6d1d1a19cc42157fc)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProviderCreator.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java


> The secrets of auth cookies should not be specified in configuration in clear 
> text
> --
>
> Key: HADOOP-11748
> URL: https://issues.apache.org/jira/browse/HADOOP-11748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch
>
>
> Based on the discussion on HADOOP-10670, this jira proposes to remove 
> {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
> and security vulnerabilities.
> {quote}
> My understanding is that the use case of inlining the secret is never 
> supported. The property is used to pass the secret internally. The way it 
> works before HADOOP-10868 is the following:
> * Users specify the initializer of the authentication filter in the 
> configuration.
> * AuthenticationFilterInitializer reads the secret file. The server will not 
> start if the secret file does not exist. The initializer will set the 
> property if it reads the file correctly.
> * There is no way to specify the secret in the configuration out-of-the-box – 
> the secret is always overwritten by AuthenticationFilterInitializer.
> {quote}





[jira] [Commented] (HADOOP-11553) Formalize the shell API

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383942#comment-14383942
 ] 

Hudson commented on HADOOP-11553:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2077 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2077/])
HADOOP-11553. Foramlize the shell API (aw) (aw: rev 
b30ca8ce0e0d435327e179f0877bd58fa3896793)
* hadoop-project/src/site/site.xml
* dev-support/shelldocs.py
* hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-common-project/hadoop-common/pom.xml
HADOOP-11553 addendum fix the typo in the changes file (aw: rev 
5695c7a541c1a3092040523446f1ba689fb495e3)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Formalize the shell API
> ---
>
> Key: HADOOP-11553
> URL: https://issues.apache.org/jira/browse/HADOOP-11553
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: documentation, scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
> HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
> HADOOP-11553-05.patch, HADOOP-11553-06.patch
>
>
> After HADOOP-11485, we need to formally document functions and environment 
> variables that 3rd parties can expect to be able to exist/use.





[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383940#comment-14383940
 ] 

Hudson commented on HADOOP-11691:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2077 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2077/])
HADOOP-11691. X86 build of libwinutils is broken. Contributed by Kiran Kumar M 
R. (cnauroth: rev af618f23a70508111f490a24d74fc90161cfc079)
* hadoop-common-project/hadoop-common/src/main/winutils/win8sdk.props
* hadoop-common-project/hadoop-common/CHANGES.txt


> X86 build of libwinutils is broken
> --
>
> Key: HADOOP-11691
> URL: https://issues.apache.org/jira/browse/HADOOP-11691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 2.7.0
>Reporter: Remus Rusanu
>Assignee: Kiran Kumar M R
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
> HADOOP-11691-003.patch
>
>
> Hadoop-9922 recently fixed x86 build. After YARN-2190 compiling x86 results 
> in error:
> {code}
> (Link target) ->
>   
> E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
>  : fatal error LNK1112: module machine type 'x64' conflicts with target 
> machine type 'X86' 
> [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
> {code}





[jira] [Commented] (HADOOP-11760) Fix typo of javadoc in DistCp

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383901#comment-14383901
 ] 

Hudson commented on HADOOP-11760:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7447 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7447/])
HADOOP-11760. Fix typo of javadoc in DistCp. Contributed by Brahma Reddy 
Battula. (ozawa: rev e074952bd6bedf58d993bbea690bad08c9a0e6aa)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java


> Fix typo of javadoc in DistCp
> -
>
> Key: HADOOP-11760
> URL: https://issues.apache.org/jira/browse/HADOOP-11760
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>  Labels: newbie++
> Fix For: 2.8.0
>
> Attachments: HADOOP-11760.patch
>
>
> /**
>* Create a default working folder for the job, under the
>* job staging directory
>*
>* @return Returns the working folder information
>* @throws Exception - EXception if any
>*/
>   private Path createMetaFolderPath() throws Exception {





[jira] [Updated] (HADOOP-11760) Fix typo of javadoc in DistCp

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-11760:

   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~brahmareddy] for your 
contribution and thanks [~airbots] for your report!

> Fix typo of javadoc in DistCp
> -
>
> Key: HADOOP-11760
> URL: https://issues.apache.org/jira/browse/HADOOP-11760
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>  Labels: newbie++
> Fix For: 2.8.0
>
> Attachments: HADOOP-11760.patch
>
>
> /**
>* Create a default working folder for the job, under the
>* job staging directory
>*
>* @return Returns the working folder information
>* @throws Exception - EXception if any
>*/
>   private Path createMetaFolderPath() throws Exception {





[jira] [Commented] (HADOOP-11760) Fix typo of javadoc in DistCp

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383884#comment-14383884
 ] 

Tsuyoshi Ozawa commented on HADOOP-11760:
-

+1, committing this shortly.

> Fix typo of javadoc in DistCp
> -
>
> Key: HADOOP-11760
> URL: https://issues.apache.org/jira/browse/HADOOP-11760
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>  Labels: newbie++
> Attachments: HADOOP-11760.patch
>
>
> /**
>* Create a default working folder for the job, under the
>* job staging directory
>*
>* @return Returns the working folder information
>* @throws Exception - EXception if any
>*/
>   private Path createMetaFolderPath() throws Exception {





[jira] [Commented] (HADOOP-11759) Javadoc of TockenCache has an extra parameter

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383875#comment-14383875
 ] 

Tsuyoshi Ozawa commented on HADOOP-11759:
-

+1, pending Jenkins.

> Javadoc of TockenCache has an extra parameter
> -
>
> Key: HADOOP-11759
> URL: https://issues.apache.org/jira/browse/HADOOP-11759
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.6.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>  Labels: newbie++
> Attachments: HADOOP-11759.patch
>
>
> /**
>* get delegation token for a specific FS
>* @param fs
>* @param credentials
>* @param p
>* @param conf
>* @throws IOException
>*/
>   static void obtainTokensForNamenodesInternal(FileSystem fs, 
>   Credentials credentials, Configuration conf) throws IOException {





[jira] [Updated] (HADOOP-11760) Fix typo of javadoc in DistCp

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-11760:

Summary: Fix typo of javadoc in DistCp  (was: Typo in DistCp.java)

> Fix typo of javadoc in DistCp
> -
>
> Key: HADOOP-11760
> URL: https://issues.apache.org/jira/browse/HADOOP-11760
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>  Labels: newbie++
> Attachments: HADOOP-11760.patch
>
>
> /**
>* Create a default working folder for the job, under the
>* job staging directory
>*
>* @return Returns the working folder information
>* @throws Exception - EXception if any
>*/
>   private Path createMetaFolderPath() throws Exception {




