[jira] [Updated] (HDFS-4210) NameNode Format should not fail for DNS resolution on minority of JournalNode

2015-03-22 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-4210:
--
Environment: (was: CDH4.1.2)

 NameNode Format should not fail for DNS resolution on minority of JournalNode
 -

 Key: HDFS-4210
 URL: https://issues.apache.org/jira/browse/HDFS-4210
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, journal-node, namenode
Affects Versions: 2.6.0
Reporter: Damien Hardy
Assignee: Charles Lamb
Priority: Trivial
 Attachments: HDFS-4210.001.patch


 Setting : 
   qjournal://cdh4master01:8485;cdh4master02:8485;cdh4worker03:8485/hdfscluster
   cdh4master01 and cdh4master02 JournalNodes up and running, 
   cdh4worker03 not yet provisioned (no DNS entry)
 With this setting,
 `hadoop namenode -format` fails with :
   12/11/19 14:42:42 FATAL namenode.NameNode: Exception in namenode join
 java.lang.IllegalArgumentException: Unable to construct journal, 
 qjournal://cdh4master01:8485;cdh4master02:8485;cdh4worker03:8485/hdfscluster
   at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1235)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:226)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:193)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:745)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1099)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
 Caused by: java.lang.reflect.InvocationTargetException
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1233)
   ... 5 more
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.getName(IPCLoggerChannelMetrics.java:107)
   at 
 org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.create(IPCLoggerChannelMetrics.java:91)
   at 
 org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.init(IPCLoggerChannel.java:161)
   at 
 org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$1.createLogger(IPCLoggerChannel.java:141)
   at 
 org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:353)
   at 
 org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:135)
   at 
 org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.init(QuorumJournalManager.java:104)
   at 
 org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.init(QuorumJournalManager.java:93)
   ... 10 more
 I suggest that if a quorum of JournalNodes is up, format should not fail.
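The NPE above comes from dereferencing the resolved address of a host that has no DNS entry. A minimal stdlib-only sketch of that failure mode, and a defensive fallback (this is an illustration of the pattern, not the actual HDFS-4210 patch; the class and method names here are hypothetical):

```java
import java.net.InetSocketAddress;

public class UnresolvedAddrDemo {
    // Mirrors the pattern in IPCLoggerChannelMetrics.getName, which NPEs
    // when getAddress() returns null for a host that does not resolve.
    static String metricsName(InetSocketAddress addr) {
        // Defensive: fall back to the literal host string when DNS fails,
        // instead of dereferencing a null InetAddress.
        String host = addr.isUnresolved()
            ? addr.getHostString()                 // no DNS lookup performed
            : addr.getAddress().getHostAddress();
        return host + ":" + addr.getPort();
    }

    public static void main(String[] args) {
        // createUnresolved never attempts a lookup, so the address stays
        // unresolved -- the same state as a JournalNode with no DNS entry.
        InetSocketAddress bad =
            InetSocketAddress.createUnresolved("no-such-host.invalid", 8485);
        System.out.println(bad.isUnresolved());       // true
        System.out.println(bad.getAddress() == null); // true: would NPE if dereferenced
        System.out.println(metricsName(bad));         // no-such-host.invalid:8485
    }
}
```

With a guard like this, one unresolvable JournalNode would not abort construction of the whole journal set, which is what the report asks for.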



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4638) TransferFsImage should take Configuration as parameter

2015-03-22 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-4638:
--
Target Version/s: 3.0.0  (was: 3.0.0, 2.1.0-beta)

 TransferFsImage should take Configuration as parameter
 --

 Key: HDFS-4638
 URL: https://issues.apache.org/jira/browse/HDFS-4638
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Harsh J
Priority: Minor
 Attachments: HDFS-4638.patch


 TransferFsImage currently creates a new HdfsConfiguration object, rather than 
 taking one passed in. This means that using {{dfsadmin -fetchImage}}, you 
 can't pass a different timeout on the command line, since the Tool's 
 configuration doesn't get plumbed through.
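The bug pattern is easy to see in miniature: a helper that constructs its own fresh configuration silently discards any overrides the caller supplied. A hedged stdlib-only sketch (a plain Map stands in for HdfsConfiguration; the default value shown is hypothetical, though dfs.image.transfer.timeout is a real HDFS key):

```java
import java.util.HashMap;
import java.util.Map;

public class ConfPlumbing {
    static final String TIMEOUT_KEY = "dfs.image.transfer.timeout";

    // Stand-in for "new HdfsConfiguration()": defaults only.
    static Map<String, String> freshConf() {
        Map<String, String> c = new HashMap<>();
        c.put(TIMEOUT_KEY, "60000"); // illustrative default
        return c;
    }

    // Buggy style (what TransferFsImage does): builds its own conf,
    // so the caller's command-line override never arrives.
    static String timeoutIgnoringCaller(Map<String, String> callerConf) {
        return freshConf().get(TIMEOUT_KEY);
    }

    // Fixed style: the Tool's configuration is plumbed through.
    static String timeoutFromCaller(Map<String, String> callerConf) {
        return callerConf.getOrDefault(TIMEOUT_KEY, "60000");
    }

    public static void main(String[] args) {
        Map<String, String> cli = freshConf();
        cli.put(TIMEOUT_KEY, "5000"); // e.g. -D dfs.image.transfer.timeout=5000
        System.out.println(timeoutIgnoringCaller(cli)); // 60000 -- override lost
        System.out.println(timeoutFromCaller(cli));     // 5000  -- override honored
    }
}
```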





[jira] [Created] (HDFS-7970) KMSClientProvider addDelegationToken does not notify callers when Auth failure is due to Proxy User configuration a

2015-03-22 Thread Arun Suresh (JIRA)
Arun Suresh created HDFS-7970:
-

 Summary: KMSClientProvider addDelegationToken does not notify 
callers when Auth failure is due to Proxy User configuration a 
 Key: HDFS-7970
 URL: https://issues.apache.org/jira/browse/HDFS-7970
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Arun Suresh


When a process such as the YARN RM tries to create/renew a KMS DelegationToken on 
behalf of a proxy user such as Llama/Impala, and the Proxy user rules are not 
correctly configured, then the following is found in the RM logs :

{noformat}
Unable to add the application to the delegation token renewer.
java.io.IOException: java.lang.reflect.UndeclaredThrowableException
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:887)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:132)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:129)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.addDelegationTokens(LoadBalancingKMSClientProvider.java:129)
at 
org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:86)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2056)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$2.run(DelegationTokenRenewer.java:620)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$2.run(DelegationTokenRenewer.java:617)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.obtainSystemTokensForUser(DelegationTokenRenewer.java:616)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.requestNewHdfsDelegationToken(DelegationTokenRenewer.java:585)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:455)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$800(DelegationTokenRenewer.java:78)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:809)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:790)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.UndeclaredThrowableException
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1684)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:869)
... 20 more
Caused by: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
Authentication failed, status: 403, message: Forbidden
at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:275)
at 
org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:77)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:127)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:127)
at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:284)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:165)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:371)
at 

[jira] [Commented] (HDFS-7931) Spurious Error message Could not find uri with key [dfs.encryption.key.provider.uri] to create a key appears even when Encryption is disabled

2015-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14374912#comment-14374912
 ] 

Hadoop QA commented on HDFS-7931:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706379/HDFS-7931.1.patch
  against trunk revision 4cd54d9.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing
  
org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
  
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
  
org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
  org.apache.hadoop.hdfs.TestEncryptionZones
  org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
  org.apache.hadoop.hdfs.security.TestDelegationToken

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10023//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10023//console

This message is automatically generated.

 Spurious Error message Could not find uri with key 
 [dfs.encryption.key.provider.uri] to create a key appears even when 
 Encryption is disabled
 

 Key: HDFS-7931
 URL: https://issues.apache.org/jira/browse/HDFS-7931
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: dfsclient
Affects Versions: 2.7.0
Reporter: Arun Suresh
Assignee: Arun Suresh
Priority: Minor
 Attachments: HDFS-7931.1.patch


 The {{addDelegationTokens}} method in {{DistributedFileSystem}} calls 
 {{DFSClient#getKeyProvider()}}, which attempts to get a provider from the 
 {{KeyProviderCache}}; but since the required key, 
 *dfs.encryption.key.provider.uri*, is not present (due to encryption being 
 disabled), it throws an exception.
 {noformat}
 2015-03-11 23:55:47,849 [JobControl] ERROR 
 org.apache.hadoop.hdfs.KeyProviderCache - Could not find uri with key 
 [dfs.encryption.key.provider.uri] to create a keyProvider !!
 {noformat}
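Since a missing dfs.encryption.key.provider.uri is an expected state on clusters without encryption, the lookup can report "no provider" quietly instead of logging at ERROR. A hedged sketch of that behavior (the class and method names here are illustrative, not the actual HDFS-7931 patch):

```java
import java.util.Map;
import java.util.Optional;

public class KeyProviderLookup {
    static final String KEY = "dfs.encryption.key.provider.uri";

    // Returns the configured provider URI, or empty when encryption is not
    // configured -- without treating the absence as an error.
    static Optional<String> providerUri(Map<String, String> conf) {
        String uri = conf.get(KEY);
        if (uri == null) {
            // Expected on clusters with encryption disabled: no ERROR log,
            // callers simply proceed without a KeyProvider.
            return Optional.empty();
        }
        return Optional.of(uri);
    }

    public static void main(String[] args) {
        System.out.println(providerUri(Map.of()).isPresent());          // false
        System.out.println(
            providerUri(Map.of(KEY, "kms://http@kms-host:9600/kms")).get());
    }
}
```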





[jira] [Updated] (HDFS-7931) Spurious Error message Could not find uri with key [dfs.encryption.key.provider.uri] to create a key appears even when Encryption is disabled

2015-03-22 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-7931:
--
Status: Patch Available  (was: Open)

 Spurious Error message Could not find uri with key 
 [dfs.encryption.key.provider.uri] to create a key appears even when 
 Encryption is disabled
 

 Key: HDFS-7931
 URL: https://issues.apache.org/jira/browse/HDFS-7931
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: dfsclient
Affects Versions: 2.7.0
Reporter: Arun Suresh
Assignee: Arun Suresh
Priority: Minor
 Attachments: HDFS-7931.1.patch


 The {{addDelegationTokens}} method in {{DistributedFileSystem}} calls 
 {{DFSClient#getKeyProvider()}}, which attempts to get a provider from the 
 {{KeyProviderCache}}; but since the required key, 
 *dfs.encryption.key.provider.uri*, is not present (due to encryption being 
 disabled), it throws an exception.
 {noformat}
 2015-03-11 23:55:47,849 [JobControl] ERROR 
 org.apache.hadoop.hdfs.KeyProviderCache - Could not find uri with key 
 [dfs.encryption.key.provider.uri] to create a keyProvider !!
 {noformat}





[jira] [Updated] (HDFS-7970) KMSClientProvider addDelegationToken does not notify callers when Auth failure is due to Proxy User configuration a

2015-03-22 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-7970:
--
Attachment: HDFS-7970.1.patch

Attaching patch to fix this.

 KMSClientProvider addDelegationToken does not notify callers when Auth 
 failure is due to Proxy User configuration a 
 

 Key: HDFS-7970
 URL: https://issues.apache.org/jira/browse/HDFS-7970
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Arun Suresh
 Attachments: HDFS-7970.1.patch


 When a process such as the YARN RM tries to create/renew a KMS DelegationToken on 
 behalf of a proxy user such as Llama/Impala, and the Proxy user rules are not 
 correctly configured, then the following is found in the RM logs :
 {noformat}
 Unable to add the application to the delegation token renewer.
 java.io.IOException: java.lang.reflect.UndeclaredThrowableException
 at 
 org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:887)
 at 
 org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:132)
 at 
 org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:129)
 at 
 org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
 at 
 org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.addDelegationTokens(LoadBalancingKMSClientProvider.java:129)
 at 
 org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:86)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2056)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$2.run(DelegationTokenRenewer.java:620)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$2.run(DelegationTokenRenewer.java:617)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.obtainSystemTokensForUser(DelegationTokenRenewer.java:616)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.requestNewHdfsDelegationToken(DelegationTokenRenewer.java:585)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:455)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$800(DelegationTokenRenewer.java:78)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:809)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:790)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.reflect.UndeclaredThrowableException
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1684)
 at 
 org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:869)
 ... 20 more
 Caused by: 
 org.apache.hadoop.security.authentication.client.AuthenticationException: 
 Authentication failed, status: 403, message: Forbidden
 at 
 org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:275)
 at 
 org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:77)
 at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:127)
 at 
 org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
 at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:127)
 at 
 org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
 at 
 

[jira] [Updated] (HDFS-7970) KMSClientProvider addDelegationToken does not notify callers when Auth failure is due to Proxy User configuration a

2015-03-22 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-7970:
--
Status: Patch Available  (was: Open)

 KMSClientProvider addDelegationToken does not notify callers when Auth 
 failure is due to Proxy User configuration a 
 

 Key: HDFS-7970
 URL: https://issues.apache.org/jira/browse/HDFS-7970
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Arun Suresh
 Attachments: HDFS-7970.1.patch


 When a process such as the YARN RM tries to create/renew a KMS DelegationToken on 
 behalf of a proxy user such as Llama/Impala, and the Proxy user rules are not 
 correctly configured, then the following is found in the RM logs :
 {noformat}
 Unable to add the application to the delegation token renewer.
 java.io.IOException: java.lang.reflect.UndeclaredThrowableException
 at 
 org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:887)
 at 
 org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:132)
 at 
 org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:129)
 at 
 org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
 at 
 org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.addDelegationTokens(LoadBalancingKMSClientProvider.java:129)
 at 
 org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:86)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2056)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$2.run(DelegationTokenRenewer.java:620)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$2.run(DelegationTokenRenewer.java:617)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.obtainSystemTokensForUser(DelegationTokenRenewer.java:616)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.requestNewHdfsDelegationToken(DelegationTokenRenewer.java:585)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:455)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$800(DelegationTokenRenewer.java:78)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:809)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:790)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.reflect.UndeclaredThrowableException
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1684)
 at 
 org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:869)
 ... 20 more
 Caused by: 
 org.apache.hadoop.security.authentication.client.AuthenticationException: 
 Authentication failed, status: 403, message: Forbidden
 at 
 org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:275)
 at 
 org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:77)
 at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:127)
 at 
 org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
 at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:127)
 at 
 org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
 at 
 

[jira] [Commented] (HDFS-7970) KMSClientProvider addDelegationToken does not notify callers when Auth failure is due to Proxy User configuration a

2015-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14374865#comment-14374865
 ] 

Hadoop QA commented on HDFS-7970:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706378/HDFS-7970.1.patch
  against trunk revision 4cd54d9.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-kms:

  org.apache.hadoop.crypto.key.kms.server.TestKMS

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10024//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10024//console

This message is automatically generated.

 KMSClientProvider addDelegationToken does not notify callers when Auth 
 failure is due to Proxy User configuration a 
 

 Key: HDFS-7970
 URL: https://issues.apache.org/jira/browse/HDFS-7970
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Arun Suresh
 Attachments: HDFS-7970.1.patch


 When a process such as YARN RM tries to create/renew a KMS DelegationToken on 
 behalf of proxy user such as Llama/Impala and if the Proxy user rules are not 
 correctly configured to allow yarn to proxy the required user, then the 
 following is found in the RM logs :
 {noformat}
 Unable to add the application to the delegation token renewer.
 java.io.IOException: java.lang.reflect.UndeclaredThrowableException
 at 
 org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:887)
 at 
 org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:132)
 at 
 org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:129)
 at 
 org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
 at 
 org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.addDelegationTokens(LoadBalancingKMSClientProvider.java:129)
 at 
 org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:86)
 ..
 ..
 at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:127)
 at 
 org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
 at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:127)
 at 
 org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
 at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:284)
 at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:165)
 at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:371)
 at 
 org.apache.hadoop.crypto.key.kms.KMSClientProvider$2.run(KMSClientProvider.java:874)
 at 
 org.apache.hadoop.crypto.key.kms.KMSClientProvider$2.run(KMSClientProvider.java:869)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
 ... 21 more
 {noformat}
 This gives no information to the user as to why the call has failed, and 
 there is generally no way for an admin to know that the ProxyUser 
 configuration is the issue without going through the code.
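The opaque {{UndeclaredThrowableException}} arises because the doAs block wraps a checked auth failure. One way to surface the actionable cause is to unwrap it before rethrowing; a hedged stdlib-only sketch (the AuthenticationException stand-in, method name, and suggested hadoop.kms.proxyuser.* hint are illustrative, not the actual HDFS-7970 patch):

```java
import java.io.IOException;
import java.lang.reflect.UndeclaredThrowableException;

public class UnwrapAuthFailure {
    // Stand-in for the KMS client's AuthenticationException.
    static class AuthenticationException extends Exception {
        AuthenticationException(String msg) { super(msg); }
    }

    static void addDelegationToken() throws IOException {
        try {
            // Simulates UserGroupInformation.doAs wrapping a checked
            // exception thrown inside the privileged action.
            throw new UndeclaredThrowableException(new AuthenticationException(
                "Authentication failed, status: 403, message: Forbidden"));
        } catch (UndeclaredThrowableException e) {
            Throwable cause = e.getUndeclaredThrowable();
            if (cause instanceof AuthenticationException) {
                // Surface an actionable message instead of a reflection error:
                // a 403 here usually means the proxy-user rules are wrong.
                throw new IOException("KMS delegation token request rejected; "
                    + "check the KMS proxy-user configuration: "
                    + cause.getMessage(), cause);
            }
            throw new IOException(e);
        }
    }

    public static void main(String[] args) {
        try {
            addDelegationToken();
        } catch (IOException e) {
            System.out.println(e.getMessage()); // mentions the 403 and proxy-user hint
        }
    }
}
```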




[jira] [Updated] (HDFS-7931) Spurious Error message Could not find uri with key [dfs.encryption.key.provider.uri] to create a key appears even when Encryption is disabled

2015-03-22 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-7931:
--
Attachment: HDFS-7931.1.patch

Attaching trivial patch..

 Spurious Error message Could not find uri with key 
 [dfs.encryption.key.provider.uri] to create a key appears even when 
 Encryption is disabled
 

 Key: HDFS-7931
 URL: https://issues.apache.org/jira/browse/HDFS-7931
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: dfsclient
Affects Versions: 2.7.0
Reporter: Arun Suresh
Assignee: Arun Suresh
Priority: Minor
 Attachments: HDFS-7931.1.patch


 The {{addDelegationTokens}} method in {{DistributedFileSystem}} calls 
 {{DFSClient#getKeyProvider()}}, which attempts to get a provider from the 
 {{KeyProviderCache}}; but since the required key, 
 *dfs.encryption.key.provider.uri*, is not present (due to encryption being 
 disabled), it throws an exception.
 {noformat}
 2015-03-11 23:55:47,849 [JobControl] ERROR 
 org.apache.hadoop.hdfs.KeyProviderCache - Could not find uri with key 
 [dfs.encryption.key.provider.uri] to create a keyProvider !!
 {noformat}





[jira] [Updated] (HDFS-7970) KMSClientProvider addDelegationToken does not notify callers when Auth failure is due to Proxy User configuration a

2015-03-22 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-7970:
--
Description: 
When a process such as YARN RM tries to create/renew a KMS DelegationToken on 
behalf of proxy user such as Llama/Impala and if the Proxy user rules are not 
correctly configured to allow yarn to proxy the required user, then the 
following is found in the RM logs :

{noformat}
Unable to add the application to the delegation token renewer.
java.io.IOException: java.lang.reflect.UndeclaredThrowableException
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:887)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:132)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:129)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.addDelegationTokens(LoadBalancingKMSClientProvider.java:129)
at 
org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:86)
..
..
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:127)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:127)
at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:284)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:165)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:371)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$2.run(KMSClientProvider.java:874)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$2.run(KMSClientProvider.java:869)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
... 21 more
{noformat}

This gives no information to the user as to why the call has failed, and there 
is generally no way for an admin to know that the ProxyUser configuration is 
the issue without going through the code.

  was:
When a process such as the YARN RM tries to create/renew a KMS DelegationToken on 
behalf of a proxy user such as Llama/Impala, and the Proxy user rules are not 
correctly configured, then the following is found in the RM logs :

{noformat}
Unable to add the application to the delegation token renewer.
java.io.IOException: java.lang.reflect.UndeclaredThrowableException
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:887)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:132)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:129)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.addDelegationTokens(LoadBalancingKMSClientProvider.java:129)
at 
org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:86)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2056)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$2.run(DelegationTokenRenewer.java:620)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$2.run(DelegationTokenRenewer.java:617)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.obtainSystemTokensForUser(DelegationTokenRenewer.java:616)
at 

[jira] [Updated] (HDFS-6826) Plugin interface to enable delegation of HDFS authorization assertions

2015-03-22 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-6826:
--
Attachment: HDFS-6826.16.patch

Minor doc and function-parameter changes to make explicit that the default 
{{AccessControlEnforcer}} is also available to the implementation, so that it 
may be used as a fallback mechanism if needed.  
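The fallback idea described above can be sketched roughly as follows. This is a simplified, hypothetical stand-in for the plugin API under discussion; the interface, class names, and the path-based policy are all invented for illustration, not the real {{AccessControlEnforcer}} signature.

```java
// Hypothetical sketch: a custom enforcer answers for paths it manages and
// falls back to the default enforcer for everything else. Names are
// illustrative, not the actual HDFS plugin API.
interface Enforcer {
    boolean permits(String path, String user);
}

class DefaultEnforcer implements Enforcer {
    public boolean permits(String path, String user) {
        return true; // stand-in for ordinary HDFS permission checking
    }
}

class CustomEnforcer implements Enforcer {
    private final Enforcer fallback; // the default enforcer, made available to us

    CustomEnforcer(Enforcer fallback) { this.fallback = fallback; }

    public boolean permits(String path, String user) {
        if (path.startsWith("/warehouse/")) {
            return "hive".equals(user); // external policy for managed paths
        }
        return fallback.permits(path, user); // delegate everything else
    }
}
```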

 Plugin interface to enable delegation of HDFS authorization assertions
 --

 Key: HDFS-6826
 URL: https://issues.apache.org/jira/browse/HDFS-6826
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HDFS-6826-idea.patch, HDFS-6826-idea2.patch, 
 HDFS-6826-permchecker.patch, HDFS-6826.10.patch, HDFS-6826.11.patch, 
 HDFS-6826.12.patch, HDFS-6826.13.patch, HDFS-6826.14.patch, 
 HDFS-6826.15.patch, HDFS-6826.16.patch, HDFS-6826v3.patch, HDFS-6826v4.patch, 
 HDFS-6826v5.patch, HDFS-6826v6.patch, HDFS-6826v7.1.patch, 
 HDFS-6826v7.2.patch, HDFS-6826v7.3.patch, HDFS-6826v7.4.patch, 
 HDFS-6826v7.5.patch, HDFS-6826v7.6.patch, HDFS-6826v7.patch, 
 HDFS-6826v8.patch, HDFS-6826v9.patch, 
 HDFSPluggableAuthorizationProposal-v2.pdf, 
 HDFSPluggableAuthorizationProposal.pdf


 When HBase data, HiveMetaStore data, or Search data is accessed via services 
 (HBase region servers, HiveServer2, Impala, Solr), the services can enforce 
 permissions on the corresponding entities (databases, tables, views, columns, 
 search collections, documents). It is desirable, when the data is accessed 
 directly by users reading the underlying data files (e.g. from a MapReduce 
 job), that the permissions of the data files map to the permissions of the 
 corresponding data entity (e.g. table, column family or search collection).
 To enable this we need the necessary hooks in place in the NameNode 
 to delegate authorization to an external system that can map HDFS 
 files/directories to data entities and resolve their permissions based on the 
 data entities' permissions.
 I’ll be posting a design proposal in the next few days.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6826) Plugin interface to enable delegation of HDFS authorization assertions

2015-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375113#comment-14375113
 ] 

Hadoop QA commented on HDFS-6826:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706395/HDFS-6826.16.patch
  against trunk revision 4cd54d9.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10025//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10025//console

This message is automatically generated.

 Plugin interface to enable delegation of HDFS authorization assertions
 --

 Key: HDFS-6826
 URL: https://issues.apache.org/jira/browse/HDFS-6826
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HDFS-6826-idea.patch, HDFS-6826-idea2.patch, 
 HDFS-6826-permchecker.patch, HDFS-6826.10.patch, HDFS-6826.11.patch, 
 HDFS-6826.12.patch, HDFS-6826.13.patch, HDFS-6826.14.patch, 
 HDFS-6826.15.patch, HDFS-6826.16.patch, HDFS-6826v3.patch, HDFS-6826v4.patch, 
 HDFS-6826v5.patch, HDFS-6826v6.patch, HDFS-6826v7.1.patch, 
 HDFS-6826v7.2.patch, HDFS-6826v7.3.patch, HDFS-6826v7.4.patch, 
 HDFS-6826v7.5.patch, HDFS-6826v7.6.patch, HDFS-6826v7.patch, 
 HDFS-6826v8.patch, HDFS-6826v9.patch, 
 HDFSPluggableAuthorizationProposal-v2.pdf, 
 HDFSPluggableAuthorizationProposal.pdf


 When HBase data, HiveMetaStore data, or Search data is accessed via services 
 (HBase region servers, HiveServer2, Impala, Solr), the services can enforce 
 permissions on the corresponding entities (databases, tables, views, columns, 
 search collections, documents). It is desirable, when the data is accessed 
 directly by users reading the underlying data files (e.g. from a MapReduce 
 job), that the permissions of the data files map to the permissions of the 
 corresponding data entity (e.g. table, column family or search collection).
 To enable this we need the necessary hooks in place in the NameNode 
 to delegate authorization to an external system that can map HDFS 
 files/directories to data entities and resolve their permissions based on the 
 data entities' permissions.
 I’ll be posting a design proposal in the next few days.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7864) Erasure Coding: Update safemode calculation for striped blocks

2015-03-22 Thread GAO Rui (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14374823#comment-14374823
 ] 

GAO Rui commented on HDFS-7864:
---

[~jingzhao] Thank you very much. The .4 patch looks good to me. BTW, the 
{{condition ? result1 : result2}} code style is awesome! I will also try to 
create a separate JIRA for unit tests of the safe mode calculation.

 Erasure Coding: Update safemode calculation for striped blocks
 --

 Key: HDFS-7864
 URL: https://issues.apache.org/jira/browse/HDFS-7864
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: GAO Rui
 Attachments: HDFS-7864.1.patch, HDFS-7864.2.patch, HDFS-7864.3.patch, 
 HDFS-7864.4.patch


 We need to update the safemode calculation for striped blocks. Specifically, 
 each striped block now consists of multiple data/parity blocks stored in 
 corresponding DataNodes. The current code's calculation is thus inconsistent: 
 each striped block is only counted as 1 expected block, while each of its 
 member blocks may increase the number of received blocks by 1.
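The mismatch is easiest to see with concrete counts. The sketch below uses a 6-data + 3-parity layout purely as an assumed example; it is illustrative arithmetic, not NameNode code.

```java
// Illustrative arithmetic only: one striped block group with 6 data and
// 3 parity internal blocks, counted the way the current code counts it.
class SafemodeCountSketch {
    static final int DATA_BLOCKS = 6;   // assumed example layout
    static final int PARITY_BLOCKS = 3;

    // The block group is counted once toward the expected-block total...
    static int expectedCount() { return 1; }

    // ...but every internal block reported by a DataNode increments "received",
    // so a fully reported group contributes 9 received vs. 1 expected.
    static int receivedCount() { return DATA_BLOCKS + PARITY_BLOCKS; }
}
```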



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5894) Refactor a private internal class DataTransferEncryptor.SaslParticipant

2015-03-22 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-5894:
--
Component/s: security

 Refactor a private internal class DataTransferEncryptor.SaslParticipant
 ---

 Key: HDFS-5894
 URL: https://issues.apache.org/jira/browse/HDFS-5894
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: security
Affects Versions: 2.7.0
Reporter: Hiroshi Ikeda
Assignee: Harsh J
Priority: Trivial
 Attachments: HDFS-5894.patch, HDFS-5894.patch, HDFS-5894.patch


 It is appropriate to use polymorphism for SaslParticipant instead of 
 scattering if-else statements.
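The refactoring idea can be sketched in miniature: instead of branching on which side of the handshake we are on, each side supplies its own behavior behind one entry point. The class and method names below are invented stand-ins, not the actual {{DataTransferEncryptor}} internals.

```java
// Hypothetical sketch: polymorphism replaces repeated
// "if (saslClient != null) { ... } else { ... }" branches.
abstract class SaslParticipantSketch {
    // Single entry point; client and server each provide their own version.
    abstract String evaluate(String token);

    static SaslParticipantSketch forClient() {
        return new SaslParticipantSketch() {
            String evaluate(String token) { return "client:" + token; }
        };
    }

    static SaslParticipantSketch forServer() {
        return new SaslParticipantSketch() {
            String evaluate(String token) { return "server:" + token; }
        };
    }
}
```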



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4210) NameNode Format should not fail for DNS resolution on minority of JournalNode

2015-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375276#comment-14375276
 ] 

Hadoop QA commented on HDFS-4210:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12682752/HDFS-4210.001.patch
  against trunk revision b375d1f.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10027//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10027//console

This message is automatically generated.

 NameNode Format should not fail for DNS resolution on minority of JournalNode
 -

 Key: HDFS-4210
 URL: https://issues.apache.org/jira/browse/HDFS-4210
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, journal-node, namenode
Affects Versions: 2.6.0
Reporter: Damien Hardy
Assignee: Charles Lamb
Priority: Trivial
 Attachments: HDFS-4210.001.patch


 Setting  : 
   qjournal://cdh4master01:8485;cdh4master02:8485;cdh4worker03:8485/hdfscluster
   cdh4master01 and cdh4master02 JournalNodes up and running, 
   cdh4worker03 not yet provisioned (no DNS entry)
 With :
 `hadoop namenode -format` fails with :
   12/11/19 14:42:42 FATAL namenode.NameNode: Exception in namenode join
 java.lang.IllegalArgumentException: Unable to construct journal, 
 qjournal://cdh4master01:8485;cdh4master02:8485;cdh4worker03:8485/hdfscluster
   at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1235)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:226)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:193)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:745)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1099)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
 Caused by: java.lang.reflect.InvocationTargetException
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1233)
   ... 5 more
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.getName(IPCLoggerChannelMetrics.java:107)
   at 
 org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.create(IPCLoggerChannelMetrics.java:91)
   at 
 org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.init(IPCLoggerChannel.java:161)
   at 
 org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$1.createLogger(IPCLoggerChannel.java:141)
   at 
 org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:353)
   at 
 org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:135)
   at 
 org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.init(QuorumJournalManager.java:104)
   at 
 org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.init(QuorumJournalManager.java:93)
   ... 10 more
 I suggest that if a quorum is up, the format should not fail.
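The suggested behavior can be sketched as a simple majority check. The resolver is injected as a predicate so the logic is testable without DNS; none of these names come from the actual NameNode or QuorumJournalManager code.

```java
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch: tolerate unresolvable JournalNode hosts during format
// as long as a strict majority of the configured hosts resolves.
class QuorumCheckSketch {
    static boolean quorumResolvable(List<String> hosts, Predicate<String> resolves) {
        long ok = hosts.stream().filter(resolves).count();
        return ok > hosts.size() / 2; // e.g. 2 of 3 JournalNodes is enough
    }
}
```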



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7827) Erasure Coding: support striped blocks in non-protobuf fsimage

2015-03-22 Thread Hui Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375309#comment-14375309
 ] 

Hui Zheng commented on HDFS-7827:
-

Hi Jing,
Thank you for your help. The 004 patch looks good to me.
I have also identified where the patch does not follow the coding convention.

 Erasure Coding: support striped blocks in non-protobuf fsimage
 --

 Key: HDFS-7827
 URL: https://issues.apache.org/jira/browse/HDFS-7827
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Hui Zheng
 Attachments: HDFS-7827.000.patch, HDFS-7827.002.patch, 
 HDFS-7827.003.patch, HDFS-7827.004.patch


 HDFS-7749 only adds code to persist striped blocks to protobuf-based fsimage. 
 We should also add this support to the non-protobuf fsimage since it is still 
 used for use cases like offline image processing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work stopped] (HDFS-7827) Erasure Coding: support striped blocks in non-protobuf fsimage

2015-03-22 Thread Hui Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-7827 stopped by Hui Zheng.
---
 Erasure Coding: support striped blocks in non-protobuf fsimage
 --

 Key: HDFS-7827
 URL: https://issues.apache.org/jira/browse/HDFS-7827
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Hui Zheng
 Attachments: HDFS-7827.000.patch, HDFS-7827.002.patch, 
 HDFS-7827.003.patch, HDFS-7827.004.patch


 HDFS-7749 only adds code to persist striped blocks to protobuf-based fsimage. 
 We should also add this support to the non-protobuf fsimage since it is still 
 used for use cases like offline image processing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7875) Improve log message when wrong value configured for dfs.datanode.failed.volumes.tolerated

2015-03-22 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375203#comment-14375203
 ] 

Harsh J commented on HDFS-7875:
---

The new error looks better, but do you feel it's possible to also print the 
number of currently configured volumes, to add further context to the error?

Also two nits:

1. configurd -> configured
2. Value configured is either 0 -> Value configured is either less than 0 
(0 is a valid value)
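Putting the suggestions above together, the validation might look roughly like this. The method, class name, and exact message wording are illustrative, not the actual DataNode code.

```java
// Hedged sketch of the suggested validation: report the configured volume
// count and the valid range instead of a terse, partly incorrect message.
class VolumeConfigCheckSketch {
    // Returns an error message, or null if the configuration is acceptable.
    static String check(int volsConfigured, int volFailuresTolerated) {
        if (volFailuresTolerated < 0 || volFailuresTolerated >= volsConfigured) {
            return String.format(
                "Invalid value %d for dfs.datanode.failed.volumes.tolerated: "
                + "must be in the range [0, %d) with %d volumes configured",
                volFailuresTolerated, volsConfigured, volsConfigured);
        }
        return null; // 0 is a valid value; only < 0 or >= #volumes is rejected
    }
}
```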

 Improve log message when wrong value configured for 
 dfs.datanode.failed.volumes.tolerated 
 --

 Key: HDFS-7875
 URL: https://issues.apache.org/jira/browse/HDFS-7875
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: nijel
Assignee: nijel
Priority: Trivial
 Attachments: 0001-HDFS-7875.patch, 0002-HDFS-7875.patch, 
 0003-HDFS-7875.patch


 By mistake I configured dfs.datanode.failed.volumes.tolerated equal to the 
 number of volumes configured, and got stuck for some time in debugging since 
 the log message didn't give many details.
 The log message could be more detailed. I have attached a patch with the 
 message change; please have a look.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5894) Refactor a private internal class DataTransferEncryptor.SaslParticipant

2015-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375202#comment-14375202
 ] 

Hadoop QA commented on HDFS-5894:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12677596/HDFS-5894.patch
  against trunk revision b375d1f.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10026//console

This message is automatically generated.

 Refactor a private internal class DataTransferEncryptor.SaslParticipant
 ---

 Key: HDFS-5894
 URL: https://issues.apache.org/jira/browse/HDFS-5894
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: security
Affects Versions: 2.7.0
Reporter: Hiroshi Ikeda
Assignee: Harsh J
Priority: Trivial
 Attachments: HDFS-5894.patch, HDFS-5894.patch, HDFS-5894.patch


 It is appropriate to use polymorphism for SaslParticipant instead of 
 scattering if-else statements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4638) TransferFsImage should take Configuration as parameter

2015-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375204#comment-14375204
 ] 

Hadoop QA commented on HDFS-4638:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12575730/HDFS-4638.patch
  against trunk revision b375d1f.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10028//console

This message is automatically generated.

 TransferFsImage should take Configuration as parameter
 --

 Key: HDFS-4638
 URL: https://issues.apache.org/jira/browse/HDFS-4638
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Harsh J
Priority: Minor
 Attachments: HDFS-4638.patch


 TransferFsImage currently creates a new HdfsConfiguration object, rather than 
 taking one passed in. This means that using {{dfsadmin -fetchImage}}, you 
 can't pass a different timeout on the command line, since the Tool's 
 configuration doesn't get plumbed through.
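The plumbing problem can be shown in a self-contained way. Here "Conf" is a toy stand-in for Hadoop's Configuration, and both method names are invented; the point is only that building a fresh configuration internally discards whatever the caller set on the command line.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for Configuration, preloaded with one default.
class Conf {
    private final Map<String, String> props = new HashMap<>();
    Conf() { props.put("image.transfer.timeout", "60000"); } // built-in default
    void set(String key, String value) { props.put(key, value); }
    String get(String key) { return props.get(key); }
}

class TransferSketch {
    // Current behavior: a fresh Conf is constructed internally, so a timeout
    // the caller set (e.g. via -D on the dfsadmin command line) is ignored.
    static String timeoutIgnoringCaller(Conf callerConf) {
        Conf conf = new Conf();
        return conf.get("image.transfer.timeout");
    }

    // Proposed behavior: accept the caller's Conf so overrides take effect.
    static String timeoutFromCaller(Conf callerConf) {
        return callerConf.get("image.transfer.timeout");
    }
}
```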



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6994) libhdfs3 - A native C/C++ HDFS client

2015-03-22 Thread Demai Ni (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375249#comment-14375249
 ] 

Demai Ni commented on HDFS-6994:


[~wangzw], that is great. Do we have a sub-JIRA for this feature? We could work 
together to speed up the process, and I can get someone to do the '3rd party' 
testing if it helps. Thanks... demai

 libhdfs3 - A native C/C++ HDFS client
 -

 Key: HDFS-6994
 URL: https://issues.apache.org/jira/browse/HDFS-6994
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client
Reporter: Zhanwei Wang
Assignee: Zhanwei Wang
 Attachments: HDFS-6994-rpc-8.patch, HDFS-6994.patch


 Hi all,
 I just got permission to open-source libhdfs3, which is a native C/C++ 
 HDFS client based on the Hadoop RPC protocol and the HDFS Data Transfer 
 Protocol.
 libhdfs3 provides the libhdfs-style C interface and a C++ interface. It 
 supports both Hadoop RPC versions 8 and 9, NameNode HA, and Kerberos 
 authentication.
 libhdfs3 is currently used by Pivotal's HAWQ.
 I'd like to integrate libhdfs3 into the HDFS source code to benefit others.
 You can find the libhdfs3 code on GitHub:
 https://github.com/PivotalRD/libhdfs3
 http://pivotalrd.github.io/libhdfs3/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Commented] (HDFS-7829) Code clean up for LocatedBlock

2015-03-22 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375247#comment-14375247
 ] 

Takanobu Asanuma commented on HDFS-7829:


Thank you for your all help, Jing!

 Code clean up for LocatedBlock
 --

 Key: HDFS-7829
 URL: https://issues.apache.org/jira/browse/HDFS-7829
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jing Zhao
Assignee: Takanobu Asanuma
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-7829.1.patch, HDFS-7829.2.patch, HDFS-7829.3.patch, 
 HDFS-7829.4.patch


 We can do some code cleanup for {{LocatedBlock}}, including:
 # Using a simple Builder pattern to avoid multiple constructors
 # Setting data fields like {{corrupt}} and {{offset}} to final
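The two items above can be sketched together in a few lines. The class below is a deliberately minimal, hypothetical stand-in (not the real {{LocatedBlock}}): a Builder replaces the multiple constructors, and fields such as {{offset}} and {{corrupt}} become final, assigned exactly once.

```java
// Minimal sketch of the proposed cleanup; names are illustrative only.
class LocatedBlockSketch {
    private final long offset;     // final: set once by the builder
    private final boolean corrupt; // final: set once by the builder

    private LocatedBlockSketch(Builder b) {
        this.offset = b.offset;
        this.corrupt = b.corrupt;
    }

    long getOffset() { return offset; }
    boolean isCorrupt() { return corrupt; }

    static class Builder {
        private long offset;
        private boolean corrupt;

        Builder setOffset(long offset) { this.offset = offset; return this; }
        Builder setCorrupt(boolean corrupt) { this.corrupt = corrupt; return this; }
        LocatedBlockSketch build() { return new LocatedBlockSketch(this); }
    }
}
```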



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7966) New Data Transfer Protocol via HTTP/2

2015-03-22 Thread SIVA NAGARAJU (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375371#comment-14375371
 ] 

SIVA NAGARAJU commented on HDFS-7966:
-

I am also interested in being part of it.
 I know HDFS and MapReduce programming.

 New Data Transfer Protocol via HTTP/2
 -

 Key: HDFS-7966
 URL: https://issues.apache.org/jira/browse/HDFS-7966
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Haohui Mai
Assignee: Qianqian Shi
  Labels: gsoc, gsoc2015, mentor

 The current Data Transfer Protocol (DTP) implements a rich set of features 
 that span across multiple layers, including:
 * Connection pooling and authentication (session layer)
 * Encryption (presentation layer)
 * Data writing pipeline (application layer)
 All these features are HDFS-specific and defined by the implementation. As a 
 result, it requires a non-trivial amount of work to implement HDFS clients and 
 servers.
 This JIRA explores delegating the responsibilities of the session and 
 presentation layers to the HTTP/2 protocol. In particular, HTTP/2 handles 
 connection multiplexing, QoS, authentication and encryption, reducing the 
 scope of DTP to the application layer only. By leveraging an existing HTTP/2 
 library, it should simplify the implementation of both HDFS clients and 
 servers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HDFS-7971) mockito's version in hadoop-nfs’ pom.xml shouldn't be specified

2015-03-22 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa moved HADOOP-11735 to HDFS-7971:
---

Component/s: (was: nfs)
 nfs
Key: HDFS-7971  (was: HADOOP-11735)
Project: Hadoop HDFS  (was: Hadoop Common)

 mockito's version in hadoop-nfs’ pom.xml shouldn't be specified
 ---

 Key: HDFS-7971
 URL: https://issues.apache.org/jira/browse/HDFS-7971
 Project: Hadoop HDFS
  Issue Type: Task
  Components: nfs
Reporter: Kengo Seki
Assignee: Kengo Seki
Priority: Minor
 Attachments: HADOOP-11735.001.patch


 The version should be removed, because hadoop-nfs will otherwise be left 
 behind when the parent POM upgrades Mockito.
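The proposed change amounts to dropping the {{version}} element from the dependency so the parent's dependencyManagement controls it. A sketch of the resulting pom.xml fragment (artifact coordinates shown as commonly used at the time; treat them as illustrative):

```xml
<!-- Sketch: no <version> element, so the version is inherited from the
     parent POM's dependencyManagement section. -->
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-all</artifactId>
  <scope>test</scope>
</dependency>
```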



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7971) mockito's version in hadoop-nfs’ pom.xml shouldn't be specified

2015-03-22 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-7971:
-
Issue Type: Improvement  (was: Task)

 mockito's version in hadoop-nfs’ pom.xml shouldn't be specified
 ---

 Key: HDFS-7971
 URL: https://issues.apache.org/jira/browse/HDFS-7971
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Kengo Seki
Assignee: Kengo Seki
Priority: Minor
 Attachments: HADOOP-11735.001.patch


 The version should be removed, because hadoop-nfs will otherwise be left 
 behind when the parent POM upgrades Mockito.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)