[jira] [Created] (HADOOP-9574) Adding new methods in AbstractDelegationTokenSecretManager for restoring RMDelegationTokens on RMRestart

2013-05-17 Thread Jian He (JIRA)
Jian He created HADOOP-9574:
---

 Summary: Adding new methods in 
AbstractDelegationTokenSecretManager for restoring RMDelegationTokens on 
RMRestart
 Key: HADOOP-9574
 URL: https://issues.apache.org/jira/browse/HADOOP-9574
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jian He


We're considering adding the following methods to 
AbstractDelegationTokenSecretManager for restoring RMDelegationTokens. These 
methods could possibly also be reused by the HDFS DelegationTokenSecretManager; 
see YARN-638.

  protected void storeNewMasterKey(DelegationKey key) throws IOException {
  }

  protected void removeStoredMasterKey(DelegationKey key) {
  }

  protected void storeNewToken(TokenIdent ident, long renewDate) {
  }

  protected void removeStoredToken(TokenIdent ident) throws IOException {
  }

  protected void updateStoredToken(TokenIdent ident, long renewDate) {
  }
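
As a rough sketch of how these hooks could be used (assuming the proposed 
methods are added), a subclass might persist keys and tokens to a store and 
leave the base class logic untouched. The store below is a hypothetical 
in-memory stand-in for illustration only, not the actual RM or HDFS 
implementation.
{code}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;
import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager;
import org.apache.hadoop.security.token.delegation.DelegationKey;

public class PersistingDTSecretManager
    extends AbstractDelegationTokenSecretManager<AbstractDelegationTokenIdentifier> {

  // Hypothetical persistent store; a real implementation might write to ZK or HDFS.
  public static class SimpleStateStore {
    private final Map<Integer, DelegationKey> keys =
        new HashMap<Integer, DelegationKey>();
    public synchronized void saveKey(DelegationKey key) { keys.put(key.getKeyId(), key); }
    public synchronized void removeKey(DelegationKey key) { keys.remove(key.getKeyId()); }
  }

  private final SimpleStateStore store;

  public PersistingDTSecretManager(long keyUpdateInterval, long tokenMaxLifetime,
      long tokenRenewInterval, long tokenRemoverScanInterval, SimpleStateStore store) {
    super(keyUpdateInterval, tokenMaxLifetime, tokenRenewInterval,
        tokenRemoverScanInterval);
    this.store = store;
  }

  @Override
  public AbstractDelegationTokenIdentifier createIdentifier() {
    // Placeholder identifier kind; real managers return their own identifier type.
    return new AbstractDelegationTokenIdentifier() {
      @Override
      public Text getKind() { return new Text("SKETCH_TOKEN"); }
    };
  }

  @Override
  protected void storeNewMasterKey(DelegationKey key) throws IOException {
    store.saveKey(key);   // persist each rolled master key so it survives restart
  }

  @Override
  protected void removeStoredMasterKey(DelegationKey key) {
    store.removeKey(key); // drop expired master keys from the persistent store
  }

  // storeNewToken / removeStoredToken / updateStoredToken would follow the same pattern.
}
{code}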


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9576) Make NetUtils.wrapException throw EOFException instead of wrapping it as IOException

2013-05-17 Thread Jian He (JIRA)
Jian He created HADOOP-9576:
---

 Summary: Make NetUtils.wrapException throw EOFException instead of 
wrapping it as IOException
 Key: HADOOP-9576
 URL: https://issues.apache.org/jira/browse/HADOOP-9576
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jian He


In the case of an EOFException, NetUtils currently wraps it as an IOException. 
We may want to throw the EOFException as-is, since an EOFException can happen 
when the connection is lost mid-stream, and the client may want to handle such 
an exception explicitly.
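
A minimal sketch of the proposed behavior (not the actual NetUtils code): pass 
an EOFException through unwrapped so the caller can detect a lost connection, 
and only wrap other IOExceptions with host/port context.
{code}
import java.io.EOFException;
import java.io.IOException;

public class WrapExceptionSketch {
  // Roughly what the change would look like; the message format is illustrative only.
  public static IOException wrap(String destHost, int destPort, IOException exception) {
    if (exception instanceof EOFException) {
      // Connection was lost mid-stream; return it as-is so clients can handle it explicitly.
      return exception;
    }
    return new IOException("Call to " + destHost + ":" + destPort
        + " failed on exception: " + exception, exception);
  }
}
{code}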



[jira] [Created] (HADOOP-9589) Extra master key is created when AbstractDelegationTokenSecretManager is started

2013-05-21 Thread Jian He (JIRA)
Jian He created HADOOP-9589:
---

 Summary: Extra master key is created when 
AbstractDelegationTokenSecretManager is started
 Key: HADOOP-9589
 URL: https://issues.apache.org/jira/browse/HADOOP-9589
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He


When AbstractDelegationTokenSecretManager starts, 
AbstractDelegationTokenSecretManager.startThreads().updateCurrentKey() creates 
the first master key. Immediately after that, the ExpiredTokenRemover thread is 
started, and it creates a second master key by calling rollMasterKey on its 
first loop.



[jira] [Reopened] (HADOOP-9574) Add new methods in AbstractDelegationTokenSecretManager for restoring RMDelegationTokens on RMRestart

2013-05-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reopened HADOOP-9574:
-


TestDelegationToken.testRollMasterKey is possibly failing because of a timing 
issue.

 Add new methods in AbstractDelegationTokenSecretManager for restoring 
 RMDelegationTokens on RMRestart
 -

 Key: HADOOP-9574
 URL: https://issues.apache.org/jira/browse/HADOOP-9574
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He
 Fix For: 2.0.5-beta

 Attachments: HADOOP-9574.1.patch, HADOOP-9574.2.patch


 We're considering adding the following methods to 
 AbstractDelegationTokenSecretManager for restoring RMDelegationTokens. These 
 methods could possibly also be reused by the HDFS DelegationTokenSecretManager; 
 see YARN-638.
   protected void storeNewMasterKey(DelegationKey key) throws IOException {
   }
   protected void removeStoredMasterKey(DelegationKey key) {
   }
   protected void storeNewToken(TokenIdent ident, long renewDate) {
   }
   protected void removeStoredToken(TokenIdent ident) throws IOException {
   }
   protected void updateStoredToken(TokenIdent ident, long renewDate) {
   }
 Also move addPersistedDelegationToken from hdfs.DelegationTokenSecretManager 
 to AbstractDelegationTokenSecretManager.



[jira] [Created] (HADOOP-9907) Webapp http://hostname:port/metrics link is not working

2013-08-27 Thread Jian He (JIRA)
Jian He created HADOOP-9907:
---

 Summary: Webapp http://hostname:port/metrics  link is not working 
 Key: HADOOP-9907
 URL: https://issues.apache.org/jira/browse/HADOOP-9907
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jian He


This link is not working; it just shows a blank page.



[jira] [Created] (HADOOP-10519) In HDFS HA mode, Distcp/SLive with webhdfs on secure cluster fails with Client cannot authenticate via:[TOKEN, KERBEROS] error

2014-04-17 Thread Jian He (JIRA)
Jian He created HADOOP-10519:


 Summary: In HDFS HA mode, Distcp/SLive with webhdfs on secure 
cluster fails with Client cannot authenticate via:[TOKEN, KERBEROS] error
 Key: HADOOP-10519
 URL: https://issues.apache.org/jira/browse/HADOOP-10519
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jian He


Opening on [~arpitgupta]'s behalf.

We observed that, in HDFS HA mode, running Distcp/SLive with webhdfs will fail 
on YARN. In non-HA mode, it passes.

The reason is that in HA mode, only the webhdfs delegation token is generated 
for the job, but YARN also requires the regular hdfs token for localization, 
log aggregation, etc.
In non-HA mode, both tokens are generated for the job.
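
Until this is addressed, one possible client-side workaround (a sketch only, 
with placeholder URIs and renewer, not the actual fix) is to explicitly fetch 
delegation tokens for both the webhdfs and the regular hdfs filesystems and 
ship them with the job:
{code}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;

public class FetchBothTokens {
  public static Credentials fetchTokens(Configuration conf, String renewer)
      throws IOException {
    Credentials creds = new Credentials();
    // Token the job itself uses for webhdfs access.
    FileSystem webhdfs = FileSystem.get(URI.create("webhdfs://mycluster"), conf);
    webhdfs.addDelegationTokens(renewer, creds);
    // Token YARN needs for localization and log aggregation on regular hdfs.
    FileSystem hdfs = FileSystem.get(URI.create("hdfs://mycluster"), conf);
    hdfs.addDelegationTokens(renewer, creds);
    return creds;
  }
}
{code}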



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-11109) Site build is broken

2014-09-18 Thread Jian He (JIRA)
Jian He created HADOOP-11109:


 Summary: Site build is broken 
 Key: HADOOP-11109
 URL: https://issues.apache.org/jira/browse/HADOOP-11109
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He








[jira] [Reopened] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store

2014-09-22 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reopened HADOOP-11017:
--

Took one more look. It looks like AbstractDelegationTokenSecretManager#addKey 
was changed to call {{storeDelegationKey(key);}} again. Not sure if this is 
intentional.
Re-opening this. Please open a YARN jira if YARN needs an update.

 KMS delegation token secret manager should be able to use zookeeper as store
 

 Key: HADOOP-11017
 URL: https://issues.apache.org/jira/browse/HADOOP-11017
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Fix For: 2.6.0

 Attachments: HADOOP-11017.1.patch, HADOOP-11017.2.patch, 
 HADOOP-11017.3.patch, HADOOP-11017.4.patch, HADOOP-11017.5.patch, 
 HADOOP-11017.6.patch, HADOOP-11017.7.patch, HADOOP-11017.8.patch, 
 HADOOP-11017.9.patch, HADOOP-11017.WIP.patch


 This will allow supporting multiple KMS instances behind a load balancer.





[jira] [Created] (HADOOP-14116) FailoverOnNetworkExceptionRetry does not wait when failover on certain exception

2017-02-23 Thread Jian He (JIRA)
Jian He created HADOOP-14116:


 Summary: FailoverOnNetworkExceptionRetry does not wait when 
failover on certain exception 
 Key: HADOOP-14116
 URL: https://issues.apache.org/jira/browse/HADOOP-14116
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He


In the code below, when doing failover, it does not wait as the other 
conditions do, which leads to a busy loop.
{code}
   } else if (e instanceof SocketException
  || (e instanceof IOException && !(e instanceof RemoteException))) {
if (isIdempotentOrAtMostOnce) {
  return RetryAction.FAILOVER_AND_RETRY;
{code}
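
One possible shape of a fix (a sketch only, not the committed patch): return a 
RetryAction that carries a backoff delay instead of the constant 
RetryAction.FAILOVER_AND_RETRY, so the retry loop sleeps between failover 
attempts. The base sleep and doubling formula below are illustrative 
assumptions.
{code}
import java.io.IOException;
import java.net.SocketException;

import org.apache.hadoop.io.retry.RetryPolicy.RetryAction;
import org.apache.hadoop.io.retry.RetryPolicy.RetryAction.RetryDecision;
import org.apache.hadoop.ipc.RemoteException;

public class FailoverBackoffSketch {
  private static final long BASE_SLEEP_MILLIS = 1000;

  public static RetryAction actionFor(Exception e, boolean isIdempotentOrAtMostOnce,
      int failovers) {
    if (e instanceof SocketException
        || (e instanceof IOException && !(e instanceof RemoteException))) {
      if (isIdempotentOrAtMostOnce) {
        // Exponential backoff, capped, so repeated failovers do not busy-loop.
        long delayMillis = BASE_SLEEP_MILLIS * (1L << Math.min(failovers, 6));
        return new RetryAction(RetryDecision.FAILOVER_AND_RETRY, delayMillis);
      }
    }
    return RetryAction.FAIL;
  }
}
{code}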




-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14080) UserGroupInformation#loginUserFromKeytab does not load hadoop tokens ?

2017-02-13 Thread Jian He (JIRA)
Jian He created HADOOP-14080:


 Summary: UserGroupInformation#loginUserFromKeytab does not load 
hadoop tokens ? 
 Key: HADOOP-14080
 URL: https://issues.apache.org/jira/browse/HADOOP-14080
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jian He
Priority: Critical


Found that UserGroupInformation#loginUserFromKeytab will not try to load 
hadoop tokens from HADOOP_TOKEN_FILE_LOCATION as the loginUserFromSubject 
method does. I know that typically, if you have the keytab, you probably won't 
need the token, but is this expected behavior?

The problem with this is that if a long-running app on YARN has its own 
keytabs and logs in via UserGroupInformation#loginUserFromKeytab, it will not 
load the hadoop tokens passed by YARN.
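
A sketch of the workaround an application could apply in the meantime, 
assuming it runs in a YARN container where HADOOP_TOKEN_FILE_LOCATION is set 
(the principal and keytab path below are placeholders): log in from the 
keytab, then load the token file explicitly and add it to the login user.
{code}
import java.io.File;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLoginWithTokens {
  public static void login(Configuration conf) throws IOException {
    UserGroupInformation.loginUserFromKeytab("app/host@EXAMPLE.COM",
        "/etc/security/keytabs/app.keytab");
    String tokenFile = System.getenv(UserGroupInformation.HADOOP_TOKEN_FILE_LOCATION);
    if (tokenFile != null) {
      // Explicitly pick up the tokens YARN passed to this container.
      Credentials creds = Credentials.readTokenStorageFile(new File(tokenFile), conf);
      UserGroupInformation.getLoginUser().addCredentials(creds);
    }
  }
}
{code}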










[jira] [Reopened] (HADOOP-14062) ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when RPC privacy is enabled

2017-03-08 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reopened HADOOP-14062:
--

> ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when 
> RPC privacy is enabled
> --
>
> Key: HADOOP-14062
> URL: https://issues.apache.org/jira/browse/HADOOP-14062
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Steven Rand
>Assignee: Steven Rand
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14062.001.patch, HADOOP-14062.002.patch, 
> HADOOP-14062.003.patch, HADOOP-14062-branch-2.8.0.004.patch, 
> HADOOP-14062-branch-2.8.0.005.patch, HADOOP-14062-branch-2.8.0.005.patch, 
> HADOOP-14062-branch-2.8.0.dummy.patch, yarn-rm-log.txt
>
>
> When privacy is enabled for RPC (hadoop.rpc.protection = privacy), 
> {{ApplicationMasterProtocolPBClientImpl.allocate}} sometimes (but not always) 
> fails with an EOFException. I've reproduced this with Spark 2.0.2 built 
> against latest branch-2.8 and with a simple distcp job on latest branch-2.8.
> Steps to reproduce using distcp:
> 1. Set hadoop.rpc.protection equal to privacy
> 2. Write data to HDFS. I did this with Spark as follows: 
> {code}
> sc.parallelize(1 to (5*1024*1024)).map(k => Seq(k, 
> org.apache.commons.lang.RandomStringUtils.random(1024, 
> "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWxyZ0123456789")).mkString("|")).toDF().repartition(100).write.parquet("hdfs:///tmp/testData")
> {code}
> 3. Attempt to distcp that data to another location in HDFS. For example:
> {code}
> hadoop distcp -Dmapreduce.framework.name=yarn hdfs:///tmp/testData 
> hdfs:///tmp/testDataCopy
> {code}
> I observed this error in the ApplicationMaster's syslog:
> {code}
> 2016-12-19 19:13:50,097 INFO [eventHandlingThread] 
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer 
> setup for JobId: job_1482189777425_0004, File: 
> hdfs://:8020/tmp/hadoop-yarn/staging//.staging/job_1482189777425_0004/job_1482189777425_0004_1.jhist
> 2016-12-19 19:13:51,004 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before 
> Scheduling: PendingReds:0 ScheduledMaps:4 ScheduledReds:0 AssignedMaps:0 
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 
> HostLocal:0 RackLocal:0
> 2016-12-19 19:13:51,031 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() 
> for application_1482189777425_0004: ask=1 release= 0 newContainers=0 
> finishedContainers=0 resourcelimit= knownNMs=3
> 2016-12-19 19:13:52,043 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.io.retry.RetryInvocationHandler: Exception while invoking 
> ApplicationMasterProtocolPBClientImpl.allocate over null. Retrying after 
> sleeping for 3ms.
> java.io.EOFException: End of File Exception between local host is: 
> "/"; destination host is: "":8030; 
> : java.io.EOFException; For more details see:  
> http://wiki.apache.org/hadoop/EOFException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1486)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1428)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1338)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy80.allocate(Unknown Source)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)