[ https://issues.apache.org/jira/browse/HDFS-5637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13845014#comment-13845014 ]

Junping Du commented on HDFS-5637:
----------------------------------

The purpose of the refactoring is to keep exception handling consistent 
across the different cases. If we think abstracting out a method for just two 
call sites is overkill, then we should at least add a comment, so that later 
developers and reviewers know to pay attention to this spot when adding 
additional exceptions. Thoughts?
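
For illustration, here is a rough sketch of the kind of shared helper I have 
in mind; the method name and the exact exception set below are hypothetical, 
not taken from the attached patch:

import java.io.IOException;

import org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException;
import org.apache.hadoop.ipc.RemoteException;
import org.apache.hadoop.security.token.SecretManager.InvalidToken;

/**
 * Decide whether a read failure should trigger refetching the block
 * access token. Keeping the check in one place stops the local and
 * remote read paths from drifting apart.
 */
private static boolean shouldRefetchToken(IOException ex) {
  // The local read path sees InvalidToken wrapped in a RemoteException
  // (see the stack trace below), so unwrap it before checking.
  if (ex instanceof RemoteException) {
    ex = ((RemoteException) ex).unwrapRemoteException(
        InvalidToken.class, InvalidBlockTokenException.class);
  }
  // NOTE: add any new token-related retriable exception here, and only
  // here, so all callers stay consistent.
  return ex instanceof InvalidToken
      || ex instanceof InvalidBlockTokenException;
}

With something like this, both read paths (e.g. blockSeekTo() in the trace 
below) would call the same predicate instead of each keeping its own 
instanceof list.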

> try to refetch token while local read InvalidToken occurred
> -----------------------------------------------------------
>
>                 Key: HDFS-5637
>                 URL: https://issues.apache.org/jira/browse/HDFS-5637
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs-client, security
>    Affects Versions: 2.0.5-alpha, 2.2.0
>            Reporter: Liang Xie
>            Assignee: Liang Xie
>         Attachments: HDFS-5637-v2.txt, HDFS-5637.txt
>
>
> We observed several warning logs like the one below on region server nodes:
> 2013-12-05,13:22:26,042 WARN org.apache.hadoop.hdfs.DFSClient: Failed to 
> connect to /10.2.201.110:11402 for block, add to deadNodes and continue. 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1386060141977, keyId=-333530248, 
> userId=hbase_srv, blockPoolId=BP-1310313570-10.101.10.66-1373527541386, 
> blockId=-190217754078101701, access modes=[READ]) is expired.
>         at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
>         at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:88)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockToken(DataNode.java:1082)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1033)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with 
> block_token_identifier (expiryDate=1386060141977, keyId=-333530248, 
> userId=hbase_srv, blockPoolId=BP-1310313570-10.101.10.66-1373527541386, 
> blockId=-190217754078101701, access modes=[READ]) is expired.
>         at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
>         at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:88)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockToken(DataNode.java:1082)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1033)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
>         at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>         at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>         at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>         at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>         at 
> org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:771)
>         at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888)
>         at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455)
>         at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
>         at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689)
>         at java.io.DataInputStream.read(DataInputStream.java:132)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:614)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1384)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1829)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1673)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:341)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:485)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:535)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:246)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:167)
>         at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
>         at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:352)
>         at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:292)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:586)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:446)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:349)
>         at 
> org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1660)
>         at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:1080)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1244)
>         at 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:258)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  Block token with block_token_identifier (expiryDate=1386060141977, 
> keyId=-333530248, userId=hbase_srv, 
> blockPoolId=BP-1310313570-10.101.10.66-1373527541386, 
> blockId=-190217754078101701, access modes=[READ]) is expired.
>         at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
>         at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:88)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkBlockToken(DataNode.java:1082)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1033)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:5104)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1167)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>         at $Proxy21.getBlockLocalPathInfo(Unknown Source)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:215)
>         at 
> org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:254)
>         at 
> org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:167)
>         at 
> org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:767)
>         ... 28 more
> 2013-12-05,13:22:26,047 INFO org.apache.hadoop.hdfs.DFSClient: Will fetch a 
> new access token and retry, access token was invalid when connecting to 
> /10.2.201.28:11402 : 
> org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: Got 
> access token error for OP_READ_BLOCK, self=/10.2.201.110:39179, 
> remote=/10.2.201.28:11402, for file 
> /hbase/lgsrv-micloud/phone_number_digest/58c0ed89dd38537a0077807523458404/C/6448605d86fb4aae8f9c773408a1ea6a,
>  for pool BP-1310313570-10.101.10.66-1373527541386 block 
> -190217754078101701_5305685
> 2013-12-05,13:22:26,049 INFO org.apache.hadoop.hdfs.DFSClient: Successfully 
> connected to /10.2.201.28:11402 for block -190217754078101701
> [work@lg-hadoop-srv-st10 regionserver]$ ifconfig
> em1       Link encap:Ethernet  HWaddr B8:CA:3A:F5:BB:61 
>           inet addr:10.2.201.110  Bcast:10.2.201.255  Mask:255.255.255.0



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)
