[jira] [Updated] (HDFS-11526) Fix confusing block recovery message

2017-03-14 Yiqun Lin (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-11526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yiqun Lin updated HDFS-11526:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix confusing block recovery message
> 
>
> Key: HDFS-11526
> URL: https://issues.apache.org/jira/browse/HDFS-11526
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: supportability
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-11526.001.patch, HDFS-11526-branch-2.patch
>
>
> The following error message is wrong.
> {code:title=BlockRecoveryWorker#recover}
> } catch (IOException e) {
>   ++errorCount;
>   InterDatanodeProtocol.LOG.warn(
>   "Failed to obtain replica info for block (=" + block
>   + ") from datanode (=" + id + ")", e);
> }
> {code}
> The operation performed in the try block is an attempt to recover the block, 
> not to obtain replica info from the datanode.
> This is the error message printed by the above code:
> {noformat}
> 2017-03-01 16:15:35,884 WARN 
> org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed to 
> obtain replica info for block 
> (=BP-1949147302-10.0.0.140-1423905184563:blk_1074852850_1112215) from 
> datanode (=DatanodeInfoWithStorage[10.0.0.53:50010,null,null])
> java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: 
> replica.getGenerationStamp() >= recoveryId = 1112223, 
> block=blk_1074852850_1112215, replica=FinalizedReplica, 
> blk_1074852850_1112231, FINALIZED
>   getNumBytes() = 12823160
>   getBytesOnDisk()  = 12823160
>   getVisibleLength()= 12823160
>   getVolume()   = /dfs/dn/current
>   getBlockFile()= 
> /dfs/dn/current/BP-1949147305-10.0.0.140-1423905184563/current/finalized/subdir16/subdir243/blk_1074852850
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2318)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2277)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2548)
> at 
> org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55)
> at 
> org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:2983)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:339)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:118)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:374)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
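
For context, the nested IOException in the log above comes from a generation-stamp sanity check in FsDatasetImpl#initReplicaRecovery (the frame at FsDatasetImpl.java:2318 in the trace). A paraphrased sketch of that check, with the surrounding bookkeeping elided:
{code:title=FsDatasetImpl#initReplicaRecovery (paraphrased sketch)}
// Block recovery runs under a recovery id, which is a newly issued
// generation stamp. If a replica already carries a generation stamp at or
// above the recovery id, this recovery attempt is stale and is rejected.
if (replica.getGenerationStamp() >= recoveryId) {
  throw new IOException("THIS IS NOT SUPPOSED TO HAPPEN: "
      + "replica.getGenerationStamp() >= recoveryId = " + recoveryId
      + ", block=" + block + ", replica=" + replica);
}
{code}
This is why the quoted warning is doubly confusing: the outer message claims a failure to "obtain replica info", while the nested exception actually reports a rejected recovery attempt.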



[jira] [Updated] (HDFS-11526) Fix confusing block recovery message

2017-03-14 Yiqun Lin (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-11526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yiqun Lin updated HDFS-11526:
-
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3, 2.9.0

Test failures are not related. Committed this to trunk and branch-2. Thanks 
[~jojochuang] for the review!




[jira] [Updated] (HDFS-11526) Fix confusing block recovery message

2017-03-13 Yiqun Lin (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-11526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yiqun Lin updated HDFS-11526:
-
Attachment: HDFS-11526-branch-2.patch




[jira] [Updated] (HDFS-11526) Fix confusing block recovery message

2017-03-13 Yiqun Lin (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-11526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yiqun Lin updated HDFS-11526:
-
Attachment: (was: HDFS-11526-branch-2.patch)




[jira] [Updated] (HDFS-11526) Fix confusing block recovery message

2017-03-13 Yiqun Lin (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-11526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yiqun Lin updated HDFS-11526:
-
Attachment: HDFS-11526-branch-2.patch

Attaching a new patch for branch-2.




[jira] [Updated] (HDFS-11526) Fix confusing block recovery message

2017-03-12 Yiqun Lin (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-11526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yiqun Lin updated HDFS-11526:
-
Attachment: HDFS-11526.001.patch

Attaching a patch to fix this. Please help check that the new message prints 
correctly, [~jojochuang]. Thanks.
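
A minimal sketch of the kind of rewording the patch is after (the exact text in HDFS-11526.001.patch may differ):
{code:title=BlockRecoveryWorker#recover (rewording sketch)}
} catch (IOException e) {
  ++errorCount;
  // Describe the operation that actually failed (recovering the block)
  // instead of claiming that replica info could not be obtained.
  InterDatanodeProtocol.LOG.warn("Failed to recover block (block="
      + block + ", datanode=" + id + ")", e);
}
{code}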




[jira] [Updated] (HDFS-11526) Fix confusing block recovery message

2017-03-12 Yiqun Lin (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-11526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yiqun Lin updated HDFS-11526:
-
Status: Patch Available  (was: Open)




[jira] [Updated] (HDFS-11526) Fix confusing block recovery message

2017-03-10 Wei-Chiu Chuang (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-11526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HDFS-11526:
---
Labels: supportability  (was: )



