Good day,

I am running the Hadoop Common library (version 3.1.0) in the Cloudera Docker/VM. While copying a local file to the HDFS file system, I encounter the stack trace below:


        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2226)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2222)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2220)
2018-11-09 01:46:27,573 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
allocateBlock: /datahub/staging/temp_Data_Hub_1_99.txt. 
BP-1388946040-10.0.0.8-1508802350597 blk_1073743803_2981{blockUCState=UNDER_
CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[[DISK]DS-cbee6448-2551-4251-b79b-2980ec42eb6d:NORMAL:172.17.0.2:50010|RBW]]}
2018-11-09 01:46:30,645 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 233 Total time for transactions(ms): 24 Number of 
transactions batched in Syncs: 22 Number of syncs: 211 SyncTimes(ms): 106
2018-11-09 01:46:30,650 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: 
/user/spark/applicationHistory/.88b38315-c4e5-4a4d-a0fa-f2ccf862b309 is closed 
by DFSClient_NONMAPREDUCE_1462663536_1
2018-11-09 01:46:30,688 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], 
storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more 
information, please enable DEBUG log level on 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and 
org.apache.hadoop.net.NetworkTopology
2018-11-09 01:46:30,688 WARN 
org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough 
replicas: expected size is 1 but only 0 storage types can be selected 
(replication=1, selected=[], unavailable=[DISK], removed=[DISK], 
policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], 
replicationFallbacks=[ARCHIVE]})
2018-11-09 01:46:30,689 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
place enough replicas, still in need of 1 to reach 1 
(unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, 
storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
newBlock=true) All required storage types are unavailable:  
unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, 
storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2018-11-09 01:46:30,689 WARN org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:ronnie (auth:SIMPLE) cause:java.io.IOException: 
File /datahub/staging/temp_Data_Hub_1_99.txt could only be replicated to 0 
nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 
node(s) are excluded in this operation.
2018-11-09 01:46:30,689 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 
on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 
10.60.50.201:48616 Call#4 Retry#0
java.io.IOException: File /datahub/staging/temp_Data_Hub_1_99.txt could only be 
replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) 
running and 1 node(s) are excluded in this operation.
        at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1720)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3440)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:686)
        at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:217)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:506)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2226)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2222)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2220)
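For context, the copy that triggers this was issued roughly like the following (the exact local path is illustrative; the HDFS destination matches the path in the log above):

    # Illustrative reproduction - local source path is a placeholder
    hdfs dfs -put /tmp/temp_Data_Hub_1_99.txt /datahub/staging/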




I have checked that both my DataNode and NameNode are running fine on the Cloudera machine. Do you have any idea what might be causing this?


Thanks,

Ronnie
