[ 
https://issues.apache.org/jira/browse/HDFS-3333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263351#comment-13263351
 ] 

liaowenrui commented on HDFS-3333:
----------------------------------

NameNode logs:

2012-04-27 10:02:07,453 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
addStoredBlock: blockMap updated: 10.18.52.55:50010 is added to 
blk_5358037144179192664_118193{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[10.18.52.55:50010|RBW]]} size 0
2012-04-27 10:02:07,454 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not able to 
place enough replicas, still in need of 2 to reach 3
For more information, please enable DEBUG level logging on the 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem logger.
2012-04-27 10:02:07,454 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /user/root/lwr/test31.txt. 
BP-1941047897-10.18.40.154-1335419775245 
blk_-5430945475809539701_118194{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[10.18.52.55:50010|RBW]]}
2012-04-27 10:02:07,466 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
addStoredBlock: blockMap updated: 10.18.52.55:50010 is added to 
blk_-5430945475809539701_118194{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[10.18.52.55:50010|RBW]]} size 0
2012-04-27 10:02:07,467 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not able to 
place enough replicas, still in need of 2 to reach 3
For more information, please enable DEBUG level logging on the 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem logger.
2012-04-27 10:02:07,467 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /user/root/lwr/test31.txt. 
BP-1941047897-10.18.40.154-1335419775245 
blk_7179050910888663641_118195{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[10.18.52.55:50010|RBW]]}
2012-04-27 10:02:07,479 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
addStoredBlock: blockMap updated: 10.18.52.55:50010 is added to 
blk_7179050910888663641_118195{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[10.18.52.55:50010|RBW]]} size 0
2012-04-27 10:02:07,480 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not able to 
place enough replicas, still in need of 2 to reach 3
For more information, please enable DEBUG level logging on the 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem logger.
2012-04-27 10:02:07,480 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /user/root/lwr/test31.txt. 
BP-1941047897-10.18.40.154-1335419775245 
blk_5515331497909593936_118196{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[10.18.52.55:50010|RBW]]}
2012-04-27 10:02:07,490 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
addStoredBlock: blockMap updated: 10.18.52.55:50010 is added to 
blk_5515331497909593936_118196{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[10.18.52.55:50010|RBW]]} size 0
2012-04-27 10:02:07,491 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not able to 
place enough replicas, still in need of 2 to reach 3
For more information, please enable DEBUG level logging on the 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem logger.
2012-04-27 10:02:07,491 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /user/root/lwr/test31.txt. 
BP-1941047897-10.18.40.154-1335419775245 
blk_6362697618477922316_118197{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[10.18.52.55:50010|RBW]]}
2012-04-27 10:02:07,503 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
addStoredBlock: blockMap updated: 10.18.52.55:50010 is added to 
blk_6362697618477922316_118197{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[10.18.52.55:50010|RBW]]} size 0
2012-04-27 10:02:07,503 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not able to 
place enough replicas, still in need of 2 to reach 3
For more information, please enable DEBUG level logging on the 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem logger.
2012-04-27 10:02:07,503 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /user/root/lwr/test31.txt. 
BP-1941047897-10.18.40.154-1335419775245 
blk_454857163161775943_118198{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[10.18.52.55:50010|RBW]]}
2012-04-27 10:02:07,591 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not able to 
place enough replicas, still in need of 3 to reach 3
For more information, please enable DEBUG level logging on the 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem logger.
2012-04-27 10:02:07,595 ERROR org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:l00110880 (auth:SIMPLE) 
cause:java.io.IOException: File /user/root/lwr/test31.txt could only be 
replicated to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) 
running and 3 node(s) are excluded in this operation.
2012-04-27 10:02:07,596 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 
on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 
10.18.47.134:3284: error: java.io.IOException: File /user/root/lwr/test31.txt 
could only be replicated to 0 nodes instead of minReplication (=1).  There are 
3 datanode(s) running and 3 node(s) are excluded in this operation.
java.io.IOException: File /user/root/lwr/test31.txt could only be replicated to 
0 nodes instead of minReplication (=1).  There are 3 datanode(s) running and 3 
node(s) are excluded in this operation.
        at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1259)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1916)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:472)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:292)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42602)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:428)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:905)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1684)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1205)
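
The pattern in the log is one allocateBlock for the same file roughly every 12 ms (each 512-byte write fills a whole block), escalating from "in need of 2" to "in need of 3" until all three datanodes are excluded and addBlock fails. For reference, a minimal sketch of the same reproduction against an in-process cluster; this is not the reporter's code, and it assumes the hadoop-hdfs test artifacts are on the classpath, with class and key names as in the 2.0 line:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class Hdfs3333Repro {
    public static void main(String[] args) throws Exception {
        Configuration conf = new HdfsConfiguration();
        // 512-byte blocks: every 512-byte write below fills a block and
        // forces a fresh allocateBlock call on the NameNode.
        conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 512);
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
                .numDataNodes(3).build();
        try {
            FileSystem fs = cluster.getFileSystem();
            byte[] buf = new byte[512];
            FSDataOutputStream out = fs.create(new Path("/user/root/lwr/test31.txt"));
            for (int i = 0; i < 100000; i++) {
                out.write(buf, 0, 512);  // one block allocation per iteration
            }
            out.close();
        } finally {
            cluster.shutdown();
        }
    }
}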
                                                                                
> java.io.IOException: File /user/root/lwr/test31.txt could only be replicated 
> to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) running 
> and 3 node(s) are excluded in this operation.
> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3333
>                 URL: https://issues.apache.org/jira/browse/HDFS-3333
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.23.1, 2.0.0
>         Environment: namenode:1 (IP:10.18.40.154)
> datanode:3 (IP:10.18.40.154,10.18.40.102,10.18.52.55)
> HOST-10-18-40-154:/home/APril20/install/hadoop/namenode/bin # ./hadoop 
> dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Configured Capacity: 129238446080 (120.36 GB)
> Present Capacity: 51742765056 (48.19 GB)
> DFS Remaining: 49548591104 (46.15 GB)
> DFS Used: 2194173952 (2.04 GB)
> DFS Used%: 4.24%
> Under replicated blocks: 14831
> Blocks with corrupt replicas: 1
> Missing blocks: 100
> -------------------------------------------------
> Datanodes available: 3 (3 total, 0 dead)
> Live datanodes:
> Name: 10.18.40.102:50010 (10.18.40.102)
> Hostname: linux.site
> Decommission Status : Normal
> Configured Capacity: 22765834240 (21.2 GB)
> DFS Used: 634748928 (605.34 MB)
> Non DFS Used: 1762299904 (1.64 GB)
> DFS Remaining: 20368785408 (18.97 GB)
> DFS Used%: 2.79%
> DFS Remaining%: 89.47%
> Last contact: Fri Apr 27 10:35:57 IST 2012
> Name: 10.18.40.154:50010 (HOST-10-18-40-154)
> Hostname: HOST-10-18-40-154
> Decommission Status : Normal
> Configured Capacity: 23259897856 (21.66 GB)
> DFS Used: 812396544 (774.76 MB)
> Non DFS Used: 8297279488 (7.73 GB)
> DFS Remaining: 14150221824 (13.18 GB)
> DFS Used%: 3.49%
> DFS Remaining%: 60.84%
> Last contact: Fri Apr 27 10:35:58 IST 2012
> Name: 10.18.52.55:50010 (10.18.52.55)
> Hostname: HOST-10-18-52-55
> Decommission Status : Normal
> Configured Capacity: 83212713984 (77.5 GB)
> DFS Used: 747028480 (712.42 MB)
> Non DFS Used: 67436101632 (62.8 GB)
> DFS Remaining: 15029583872 (14 GB)
> DFS Used%: 0.9%
> DFS Remaining%: 18.06%
> Last contact: Fri Apr 27 10:35:58 IST 2012
>            Reporter: liaowenrui
>   Original Estimate: 0.2h
>  Remaining Estimate: 0.2h
>
> log4j:WARN No appenders could be found for logger 
> (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
> log4j:WARN Please initialize the log4j system properly.
> java.io.IOException: File /user/root/lwr/test31.txt could only be replicated 
> to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) running 
> and 3 node(s) are excluded in this operation.
>       at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1259)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1916)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:472)
>       at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:292)
>       at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42602)
>       at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:428)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:905)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1684)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:396)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1205)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1682)
> i:4284
>       at org.apache.hadoop.ipc.Client.call(Client.java:1159)
>       at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:185)
>       at $Proxy9.addBlock(Unknown Source)
>       at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:165)
>       at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:84)
>       at $Proxy9.addBlock(Unknown Source)
>       at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:295)
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1097)
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:973)
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)
> Test case:
> import java.io.IOException;
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hdfs.DFSConfigKeys;
> import org.apache.hadoop.hdfs.DistributedFileSystem;
>
> public class Write1 {
>     public static void main(String[] args) throws Exception {
>         String hdfsFile = "/user/root/lwr/test31.txt";
>         byte[] writeBuff = new byte[1024 * 1024];
>         int i = 0;
>         DistributedFileSystem dfs = new DistributedFileSystem();
>         Configuration conf = new Configuration();
>         // conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 512);
>         // conf.setLong(DFSConfigKeys.DFS_REPLICATION_KEY, 2);
>         // conf.setInt("dfs.replication", 3);
>         // 512-byte blocks: each 512-byte write below allocates a new block.
>         conf.setLong("dfs.blocksize", 512);
>         dfs.initialize(URI.create("hdfs://10.18.40.154:9000"), conf);
>         // dfs.delete(new Path(hdfsFile));
>         try {
>             FSDataOutputStream out1 = dfs.create(new Path(hdfsFile));
>             for (i = 0; i < 100000; i++) {
>                 out1.write(writeBuff, 0, 512);
>             }
>             out1.hsync();
>             out1.close();
>             /*
>             FSDataOutputStream out = dfs.append(new Path(hdfsFile), 4096);
>             out.write(writeBuff, 0, 512 * 1024);
>             out.hsync();
>             out.close();
>             */
>         } catch (IOException e) {
>             // Loop counter when the write failed (cf. "i:4284" in the trace above).
>             System.out.println("i:" + i);
>             e.printStackTrace();
>         } finally {
>             System.out.println("i:" + i);
>             System.out.println("end!");
>         }
>     }
> }
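>
> For scale, a quick back-of-the-envelope check of the write pattern (plain Java, no Hadoop required):
>
> long totalBytes  = 100000L * 512;           // 51,200,000 bytes, about 48.8 MB
> long blockSize   = 512;                     // matches conf.setLong("dfs.blocksize", 512)
> long allocations = totalBytes / blockSize;  // ~100,000 addBlock calls for one ~49 MB file
>
> So the NameNode is asked to allocate a new block for every 512 bytes written, which is the allocateBlock flood visible in the log above.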

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

