I have a fresh Hadoop 0.20.2 installation on VirtualBox 4.0.8 with JDK
1.6.0_26. The problem is that when I try to put a file into HDFS, it
throws the error `org.apache.hadoop.ipc.RemoteException:
java.io.IOException: File /path/to/file could only be replicated to 0
nodes, instead of 1`; however, there is no problem creating a folder,
as `hadoop fs -ls` prints:

Found 1 items
drwxr-xr-x   - user supergroup          0 2011-07-15 11:09 /user/user/test

I also tried flushing the firewall (removing all iptables
restrictions), but the error is still thrown when uploading a file
from the local filesystem with `hadoop fs -put /tmp/x test`.
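
For reference, the full sequence is essentially:

    hadoop fs -mkdir test        # succeeds; the directory shows up in the ls output above
    hadoop fs -put /tmp/x test   # fails with the replication error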

The namenode log shows:

2011-07-15 10:42:43,491 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from aaa.bbb.ccc.22:50010 storage DS-929017105-aaa.bbb.ccc.22-50010-1310697763488
2011-07-15 10:42:43,495 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/aaa.bbb.ccc.22:50010
2011-07-15 10:42:44,169 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from aaa.bbb.ccc.35:50010 storage DS-884574392-aaa.bbb.ccc.35-50010-1310697764164
2011-07-15 10:42:44,170 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/aaa.bbb.ccc.35:50010
2011-07-15 10:42:44,507 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from aaa.bbb.ccc.11:50010 storage DS-1537583073-aaa.bbb.ccc.11-50010-1310697764488
2011-07-15 10:42:44,507 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/aaa.bbb.ccc.11:50010
2011-07-15 10:42:45,796 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from aaa.bbb.ccc.25:50010 storage DS-1500589162-aaa.bbb.ccc.25-50010-1310697765386
2011-07-15 10:42:45,797 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/aaa.bbb.ccc.25:50010
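
For completeness, I can also check whether the namenode is stuck in
safe mode:

    hadoop dfsadmin -safemode get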

And all datanodes have messages similar to the following:

2011-07-15 10:42:46,562 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
2011-07-15 10:42:47,163 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 3 msecs
2011-07-15 10:42:47,187 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.
2011-07-15 11:19:42,931 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 1 msecs
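
One thing I have not ruled out is that the client cannot reach the
datanodes' data-transfer port, even though registration with the
namenode clearly works. A quick connectivity check would be something
like this (assuming the default 50010 data-transfer port; the host is
one of the datanodes above):

    telnet aaa.bbb.ccc.22 50010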

The command `hadoop fsck /` displays:

Status: HEALTHY
 Total size:    0 B
 Total dirs:    3
 Total files:   0 (Files currently being written: 1)
 Total blocks (validated):      0
 Minimally replicated blocks:   0
 Over-replicated blocks:        0
 Under-replicated blocks:       0
 Mis-replicated blocks:         0
 Default replication factor:    3
 Average block replication:     0.0
 Corrupt blocks:                0
 Missing replicas:              0
 Number of data-nodes:          4
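
I can also post the output of `hadoop dfsadmin -report` if it would
help; as far as I understand, it shows the configured and remaining
capacity per datanode, which should rule out a disk-space cause:

    hadoop dfsadmin -report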

The configuration settings include:

- Master node:
core-site.xml
  <property>
    <name>fs.default.name</name>
    <value>hdfs://lab01:9000/</value>
  </property>

hdfs-site.xml
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>

- Slave nodes:
core-site.xml
  <property>
    <name>fs.default.name</name>
    <value>hdfs://lab01:9000/</value>
  </property>

hdfs-site.xml
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
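
I have not set dfs.data.dir or hadoop.tmp.dir explicitly, so the
datanodes should be writing blocks under the defaults. If pointing
them at an explicit location is worth trying, my understanding is it
would go in hdfs-site.xml like this (the path is just an example):

  <property>
    <name>dfs.data.dir</name>
    <!-- example path; not set in my current config -->
    <value>/home/user/hdfs/data</value>
  </property>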

Am I missing any configuration? Or is there anywhere else I can check?

Thanks.
