Hi, I'm trying to add a file to HDFS, but I get the following error: [code]
t_-220043647) from 127.0.0.1:59007: error: java.io.IOException: File /user/hadoop/gutenberg/gutenberg/20417-8.txt could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /user/hadoop/gutenberg/gutenberg/20417-8.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
2010-01-21 18:20:31,914 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop,hadoop,dialout,cdrom,floppy,audio,video,plugdev,lpadmin,admin ip=/127.0.0.1 cmd=create src=/user/hadoop/gutenberg/4300-8.txt dst=null perm=hadoop:supergroup:rw-r--r--
2010-01-21 18:20:31,931 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 54310, call addBlock(/user/hadoop/gutenberg/4300-8.txt, DFSClient_2035415000) from 127.0.0.1:59010: error: java.io.IOException: File /user/hadoop/gutenberg/4300-8.txt could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /user/hadoop/gutenberg/4300-8.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
[/code]
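For context, the upload itself is just the standard shell copy of the Gutenberg text files. The local source path below is only indicative (mine may differ); the HDFS target matches the paths in the log above:

[code]
# Copy the locally downloaded Gutenberg texts into HDFS under /user/hadoop/gutenberg
# (the local directory name here is a placeholder)
bin/hadoop dfs -copyFromLocal /tmp/gutenberg gutenberg
[/code]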
Here are the configuration files:

[code]
had...@hawai:~/hadoop-0.20.1_single_node/conf$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/dir/hadoop-${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system. A URI whose scheme and
    authority determine the FileSystem implementation. The uri's scheme determines
    the config property (fs.SCHEME.impl) naming the FileSystem implementation class.
    The uri's authority is used to determine the host, port, etc. for a
    filesystem.</description>
  </property>
</configuration>

had...@hawai:~/hadoop-0.20.1_single_node/conf$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication. The actual number of replications
    can be specified when the file is created. The default is used if
    replication is not specified in create time.</description>
  </property>
</configuration>
[/code]

I've also run "bin/hadoop namenode -format", but the error persists (the sequence of commands I use is in the P.S. below).

1 - Where can I configure HDFS logging at DEBUG level?
2 - I've already searched the web for a solution, but without success. What's going on with HDFS? Why can't the file be replicated to even one node, if dfs.replication is configured correctly?
3 - How can I solve this problem?

I've been stuck on this for about five days and I don't have any clue. Can somebody help?

Regards,
-- PSC
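P.S. For completeness, this is more or less the sequence I run when I reformat HDFS; the last two commands are only my way of checking whether a DataNode comes up at all, so treat them as illustrative rather than exact:

[code]
# Stop all daemons, wipe the namespace, and bring everything back up
bin/stop-all.sh
bin/hadoop namenode -format
bin/start-all.sh

# Check which daemons are actually running and whether any DataNode has registered
jps
bin/hadoop dfsadmin -report
[/code]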