Hello, I am using Hadoop 0.20.1 and have a little problem with it... I hope someone can help.
I have a 3-node setup and have set dfs.replication and dfs.replication.max to 1 (in the file hdfs-site.xml). That works fine when I put a file into the Hadoop filesystem directly, e.g.:

./hadoop fs -put ~/debian-503-i386-businesscard.iso abc.iso

But when I try the same thing through fuse_dfs, I get the following error from the fuse_dfs_wrapper.sh script:

LOOKUP /temp/test.test
unique: 21, error: -2 (No such file or directory), outsize: 16
unique: 22, opcode: CREATE (35), nodeid: 7, insize: 58
WARN: hdfs does not truly support O_CREATE && O_EXCL
Exception in thread "Thread-6" org.apache.hadoop.ipc.RemoteException: java.io.IOException: failed to create file /temp/test.test on client 10.8.0.1. Requested replication 3 exceeds maximum 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1074)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:977)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:377)
        ...

The same messages show up in the namenode log:

2010-01-13 04:36:57,183 WARN org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.startFile: failed to create file /temp/test.test on client 10.8.0.1. Requested replication 3 exceeds maximum 1
2010-01-13 04:36:57,183 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000, call create(/temp/test.test, rwxr-xr-x, DFSClient_814881830$
Requested replication 3 exceeds maximum 1
java.io.IOException: failed to create file /temp/test.test on client 10.8.0.1. Requested replication 3 exceeds maximum 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1074)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:977)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:377)
        ...

I hope someone can help me solve this problem.

Best regards,
Klaus
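P.S. For reference, the replication settings in my hdfs-site.xml look roughly like this (only the two properties I mentioned are shown, inside the <configuration> element; everything else in the file is omitted):

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.replication.max</name>
  <value>1</value>
</property>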