Hi,
I'm trying to set up Hadoop on a single computer. When I run the
start-all.sh script it goes smoothly (including the datanode setup), but
when I try to "bin/hadoop dfs -put" it gives me the following error:

put: java.io.IOException: Failed to create file
/user//mytest/.slaves.crc on client 127.0.0.1 because there were not
enough datanodes available. Found 0 datanodes but MIN_REPLICATION for
the cluster is configured to be 1.

Is /user// a folder in HDFS? If not, it doesn't exist, and in any case I
don't have write permissions in /usr. If that is the problem, how do I
change the configuration to use another location?
Can anyone advise?

Here is my hadoop-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop-${user.name}</value>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>localhost:54310</value>
  </property>

  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>8</value>
  </property>

  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx512m</value>
  </property>

</configuration>
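If it helps, my (possibly wrong) understanding is that the datanode keeps its blocks under dfs.data.dir, which defaults to ${hadoop.tmp.dir}/dfs/data, so if I need to move the storage somewhere else I would guess I should add a property along these lines (the path here is just an example, not what I actually use):

```xml
<property>
  <name>dfs.data.dir</name>
  <!-- Illustrative path only; by default this is ${hadoop.tmp.dir}/dfs/data -->
  <value>/home/myuser/hadoop/dfs/data</value>
</property>
```

Is that the right property to change, or is the /user// prefix in the error coming from somewhere else entirely?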