Have you looked at the log files?

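On a standard install the datanode usually writes its log under the logs/ directory next to the Hadoop scripts; a quick way to see why it is not registering is something like this (a sketch, assuming the default file-naming pattern and that you run it from the Hadoop install directory):

    # exact file name depends on your user name and hostname (assumed pattern)
    tail -100 logs/hadoop-*-datanode-*.log
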
On 9/2/07 6:14 AM, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
wrote:

> Hi,
> I've tried setting up Hadoop on a single computer, and I'm
> experiencing a problem with the datanode. When I run the start-all.sh
> script it seems to run smoothly, including setting up the datanode.
> The problem occurs when I try to use HDFS, for example by running
> "bin/hadoop dfs -put <localsrc> <dst>".
> It gives me the following error:
> 
> put: java.io.IOException: Failed to create file
> /user/chenfren/mytest/.slaves.crc on client 127.0.0.1 because there
> were not enough datanodes available. Found 0 datanodes but
> MIN_REPLICATION for the cluster is configured to be 1.
> 
> I'm not sure whether "/user/chenfren/mytest/" refers to a path in
> HDFS or not. If not, then "/user/chenfren" doesn't exist, and I don't
> have write permissions under /usr/ anyway. So if this is the case,
> how do I change this directory?
> This is the hadoop-site.xml I use:
> 
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> 
> <!-- Put site-specific property overrides in this file. -->
> 
> <configuration>
> 
> <property>
>    <name>hadoop.tmp.dir</name>
>    <value>/tmp/hadoop-${user.name}</value>
> </property>
> <property>
>    <name>fs.default.name</name>
>    <value>localhost:54310</value>
> </property>
> <property>
>    <name>mapred.job.tracker</name>
>    <value>localhost:54311</value>
> </property>
> <property>
>    <name>dfs.replication</name>
>    <value>8</value>
> </property>
> <property>
>    <name>mapred.child.java.opts</name>
>    <value>-Xmx512m</value>
> </property>
> 
> </configuration>
> 
> Can anyone advise?
> 

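The error says 0 datanodes were found even though start-all.sh appeared to succeed, which usually means the datanode process died shortly after startup. One quick check, assuming a standard install run from the Hadoop directory:

    # a working single-node setup should report 1 live datanode
    bin/hadoop dfsadmin -report

If it still reports 0 datanodes, the datanode log mentioned above should say why it failed to come up.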