Hi,
I still get the error.
This is how I run it:
<10:51> abel-35 42% bin/start-all.sh
starting namenode, logging to /vol/scratch/chenf/hadoop-0.13.1/bin/../logs/hadoop-chenfren-namenode-abel-35.out
abel-35: starting datanode, logging to /vol/scratch/chenf/hadoop-0.13.1/bin/../logs/hadoop-chenfren-datanode-abel-35.out
abel-35: starting secondarynamenode, logging to /vol/scratch/chenf/hadoop-0.13.1/bin/../logs/hadoop-chenfren-secondarynamenode-abel-35.out
starting jobtracker, logging to /vol/scratch/chenf/hadoop-0.13.1/bin/../logs/hadoop-chenfren-jobtracker-abel-35.out
abel-35: starting tasktracker, logging to /vol/scratch/chenf/hadoop-0.13.1/bin/../logs/hadoop-chenfren-tasktracker-abel-35.out
<10:51> abel-35 43% bin/hadoop dfs -put conf test1
put: java.io.IOException: Failed to create file /user/chenfren/test1/.commons-logging.properties.crc on client 132.67.104.218 because there were not enough datanodes available. Found 0 datanodes but MIN_REPLICATION for the cluster is configured to be 1.
<10:51> abel-35 44% bin/stop-all.sh
stopping jobtracker
abel-35: stopping tasktracker
stopping namenode
abel-35: stopping datanode
abel-35: stopping secondarynamenode

I also attached the namenode and datanode logs, and hadoop-site.xml.
Both conf/masters and conf/slaves contain the single line "abel-35" (the machine's hostname).

Thanks for the help.


-----Original Message-----
>From: C G
>Sent: 9/4/2007 10:40:26 PM
>To: [email protected]
>Subject: Re: problem getting started with hadoop
>
>1. I would suggest changing /tmp/hadoop-${user.name} to something concrete like:
>
>/tmp/hadoop
>
>otherwise, make sure that user.name is defined.
>
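(For reference, a concrete setting in hadoop-site.xml might look like the sketch below; /tmp/hadoop is just an example path:)

<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop</value> <!-- fixed path, no ${user.name} substitution -->
</property>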
>2. You are trying to run a single node, but you have dfs.replication set to 8. It should be 1.
>
>3. Does your machine respect localhost? Can you ping localhost? If not, either fix your /etc/hosts file or use the actual hostname of the machine.
>
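(For example, assuming 132.67.104.218 from the error above really is abel-35's address, /etc/hosts could read:)

127.0.0.1        localhost
132.67.104.218   abel-35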
>4. Do you have only the single machine listed in both the conf/masters and conf/slaves files?
>
>When I did my first tests using a single node I ran into the same sorts of problems you had. My issue turned out to be hostname resolution confusion. I made changes to the way my system was configured (/etc/hosts, etc.) so that the various APIs which resolve hostnames and IP addresses could all agree. With that complete, things worked great. Note that if you're renting time on a hosted server someplace, you are almost guaranteed to have to spend time sorting out whatever OS configuration they happened to stick on the machine.
>
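(A quick way to sanity-check that the different lookups agree, using standard tools:)

% hostname              # should print abel-35
% ping -c 1 localhost   # should answer from 127.0.0.1
% ping -c 1 abel-35     # should answer from the machine's real IP, not 127.0.0.1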
>Hope this helps... 
>Chris 


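Attached hadoop-site.xml: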
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
  <!-- base directory for Hadoop's local temporary and data files -->
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
</property>
<property>
  <!-- namenode (HDFS) host and port -->
  <name>fs.default.name</name>
  <value>abel-35:50310</value>
</property>
<property>
  <!-- jobtracker host and port -->
  <name>mapred.job.tracker</name>
  <value>abel-35:50311</value>
</property>
<property>
  <!-- single node, so keep one replica per block -->
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <!-- heap limit for each map/reduce child JVM -->
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>

</configuration>
