Hi

I faced a similar issue on Ubuntu with Hadoop 0.20 and modified the
start-all.sh script to introduce a sleep between the daemons:

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/hadoop-config.sh

# start dfs daemons
"$bin"/start-dfs.sh --config $HADOOP_CONF_DIR
echo 'sleeping'
sleep 60
echo 'awake'
# start mapred daemons
"$bin"/start-mapred.sh --config $HADOOP_CONF_DIR


This seems to work. Please see if this works for you.
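An alternative to guessing a fixed sleep duration is to block until the
NameNode actually leaves safe mode. A sketch of the same section of
start-all.sh, assuming your release supports the `dfsadmin -safemode wait`
subcommand (present in 0.20, but verify on your install):

```shell
# start dfs daemons
"$bin"/start-dfs.sh --config $HADOOP_CONF_DIR
# block until the NameNode exits safe mode, instead of sleeping a fixed 60s
"$bin"/hadoop dfsadmin -safemode wait
# start mapred daemons
"$bin"/start-mapred.sh --config $HADOOP_CONF_DIR
```

This waits exactly as long as HDFS needs rather than a hard-coded interval.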
Thanks and Regards,
Sonal


On Thu, Feb 11, 2010 at 3:56 AM, E. Sammer <[email protected]> wrote:

> On 2/10/10 5:19 PM, Nick Klosterman wrote:
>
>> @E.Sammer, no I don't *think* that it is part of another cluster. The
>> tutorial is for a single node cluster, just as an initial setup to see if
>> you can get things up and running. I have reformatted the namenode
>> several times in my effort to get hadoop to work.
>>
>
> What I mean is that the data node, at some point, connected to your name
> node. If you reformat the name node, the data node must be wiped clean; it's
> effectively trying to join a name node that no longer exists.
>
>
> --
> Eric Sammer
> [email protected]
> http://esammer.blogspot.com
>
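For anyone hitting the stale data node problem Eric describes, a hedged
sketch of the wipe. This assumes the default 0.20 storage location under
hadoop.tmp.dir, i.e. /tmp/hadoop-${USER}/dfs/data; check the dfs.data.dir
property in your own config before deleting anything:

```shell
# stop all daemons first
bin/stop-all.sh
# remove the data node's storage directory so it can re-register with the
# freshly formatted name node (path is the 0.20 default; verify dfs.data.dir
# in your hadoop-site.xml before running this)
rm -rf /tmp/hadoop-${USER}/dfs/data
# reformat the name node, then restart
bin/hadoop namenode -format
bin/start-all.sh
```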