i thought that start-all and stop-all weren't supposed to be used on
distributed clusters...  that it was just sugar for testing/learning...

try start-dfs.sh, then start-mapred.sh

(and if you stop them, it's stop-mapred.sh, then stop-dfs.sh)
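
For Hadoop releases of that era (0.18/0.19), the per-layer scripts live in
bin/ on the master. A minimal sketch of the ordering described above,
assuming a standard tarball layout with HADOOP_HOME set (the path is
illustrative):

```shell
# Start HDFS first, then MapReduce (run on the master node).
# Assumes HADOOP_HOME points at the install, e.g. /usr/local/hadoop-0.19.0.
$HADOOP_HOME/bin/start-dfs.sh      # brings up the NameNode and the DataNodes
$HADOOP_HOME/bin/start-mapred.sh   # brings up the JobTracker and TaskTrackers

# Stopping is the reverse: MapReduce first, then HDFS.
$HADOOP_HOME/bin/stop-mapred.sh
$HADOOP_HOME/bin/stop-dfs.sh
```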

then again it's ec2 so they might do something weird or special that i'm not
aware of.

-mike

On Tue, Nov 24, 2009 at 1:24 PM, Stephen Watt <[email protected]> wrote:

> Hi Mark
>
> Are you starting the clusters from the contrib/ec2 scripts? These scripts
> have a special way of bringing up the cluster: they pass in the hostnames
> of the slaves as those are assigned by ec2, so I think stop-all and
> start-all will not work, as both assume the slaves are defined in the
> slaves file. It's been a while since I looked at this, so excuse my lack
> of specifics. I believe there is a script in the /root directory of each
> ec2 image that these values are passed into, and it does the work of
> starting the tasktracker/datanode processes on each slave.
>
> Kind regards
> Steve Watt
>
>
>
> From: Mark Kerzner <[email protected]>
> To: [email protected]
> Date: 11/24/2009 03:02 PM
> Subject: Hadoop on EC2
>
> Hi,
>
> I am starting a cluster with stock Apache Hadoop distributions, like 0.18
> and 0.19. This all works fine. Then I log in and see that the Hadoop
> daemons are already running. However, when I try
>
> # which hadoop
> /usr/local/hadoop-0.19.0/bin/hadoop
> # jps
> 1355 Jps
> 1167 NameNode
> 1213 JobTracker
> # hadoop fs -ls hdfs://localhost/
> 09/11/24 15:33:56 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:8020. Already tried 0 time(s).
>
> I do stop-all.sh and then start-all.sh, and it does not help. What am I
> doing wrong?
>
> Thank you,
> Mark
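Mark's `Retrying connect to server: localhost/127.0.0.1:8020` error fits
Steve's description: if the ec2 launch scripts configured the namenode under
the instance's internal hostname, nothing is listening on localhost. A
hedged way to check what the node is actually configured to use (in the
0.18/0.19 era the setting is fs.default.name in conf/hadoop-site.xml; the
grep is only illustrative):

```shell
# See which namenode address this node is configured for.
grep -A1 fs.default.name $HADOOP_HOME/conf/hadoop-site.xml

# Then address HDFS by that configured name instead of localhost, or just
# rely on the configured default filesystem:
hadoop fs -ls /
```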
