Well, maybe I found what I was doing wrong:

I was always using hdfs://localhost as the URI, and the listing works just as
well with a plain / instead
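
A likely explanation (my guess, not confirmed in this thread): fs.default.name
in conf/hadoop-site.xml (the config file used by 0.18/0.19) points the daemons
at a non-default port, so a bare path like / resolves against the configured
filesystem, while an explicit hdfs://localhost/ with no port falls back to the
default 8020 and the client keeps retrying. The port below is hypothetical;
check your own config:

```xml
<!-- conf/hadoop-site.xml -- sketch only; the 9000 here is a common
     example value, not necessarily what this cluster uses -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
```

With that setting, `hadoop fs -ls /` talks to localhost:9000, but
`hadoop fs -ls hdfs://localhost/` tries localhost:8020 and fails.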

Mark

On Tue, Nov 24, 2009 at 3:01 PM, Mark Kerzner <[email protected]> wrote:

> Hi,
>
> I am starting a cluster with Apache Hadoop distributions 0.18 and 0.19. This
> all works fine; then I log in and see that the Hadoop daemons are already
> running. However, when I try
>
> # which hadoop
> /usr/local/hadoop-0.19.0/bin/hadoop
> # jps
> 1355 Jps
> 1167 NameNode
> 1213 JobTracker
> # hadoop fs -ls hdfs://localhost/
> 09/11/24 15:33:56 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:8020. Already tried 0 time(s).
>
> I do stop-all.sh and then start-all.sh, and it does not help. What am I
> doing wrong?
>
> Thank you,
> Mark
>
>
