On Mon, Aug 24, 2009 at 8:55 PM, Matt Massie <[email protected]> wrote:
> Jeff-
>
> If you look in /etc/hosts, you see the "localhost" is 127.0.0.1 (and if you
> use IPv6, ::1).  This address is strictly loopback and can only be used for
> inter-process communication on a single machine.
>
> See: http://en.wikipedia.org/wiki/Localhost
>
> -Matt
>
>
> On Mon, Aug 24, 2009 at 5:47 PM, zhang jianfeng <[email protected]> wrote:
>
>> Hi all,
>>
>>
>>
>> I have two computers, and in the hadoop-site.xml, I define the
>> fs.default.name as localhost:9000, then I cannot access the cluster with
>> Java API from another machine
>>
>> But if I change it to its real IP  192.168.1.103:9000, then I can access
>> the
>> cluster with Java API from another machine.
>>
>> It’s so strange. Is there any difference between them?
>>
>>
>>
>> Thank you.
>>
>> Jeff zhang
>>
>
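Matt's point is easy to demonstrate with plain sockets, independent of
Hadoop (a minimal sketch; the port choice is arbitrary): a listener bound
to 127.0.0.1 is only reachable from the same machine, which is exactly why
a client on a second host can never find a service configured as
localhost:9000.

```python
import socket

# Bind a listener to the loopback address only. Port 0 asks the OS to
# pick a free port, so the demo does not collide with anything running.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
host, port = srv.getsockname()
print(host)  # 127.0.0.1 -- this socket is invisible to other machines

# A connection made over loopback, from the same machine, succeeds.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
connected = True
cli.close()
srv.close()
```

The same connect() attempted from another host on the LAN would fail with
"connection refused", because no socket is listening on the machine's
routable interface.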

You have to watch out for this fact when configuring multinode Hadoop.
For example, if your configuration uses
mapred.job.tracker=localhost:50030, you can run a job tracker on a
separate node, but the other hosts in the cluster will not be able to
find it, because they will look for it on localhost instead of where it
really is. The same applies to the secondary namenode: it will run
happily, repeating a log message like "not able to find namenode on
localhost:50030". Hopefully you notice this before you need the
snapshot.
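For reference, here is a sketch of what the relevant hadoop-site.xml
entries might look like with a routable address instead of localhost
(the IP 192.168.1.103 and the ports are taken from this thread; substitute
your own namenode/jobtracker host):

```xml
<configuration>
  <!-- Use an address other machines can route to, never localhost -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.1.103:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.1.103:50030</value>
  </property>
</configuration>
```

Every node in the cluster, including the one running the daemon itself,
should use the same routable value so the workers and the secondary
namenode all resolve to the right machine.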
