Try commenting out the "127.0.0.1 localhost" line in your /etc/hosts file on
each node, then restart the cluster and try again.
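For reference, here is a minimal sketch of what the relevant part of
/etc/hosts might look like after that change. The master/slave names and the
192.168.60.x addresses are taken from your file below; adjust them to match
each node's own hostname and IP:

```
# /etc/hosts (sketch -- adapt hostnames/IPs to your own cluster)

# 127.0.0.1   localhost.localdomain localhost   <- commented out as suggested

# Cluster nodes: every node must resolve these names to the real
# network IPs, not to a loopback address.
192.168.60.1  master
192.168.60.2  slave
```

The key point is that the hostname each daemon reports must resolve to an
address the other nodes can actually reach; a loopback mapping makes the
reducers try to fetch map output from themselves.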

Thanks,
Praveenesh

On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail <[email protected]>wrote:

> We are running Hadoop on VirtualBox. With a single node it works fine even
> for datasets larger than the default block size, but with a multinode
> cluster (2 nodes) we are facing problems.
> When the input dataset is smaller than the default block size (64 MB),
> everything works fine, but when the input dataset is larger, we get
> "Too many fetch failures" in the reduce phase.
> Here is the output:
> http://paste.ubuntu.com/707517/
>
> Judging from the comments there, many users have faced this problem, and
> different users suggest modifying the /etc/hosts file in different ways,
> but there is no definitive solution. We need the actual fix, which is why
> we are writing here.
>
> This is our /etc/hosts file:
> 192.168.60.147 humayun # Added by NetworkManager
> 127.0.0.1 localhost.localdomain localhost
> ::1 humayun localhost6.localdomain6 localhost6
> 127.0.1.1 humayun
>
> # The following lines are desirable for IPv6 capable hosts
> ::1 localhost ip6-localhost ip6-loopback
> fe00::0 ip6-localnet
> ff00::0 ip6-mcastprefix
> ff02::1 ip6-allnodes
> ff02::2 ip6-allrouters
> ff02::3 ip6-allhosts
>
> 192.168.60.1 master
> 192.168.60.2 slave
>
