Your configuration for the TaskTracker and JobTracker might be using external hostnames. Essentially, every hostname in your configuration files should resolve to an internal IP.
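For example, a sketch of the relevant hadoop-site.xml properties for Hadoop 0.18; the ports shown here are common defaults and are assumptions, and "master" must resolve to the internal IP in /etc/hosts on every node:

```xml
<!-- hadoop-site.xml (sketch): point the master daemons at names that
     resolve to internal IPs via /etc/hosts on every node -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- "master" resolves to 10.1.0.56 internally -->
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>
```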

Raghu.

Genady wrote:
Hi,

We're running a four-node Hadoop 0.18.2 / HBase 0.18.1 cluster on CentOS Linux.
The following mappings were added to /etc/hosts to make Hadoop use the
internal IPs (1 Gb Ethernet card):

10.1.0.56       master.hostname  master
10.1.0.55       slave1.hostname  slave1
10.1.0.53       slave3.hostname  slave3
10.1.0.50       slave2.hostname  slave2
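One way to sanity-check entries like these is to confirm that every name maps to a private (RFC 1918) address, since anything resolving to a public IP will pull traffic onto the external interface. A small sketch in Python; the sample entries are the ones from the post:

```python
import ipaddress

# Sample /etc/hosts entries from the cluster (as posted above).
HOSTS = """\
10.1.0.56 master.hostname master
10.1.0.55 slave1.hostname slave1
10.1.0.53 slave3.hostname slave3
10.1.0.50 slave2.hostname slave2
"""

def internal_map(hosts_text):
    """Return {name: ip} for every name whose IP is private (RFC 1918)."""
    mapping = {}
    for line in hosts_text.splitlines():
        fields = line.split()
        # Skip blank lines and comments.
        if not fields or fields[0].startswith("#"):
            continue
        ip, names = fields[0], fields[1:]
        if ipaddress.ip_address(ip).is_private:
            for name in names:
                mapping[name] = ip
    return mapping

print(internal_map(HOSTS)["master"])  # 10.1.0.56
```

Any hostname that appears in the Hadoop configuration but not in this map is a candidate for resolving externally.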

If I do a local copy into Hadoop DFS it works perfectly: Hadoop uses only the
internal IPs to copy the data. But when a Map-Reduce job starts, it's clear
(from monitoring the network cards) that Hadoop is using the external IPs in
addition to the internal ones, at fairly high rates (~5 Mb/s on each node).
Is there something I should add to the Hadoop configuration to force it to
use only the internal (LAN) IPs?

Genady Gilin


