Hi,

I think I have resolved this issue. Earlier, I was not able to ping any of
the three hostnames (my cluster has three nodes) from the node where I was
running the MR job.

I added the nameserver IP in /etc/resolv.conf, then added an entry for
honeywel-4a7632 in /etc/hosts. Now I can ping all the nodes.
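For reference, this is roughly how I confirmed resolution on each node (a sketch; `localhost` stands in for the real node names, substitute your own):

```shell
# Verify that each cluster hostname resolves via the configured
# nameserver/hosts file. Replace 'localhost' with your actual node
# hostnames (e.g. honeywel-4a7632).
for h in localhost; do
  addr=$(getent hosts "$h" | awk '{print $1; exit}')
  if [ -n "$addr" ]; then
    echo "$h resolves to $addr"
  else
    echo "$h does NOT resolve"
  fi
done
```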

But when I run the job, I get another error. Please find the job error
details below.

Does this mean the region server is down? How do I check this? Any
suggestions? Thanks.
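One way I could check locally (a sketch, assuming shell access to the node; `HRegionServer` is the standard HBase region server process name, and `echo status | hbase shell` gives a cluster-wide view if the hbase CLI is on the PATH):

```shell
# Check from the node itself whether a region server JVM is running.
# 'pgrep -f' matches against the full command line, so it finds the
# HRegionServer main class; 'jps' from the JDK works as well.
if pgrep -f HRegionServer >/dev/null 2>&1; then
  msg="HRegionServer process is running on this node"
else
  msg="no HRegionServer process found on this node"
fi
echo "$msg"
```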

12/07/04 12:19:19 INFO zookeeper.ZooKeeper: Client
environment:java.library.path=/usr/local/hadoop-1.0.2/libexec/../lib/native/Linux-amd64-64
12/07/04 12:19:19 INFO zookeeper.ZooKeeper: Client
environment:java.io.tmpdir=/tmp
12/07/04 12:19:19 INFO zookeeper.ZooKeeper: Client
environment:java.compiler=<NA>
12/07/04 12:19:19 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
12/07/04 12:19:19 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
12/07/04 12:19:19 INFO zookeeper.ZooKeeper: Client
environment:os.version=3.0.0-15-generic
12/07/04 12:19:19 INFO zookeeper.ZooKeeper: Client environment:user.name
=hadoop
12/07/04 12:19:19 INFO zookeeper.ZooKeeper: Client
environment:user.home=/home/hadoop
12/07/04 12:19:19 INFO zookeeper.ZooKeeper: Client
environment:user.dir=/usr/local/hadoop-1.0.2/bin
12/07/04 12:19:19 INFO zookeeper.ZooKeeper: Initiating client connection,
connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
12/07/04 12:19:19 INFO zookeeper.ClientCnxn: Opening socket connection to
server /127.0.0.1:2181
12/07/04 12:19:19 INFO client.ZooKeeperSaslClient: Client will not
SASL-authenticate because the default JAAS configuration section 'Client'
could not be found. If you are not using SASL, you may ignore this. On the
other hand, if you expected SASL to work, please fix your JAAS
configuration.
12/07/04 12:19:19 INFO zookeeper.ClientCnxn: Socket connection established
to localhost/127.0.0.1:2181, initiating session
12/07/04 12:19:19 INFO zookeeper.RecoverableZooKeeper: The identifier of
this process is [email protected]
12/07/04 12:19:19 INFO zookeeper.ClientCnxn: Session establishment complete
on server localhost/127.0.0.1:2181, sessionid = 0x2384e358dc30036,
negotiated timeout = 180000
12/07/04 12:19:19 INFO ipc.HBaseRPC: Server at honeywel-4a7632/
127.0.0.1:60020 could not be reached after 1 tries, giving up.
12/07/04 12:19:20 INFO ipc.HBaseRPC: Server at honeywel-4a7632/
127.0.0.1:60020 could not be reached after 1 tries, giving up.
12/07/04 12:19:21 INFO ipc.HBaseRPC: Server at honeywel-4a7632/
127.0.0.1:60020 could not be reached after 1 tries, giving up.
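One thing I notice in the log above: the region server address is reported as honeywel-4a7632/127.0.0.1:60020, i.e. the hostname resolves to the loopback address. If /etc/hosts on the server maps its own hostname to a 127.x address, HBase can publish that loopback address to clients, which then fail to reach it remotely. A quick check (a sketch, assuming shell access to the region server node):

```shell
# List any loopback mappings in /etc/hosts on the region server node.
# If the node's real hostname appears on a 127.x line, remap it to the
# node's LAN IP and restart HBase so it re-registers.
loopback_lines=$(grep -n '^127\.' /etc/hosts || true)
echo "$loopback_lines"
```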


On Wed, Jul 4, 2012 at 11:50 AM, AnandaVelMurugan Chandra Mohan <
[email protected]> wrote:

> Hi,
>
> I have a 3 node HBase cluster up and running. I could list and scan tables
> in HBase shell. I am trying to run HBase map-reduce job to load bulk data
> from TSV file. It fails with
>
> 12/07/04 11:42:11 INFO mapred.JobClient: Task Id :
> attempt_201207031124_0022_m_000002_0, Status : FAILED
> java.lang.RuntimeException: java.net.UnknownHostException: unknown host:
> honeywel-4a7632
>
> I had the same issue when I ran HBase client API code from my laptop. I
> added this hostname in my hosts file. Then I could run the client code and
> retrieve data.
>
> Still importtsv map reduce job alone fails. I added an entry for this
> hostname in /etc/hosts file. I even tried removing all host names from my
> Hbase and Hadoop cluster configuration files. Then I tried hadoop dfsadmin
> -refreshNodes.
>
> Any idea why this map-reduce job alone fails? Please let me know if you
> have seen this same error.
> --
> Regards,
> Anand
>



-- 
Regards,
Anand
