On 3/7/2012 9:58 PM, Gopal wrote:
On 3/7/2012 9:11 PM, Gopal wrote:

OS: Linux (Debian Squeeze)

Hadoop Configuration:

Version: hadoop-0.20.205.0
IPs: 192.168.1.76 and 192.168.1.74
/etc/hosts:
master -> 192.168.1.76
slave  -> 192.168.1.74

Configuration files on both master & slave servers:
cat masters -> master
cat slaves -> slave

Hadoop comes up all right; I can list Hadoop directories from both master and slave.
=============================
HBase Configuration:
Version: hbase-0.92.0

master:
cat regionservers -> slave

slave:
cat regionservers -> slave
=============================
Is this setting correct?
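For reference, here is a sketch of what those topology files would contain for this two-node layout. The plural file names (masters, slaves, regionservers) and the conf-directory paths are my assumption of the Hadoop 0.20 / HBase 0.92 defaults; adjust to your actual install:

```shell
# Illustrative only: recreate the three topology files in a temp dir.
# Real paths would be $HADOOP_HOME/conf and $HBASE_HOME/conf.
conf=$(mktemp -d)

printf 'master\n' > "$conf/masters"        # SecondaryNameNode host
printf 'slave\n'  > "$conf/slaves"         # DataNode/TaskTracker hosts
printf 'slave\n'  > "$conf/regionservers"  # HBase RegionServer hosts

cat "$conf/masters" "$conf/slaves" "$conf/regionservers"
```

With this layout the master runs every daemon (as your jps output shows) and the slave runs only DataNode, TaskTracker, and HRegionServer.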

The issue is that HBase is unable to connect from the slave machine.

If I enter master & slave in the master's Hadoop configuration, I can launch the HBase shell on the master and list tables.

After starting Hadoop:

Master:
oracle@radha:~$ jps
11441 NameNode
11802 JobTracker
12023 Jps
11710 SecondaryNameNode
11931 TaskTracker
11573 DataNode
oracle@radha:~$

Slave:
oracle@misa:~/myhadoop/hadoop-0.20.205.0/conf$ jps
5766 Jps
5599 DataNode
5699 TaskTracker
oracle@misa:~/myhadoop/hadoop-0.20.205.0/conf$

After starting HBase:
Master:
oracle@radha:~/hbase-0.92.0/bin$ jps
11441 NameNode
12460 Jps
11802 JobTracker
12367 HMaster
12296 HQuorumPeer
11710 SecondaryNameNode
11931 TaskTracker
11573 DataNode
oracle@radha:~/hbase-0.92.0/bin$

Slave
oracle@misa:~/myhadoop/hadoop-0.20.205.0/conf$ jps
5914 HRegionServer
5599 DataNode
5833 HQuorumPeer
5699 TaskTracker
5972 Jps
oracle@misa:~/myhadoop/hadoop-0.20.205.0/conf$


Connecting from master :-
================== WORKS GREAT ==================
oracle@radha:~/hbase-0.92.0/bin$ ./hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.92.0, r1231986, Mon Jan 16 13:16:35 UTC 2012

hbase(main):001:0> list
TABLE
gett1
onemore
thisforme
3 row(s) in 0.6500 seconds

hbase(main):002:0>
================== WORKS GREAT ==================


Connecting from Slave :-
==================Does not work ==============
Just hangs
==================Does not work ==============

Question: Should the "conf/slaves" file on the datanode have a "master" entry?
Question: Should the regionservers file have only a "slave" entry, or something else?

I have checked the /etc/hosts file and they are good.


Both the ZooKeeper and region server logs from the slave are below:

ZooKeeper log:
a70003 with negotiated timeout 180000 for client /192.168.1.74:48713
2012-03-07 21:09:18,920 WARN org.apache.zookeeper.server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x135f00ebea70003, likely client has closed socket
        at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
        at java.lang.Thread.run(Thread.java:662)

Region server log:
2012-03-07 21:06:35,264 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Attempting connect to Master server at localhost,60000,1331172337407
2012-03-07 21:07:35,402 WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to connect to master. Retrying. Error was:
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:604)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:328)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:362)
        at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1026)
        at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:878)
        at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
        at $Proxy8.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:183)
        at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:303)
        at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:280)
        at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:332)
        at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:236)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getMaster(HRegionServer.java:1629)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:1666)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:634)
        at java.lang.Thread.run(Thread.java:662)
2012-03-07 21:07:38,408 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Attempting connect to Master server at localhost,60000,1331172337407


Any help is greatly appreciated.

Thanks

Resolved.

The issue was with /etc/hosts.

Here is the modified Host file:-

127.0.0.1       localhost
#MHG#127.0.0.1  localhost misa
#MHG#127.0.0.1  misa.absoftinc.com misa
#127.0.1.1      misa.absoftinc.com      misa

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
#Master / slave
192.168.1.76 master
192.168.1.74 slave
192.168.1.76 radha
192.168.1.74 misa
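To illustrate why the original file broke things, here is a sketch that mimics an /etc/hosts lookup with awk ("misa" is the slave's hostname, visible in the jps prompts above; the "bad" file reconstructs the commented-out lines). When misa was an alias on the 127.0.0.1 line, the regionserver resolved its own name to loopback and advertised itself to the master as "localhost" — exactly what the region server log shows:

```shell
# Mimic the resolver's /etc/hosts lookup: print the address of the first
# line whose alias list contains the queried hostname.
lookup() { awk -v h="$2" '{for (i=2;i<=NF;i++) if ($i==h) {print $1; exit}}' "$1"; }

bad=$(mktemp); fixed=$(mktemp)
cat > "$bad" <<'EOF'
127.0.0.1  localhost misa
192.168.1.74  slave
EOF
cat > "$fixed" <<'EOF'
127.0.0.1 localhost
192.168.1.76 master
192.168.1.74 slave
192.168.1.74 misa
EOF

lookup "$bad" misa    # -> 127.0.0.1 (regionserver advertises "localhost")
lookup "$fixed" misa  # -> 192.168.1.74 (reachable from the master)
```

The master then tried to connect back to "localhost,60000", i.e. its own loopback, and found nothing listening there, hence the Connection refused retries.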



Also, I was getting this error:


      NoServerForRegionException


I killed the HMaster process and restarted it.
Thanks



One question remains:

If I start HBase with just a NameNode on the master and no DataNode, it does not seem to work.

In other words: master -> NameNode (no DataNode);
HBase does not want to work nicely.

It just hangs.
