Hi,

the issue was that my HBase client library did not match the server version. Desperate ;) as I was, I switched the log level to debug and saw messages telling me that some node in ZooKeeper was missing, but that this wouldn't be an error (?)... That looked suspicious to me, and then I found a note that the client<->server protocol had changed... at that point I realized what the problem might be.
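Such a mismatch can often be caught before deployment by comparing version strings. A minimal sketch of such a check - the "same major.minor means likely compatible" rule and the version values are illustrative assumptions for this sketch, not HBase's official compatibility policy:

```java
// Illustrative compatibility check between a client jar and the server it
// talks to. The "same major.minor => likely compatible" rule is an
// assumption for this sketch, not HBase's actual compatibility policy.
public class VersionCheck {

    // "0.98.4" -> "0.98", "1.0.1" -> "1.0"
    static String majorMinor(String version) {
        String[] parts = version.split("\\.");
        return parts[0] + "." + parts[1];
    }

    static boolean likelyCompatible(String clientVersion, String serverVersion) {
        return majorMinor(clientVersion).equals(majorMinor(serverVersion));
    }

    public static void main(String[] args) {
        String client = "0.94.6"; // hypothetical version of the client jar
        String server = "0.98.4"; // hypothetical version reported by the cluster
        if (!likelyCompatible(client, server)) {
            System.out.println("WARNING: client " + client
                    + " may not speak the same wire protocol as server " + server);
        }
    }
}
```

Wiring a check like this into a startup log line would surface the incompatibility immediately instead of leaving it buried in debug output.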
Maybe such incompatibilities could be expressed more clearly in the log, I don't know. Anyway, all is running fine now :)

BR
Marco

2014-12-18 20:36 GMT+01:00 Marco <[email protected]>:
> Hi Wilm,
>
> I also think I'll try it with an HBase standalone install. Also, via
> the logging of Apache Phoenix, I see that this framework can use the
> region server without any issues, and it's the same hbase-site.xml.
>
> Nevertheless, thank you very much for your efforts!!!
>
> Best wishes,
> Marco
>
> 2014-12-18 17:43 GMT+01:00 Wilm Schumacher <[email protected]>:
>> Hi,
>>
>> I just took a look into the HDP 2.2 sandbox, and unfortunately it was a
>> waste of time and I just grew older ;).
>>
>> At the first boot, without me touching the configs, ZooKeeper
>> threw errors at startup and got killed (couldn't connect). Ignoring
>> this, I started HBase, which HDP recommends doing by hand O_o
>> (starting every service one by one). And the shell worked ... at first.
>>
>> Then I wanted to get it working properly and used a network bridge to
>> ssh into the box (to copy jars for test programs etc.). Now the HBase
>> services didn't want to start, because they couldn't access the log
>> files anymore (missing permissions) *scratches head*. I gave
>> permissions to the hbase user ... that didn't work either. HMaster got
>> killed immediately. After switching to a host-only network ... same
>> problems. I didn't look further into it, as I only wanted to take a
>> quick look at the product.
>>
>> After reimporting the VM and a second try ... same problems. So I
>> couldn't reproduce your problem, because I never even got as far as
>> your problem :(.
>>
>> As this is a standard VM that thousands of people use, the errors are
>> most likely in my install of the VM, but it seems to be very easy to
>> make mistakes and end up with an unusable install :/. Sorry that this
>> didn't help. But I think you should use a standard HBase standalone
>> install to test HBase for your purposes.
>>
>> Best wishes and good luck
>>
>> Wilm
>>
>> On 16.12.2014 at 15:19, Marco wrote:
>>> Hi,
>>>
>>> HBase is installed correctly and working (the hbase shell works fine).
>>>
>>> But I'm not able to use the Java API to connect to an existing HBase table:
>>>
>>> <<<
>>> val conf = HBaseConfiguration.create()
>>>
>>> conf.clear()
>>>
>>> conf.set("hbase.zookeeper.quorum", "ip:2181")
>>> conf.set("hbase.zookeeper.property.clientPort", "2181")
>>> conf.set("hbase.zookeeper.dns.nameserver", "ip")
>>> conf.set("hbase.regionserver.port", "60020")
>>> conf.set("hbase.master", "ip:60000")
>>>
>>> val hTable = new HTable(conf, "truck_events")
>>>
>>> The code is actually Scala, but I think it's clear what I'm trying to
>>> achieve. I've also tried using hbase-site.xml instead of configuring
>>> it manually - but the result is the same.
>>>
>>> As response I got:
>>>
>>> 14/12/16 15:10:05 INFO zookeeper.ZooKeeper: Initiating client
>>> connection, connectString=ip:2181 sessionTimeout=30000
>>> watcher=hconnection
>>> 14/12/16 15:10:10 INFO zookeeper.ClientCnxn: Opening socket connection
>>> to server ip:2181. Will not attempt to authenticate using SASL
>>> (unknown error)
>>> 14/12/16 15:10:10 INFO zookeeper.ClientCnxn: Socket connection
>>> established to ip:2181, initiating session
>>> 14/12/16 15:10:10 INFO zookeeper.ClientCnxn: Session establishment
>>> complete on server ip:2181, sessionid = 0x14a53583e080010, negotiated
>>> timeout = 30000
>>>
>>> and then finally, after a couple of minutes (the HTable constructor
>>> call hangs):
>>>
>>> [error] (run-main-0)
>>> org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to
>>> find region for truck_events,,99999999999999 after 14 tries.
>>> org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to
>>> find region for truck_events,,99999999999999 after 14 tries.
>>>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1092)
>>>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:997)
>>>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1099)
>>>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1001)
>>>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:958)
>>>         at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:251)
>>>         at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:155)
>>>         at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:129)
>>>         at HbaseConnector$.main(HbaseConnector.scala:18)
>>>         at HbaseConnector.main(HbaseConnector.scala)
>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>         at java.lang.reflect.Method.invoke(Method.java:606)
>>> [trace] Stack trace suppressed: run last compile:run for the full output.
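[The "after 14 tries" and the multi-minute hang are the client's region-lookup retry loop: each failed lookup is followed by a pause that grows with the attempt number. A generic sketch of how such escalating pauses add up - the base pause and the multiplier table here are illustrative assumptions, not HBase's actual defaults:]

```java
// Rough model of a client retry loop with an escalating pause between
// attempts. The base pause and the backoff multipliers are illustrative
// assumptions for this sketch, not HBase's configured values.
public class RetryBackoff {
    // Multiplier applied to the base pause on the i-th retry (capped).
    static final int[] BACKOFF = {1, 1, 2, 3, 5, 10, 20, 40, 100, 100};

    static long pauseForAttempt(long basePauseMs, int attempt) {
        int idx = Math.min(attempt, BACKOFF.length - 1);
        return basePauseMs * BACKOFF[idx];
    }

    static long totalWaitMs(long basePauseMs, int retries) {
        long total = 0;
        for (int i = 0; i < retries; i++) {
            total += pauseForAttempt(basePauseMs, i);
        }
        return total;
    }

    public static void main(String[] args) {
        // With a 1 s base pause and 14 retries, the pauses alone add up to
        // minutes - before counting the per-attempt connection timeouts.
        System.out.println(totalWaitMs(1000, 14) + " ms");
    }
}
```

[This is why lowering the retry count or the base pause makes misconfiguration errors surface much faster during debugging.]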
>>> 14/12/16 13:22:15 ERROR zookeeper.ClientCnxn: Event thread exiting due
>>> to interruption
>>> java.lang.InterruptedException
>>>         at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
>>>         at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2052)
>>>         at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>         at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:491)
>>> 14/12/16 13:22:15 INFO zookeeper.ClientCnxn: EventThread shut down
>>> java.lang.RuntimeException: Nonzero exit code: 1
>>>         at scala.sys.package$.error(package.scala:27)
>>> [trace] Stack trace suppressed: run last compile:run for the full output.
>>> [error] (compile:run) Nonzero exit code: 1
>>> [error] Total time: 1106 s, completed Dec 16, 2014 1:22:15 PM
>>>
>>> In the RegionServer log, I've seen this:
>>>
>>> 2014-12-16 13:31:34,087 DEBUG [RpcServer.listener,port=60020]
>>> ipc.RpcServer: RpcServer.listener,port=60020: connection from
>>> 10.97.68.159:41772; # active connections: 1
>>> 2014-12-16 13:33:34,220 DEBUG [RpcServer.reader=1,port=60020]
>>> ipc.RpcServer: RpcServer.listener,port=60020: DISCONNECTING client
>>> 10.97.68.159:41772 because read count=-1. Number of active
>>> connections: 1
>>> 2014-12-16 13:36:26,988 DEBUG [LruStats #0] hfile.LruBlockCache:
>>> Total=430.02 KB, free=401.18 MB, max=401.60 MB, blockCount=4,
>>> accesses=28, hits=24, hitRatio=85.71%, cachingAccesses=28,
>>> cachingHits=24, cachingHitsRatio=85.71%, evictions=269, evicted=0,
>>> evictedPerRun=0.0
>>> 2014-12-16 13:36:34,017 DEBUG [RpcServer.listener,port=60020]
>>> ipc.RpcServer: RpcServer.listener,port=60020: connection from
>>> 10.97.68.159:42728; # active connections: 1
>>> 2014-12-16 13:38:34,112 DEBUG [RpcServer.reader=2,port=60020]
>>> ipc.RpcServer: RpcServer.listener,port=60020: DISCONNECTING client
>>> 10.97.68.159:42728 because read count=-1. Number of active
>>> connections: 1
>>> 2014-12-16 13:41:26,989 DEBUG [LruStats #0] hfile.LruBlockCache:
>>> Total=430.02 KB, free=401.18 MB, max=401.60 MB, blockCount=4,
>>> accesses=30, hits=26, hitRatio=86.67%, cachingAccesses=30,
>>> cachingHits=26, cachingHitsRatio=86.67%, evictions=299, evicted=0,
>>> evictedPerRun=0.0
>>>
>>> So it connects and then disconnects with read count -1.
>>>
>>> Can anybody help me find the root cause of this issue? I've tried
>>> restarting HBase and so on, but with no effect. Hive is also working
>>> fine; it's just my code that isn't :(
>>>
>>> Thanks a lot,
>>> Marco
>>
>
>
> --
> Best regards,
> Marco

--
Best regards,
Marco
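[A note on the "read count=-1" lines in the RegionServer log above: in Java, a read on a stream or socket returns -1 when the peer has closed the connection (end of stream), so the server is reporting that the client hung up mid-handshake - consistent with a client that cannot speak the server's protocol. A self-contained sketch of the convention:]

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Demonstrates the -1 "read count" convention: a Java stream returns -1
// from read() once the other side has nothing more to send (end of
// stream). A server that sees -1 typically logs it and drops the
// connection, matching the "DISCONNECTING client ... because read
// count=-1" lines in the RegionServer log.
public class ReadCountDemo {
    static int finalReadCount(InputStream in) throws IOException {
        byte[] buf = new byte[8];
        while (in.read(buf) > 0) {
            // consume whatever the peer sent before closing
        }
        return in.read(buf); // end of stream: the read count is now -1
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a socket whose peer sent 3 bytes and then closed.
        InputStream closedByPeer = new ByteArrayInputStream(new byte[] {1, 2, 3});
        System.out.println("read count: " + finalReadCount(closedByPeer)); // -1
    }
}
```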
