replace

<dependency org="org.apache.gora" name="gora-hbase" rev="0.2.1"
    conf="*->default" />

with

<dependency org="org.apache.gora" name="gora-hbase" rev="0.2.1"
    conf="*->default">
  <exclude org="org.apache.hbase" name="hbase" />
</dependency>
<dependency org="org.apache.hbase" name="hbase" rev="0.90.6"
    conf="*->default" />

hopefully something like this will work.
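Alternatively, Ivy 2.x has a dependency-mediation mechanism that avoids the exclude/re-add pair entirely; a sketch (untested, assuming it goes in the dependencies section of Nutch's ivy/ivy.xml):

```xml
<dependencies>
  <!-- Force every transitive request for hbase to resolve to 0.90.6.
       <override> is Ivy 2.x dependency version mediation. -->
  <override org="org.apache.hbase" module="hbase" rev="0.90.6" />
  <dependency org="org.apache.gora" name="gora-hbase" rev="0.2.1"
      conf="*->default" />
</dependencies>
```

Either way, re-run the resolve and check the ant report again to confirm 0.90.6 is what actually lands on the classpath.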

hth

On Thu, Feb 21, 2013 at 11:00 AM, kaveh minooie <[email protected]> wrote:

> thanks Lewis.
> I am sorry but I have to bother you with this. Could you tell me how I can
> have the Nutch ant build fetch 0.90.6 instead of 0.90.4? According to the ant
> report, it is being fetched transitively by gora 0.2.1. I am actually using
> 0.90.6 for my HBase, but I don't know how to modify the ivy.xml file to
> accomplish that.
>
> thanks,
>
> On 02/21/2013 10:39 AM, Lewis John Mcgibbney wrote:
>
>> http://s.apache.org/WbG (sorry for the ridiculous size of font)
>> hth
>>
>> On Thu, Feb 21, 2013 at 10:31 AM, kaveh minooie <[email protected]> wrote:
>>
>>
>>> Has anyone encountered this error before:
>>>
>>> org.apache.gora.util.GoraException: org.apache.hadoop.hbase.ZooKeeperConnectionException:
>>> HBase is able to connect to ZooKeeper but the connection closes
>>> immediately. This could be a sign that the server has too many connections
>>> (30 is the default). Consider inspecting your ZK server logs for that error
>>> and then make sure you are reusing HBaseConfiguration as often as you can.
>>> See HTable's javadoc for more information.
>>>         at org.apache.gora.store.DataStoreFactory.createDataStore(DataStoreFactory.java:167)
>>>         at org.apache.gora.store.DataStoreFactory.createDataStore(DataStoreFactory.java:118)
>>>         at org.apache.gora.mapreduce.GoraOutputFormat.getRecordWriter(GoraOutputFormat.java:88)
>>>         at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:628)
>>>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:753)
>>>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
>>>         at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>         at javax.security.auth.Subject.doAs(Unknown Source)
>>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
>>>         at org.apache.hadoop.mapred.Child.main(Child.java:249)
>>> Caused by: org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase
>>> is able to connect to ZooKeeper but the connection closes immediately. This
>>> could be a sign that the server has too many connections (30 is the
>>> default). Consider inspecting your ZK server logs for that error and then
>>> make sure you are reusing HBaseConfiguration as often as you can. See
>>> HTable's javadoc for more information.
>>>         at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:155)
>>>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1002)
>>>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:304)
>>>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:295)
>>>         at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:157)
>>>         at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:90)
>>>         at org.apache.gora.hbase.store.HBaseStore.initialize(HBaseStore.java:108)
>>>         at org.apache.gora.store.DataStoreFactory.initializeDataStore(DataStoreFactory.java:102)
>>>         at org.apache.gora.store.DataStoreFactory.createDataStore(DataStoreFactory.java:161)
>>>
>>>
>>> What configuration in ZooKeeper is it talking about?
>>> For the record, I think this is the error in the ZooKeeper log file that
>>> it is referring to (it is the only error in my log file):
>>>
>>>
>>> 2013-02-21 02:57:43,099 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@349] - caught end of stream exception
>>> EndOfStreamException: Unable to read additional data from client sessionid
>>> 0x13cfc5527be0017, likely client has closed socket
>>>         at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
>>>         at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
>>>         at java.lang.Thread.run(Unknown Source)
>>> 2013-02-21 02:57:43,100 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:42306 which had
>>> sessionid 0x13cfc5527be0017
>>>
>>> I am also concerned about this happening on the loopback interface; it
>>> doesn't look right to me.
>>>
>>> Oh, and btw this is on a Hadoop cluster with 10 nodes.
>>> --
>>> Kaveh Minooie
>>>
>>> www.plutoz.com
>>>
>>>
>>
>>
>>
> --
> Kaveh Minooie
>
> www.plutoz.com
>



-- 
*Lewis*
