It's more in the vein of
https://issues.apache.org/jira/browse/HBASE-3755 and
https://issues.apache.org/jira/browse/HBASE-3771

Basically 0.90 has a regression in how it handles ZooKeeper
connections: you have to be very careful not to open more than 30 per
machine (each new Configuration is one new ZK connection). Raising
your ZooKeeper max connections config should get rid of your issue,
since you only hit it occasionally.
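For reference, the cap in question is hbase.zookeeper.property.maxClientCnxns, which HBase passes through to ZooKeeper's maxClientCnxns (per-host client connection limit, default 30). A minimal hbase-site.xml fragment to raise it might look like the following; the value 300 is just an illustrative choice, not a tuned recommendation:

```xml
<!-- hbase-site.xml fragment: raise the per-host ZooKeeper client
     connection limit. 300 is an illustrative value, not a recommendation. -->
<property>
  <name>hbase.zookeeper.property.maxClientCnxns</name>
  <value>300</value>
</property>
```

That only buys headroom, though; the cleaner fix on the client side is to create one Configuration (and thus one ZK connection) and reuse it across jobs and HTable instances instead of calling HBaseConfiguration.create() per job.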

J-D

On Tue, Apr 12, 2011 at 7:59 AM, Venkatesh <[email protected]> wrote:
>
>
>  I get this occasionally (not all the time), after upgrading from 0.20.6 to 0.90.2.
> Is this issue the same as this JIRA?
> https://issues.apache.org/jira/browse/HBASE-3578
>
> I'm using HBaseConfiguration.create() & setting that in the job.
> thx
> v
>
>
>  2011-04-12 02:13:06,870 ERROR Timer-0 org.apache.hadoop.hbase.mapreduce.TableInputFormat -
> org.apache.hadoop.hbase.ZooKeeperConnectionException: org.apache.hadoop.hbase.ZooKeeperConnectionException: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1000)
>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:303)
>        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.&lt;init&gt;(HConnectionManager.java:294)
>        at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:156)
>        at org.apache.hadoop.hbase.client.HTable.&lt;init&gt;(HTable.java:167)
>        at org.apache.hadoop.hbase.client.HTable.&lt;init&gt;(HTable.java:145)
>        at org.apache.hadoop.hbase.mapreduce.TableInputFormat.setConf(TableInputFormat.java:91)
>        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
>        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
>        at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:882)
>        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
>        at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
>        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:448)
>
