+adding subject
On Thu, Apr 17, 2014 at 5:47 PM, Tarang Dawer <[email protected]> wrote:

> Hi All
>
> I am using HBase version 0.96.1-hadoop1 with Hadoop version 1.1.1.
>
> I am writing to HBase in batches of 100, and the flow is such that if the
> table does not exist, it gets created.
> The following are snippets of the code:
>
>     conf = HBaseConfiguration.create();
>     conf.set("hbase.zookeeper.quorum", "192.168.145.144");
>     conf.setInt("hbase.zookeeper.property.clientPort", 2181);
>     conf.set("hbase.defaults.for.version.skip", "true");
>
>     // Create the connection for HBase
>     hConnection = HConnectionManager.createConnection(conf);
>
>     HTableInterface hTableInterface = hConnection.getTable(tableName);
>
>     try {
>         hTableInterface.put(toPersist);
>     } catch (IOException ioe) {
>         // THROW EXCEPTION
>     }
>
> It works fine; however, if the table gets deleted manually from the shell
> while my program is in the process of writing data to HBase (via the HBase
> native API), I get an exception:
>
> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException:
> Failed 100 actions: ns_1:oraclecompanydata_16177: 100 times,
>     at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:187)
>     at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:171)
>     at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:882)
>     at org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:940)
>     at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1197)
>     at org.apache.hadoop.hbase.client.HTable.put(HTable.java:880)
>
> Could anybody please tell me why I am getting this exception?
> It seems to me that the HConnection object is not in sync with the HBase
> server: I get the HTableInterface instance for the table, but on the
> persist call I get the exception.
> Am I missing some configuration property which will solve the problem, or
> is it something else?
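The connection object itself is not the problem: the HBase client caches region locations, so when the table is dropped mid-write the batch keeps retrying against regions that no longer exist until the retries are exhausted. There is no configuration property that resyncs a deleted table; the usual approach is to handle it in the writer. Below is a minimal sketch against the 0.96 client API quoted above; the class name, the column family "cf", and the single-retry policy are illustrative assumptions, not tested code.

```java
// Sketch only: assumes the HBase 0.96 client API used in the quoted snippet.
// "ResilientWriter" and the column family "cf" are placeholders.
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;

public class ResilientWriter {

    private final Configuration conf;
    private final HConnection hConnection;

    public ResilientWriter(Configuration conf) throws IOException {
        this.conf = conf;
        this.hConnection = HConnectionManager.createConnection(conf);
    }

    /** Recreate the table if it was dropped out from under the writer. */
    private void ensureTable(String tableName) throws IOException {
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
            if (!admin.tableExists(tableName)) {
                HTableDescriptor desc =
                        new HTableDescriptor(TableName.valueOf(tableName));
                desc.addFamily(new HColumnDescriptor("cf")); // placeholder family
                admin.createTable(desc);
            }
        } finally {
            admin.close();
        }
    }

    /** Write the batch; if the table vanished mid-write, recreate it and retry once. */
    public void putBatch(String tableName, List<Put> toPersist) throws IOException {
        HTableInterface table = hConnection.getTable(tableName);
        try {
            table.put(toPersist);
        } catch (RetriesExhaustedWithDetailsException e) {
            // The batch failed against stale region locations; re-check the
            // table, recreate it if missing, and replay the batch. Puts are
            // idempotent, so replaying rows that did land is harmless.
            ensureTable(tableName);
            table.put(toPersist);
        } finally {
            table.close();
        }
    }
}
```

Catching RetriesExhaustedWithDetailsException (rather than plain IOException) is what lets the writer distinguish "table/region gone" from other I/O failures; it also carries the per-row causes via getCause(i) if you need to inspect them.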
