We solved this partially by converting increments to puts and aggregating them in preCompact. I suspect that if you bulk-load an HFile containing puts that each represent a delta to a column, and merge them during compaction, it could work.
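To illustrate the idea: during compaction, all the delta puts written for one counter cell are collapsed into a single absolute value. This is only a sketch of the merge logic a preCompact scanner would perform, with made-up names (`DeltaPut`, `mergeDeltas`); it is not the actual HBase coprocessor API.

```java
import java.util.List;

public class CounterCompaction {

    // A hypothetical delta put for a single counter column:
    // (timestamp, delta) rather than an absolute value.
    record DeltaPut(long timestamp, long delta) {}

    // Collapse all delta puts for one counter cell into one value,
    // the way a compaction-time scanner could emit a single merged cell
    // instead of the stack of deltas.
    static long mergeDeltas(long baseValue, List<DeltaPut> deltas) {
        long total = baseValue;
        for (DeltaPut d : deltas) {
            total += d.delta();
        }
        return total;
    }

    public static void main(String[] args) {
        // Three bulk-loaded deltas for the same counter: +5, -2, +10.
        List<DeltaPut> deltas = List.of(
                new DeltaPut(1L, 5),
                new DeltaPut(2L, -2),
                new DeltaPut(3L, 10));
        System.out.println(mergeDeltas(0L, deltas)); // 13
    }
}
```

Readers then need to sum any not-yet-compacted deltas at read time (e.g. in a postGet hook or client-side), since the collapse only happens when compaction runs.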
On Monday, February 18, 2013, Andrew Purtell wrote:

> > Is there a way to increment counters in HBase via bulk upload?
>
> I thought about maybe doing this once, as
> https://issues.apache.org/jira/browse/HBASE-3936, but we decided to resolve
> it as maybe something to try later if there was ever a compelling need. I
> wonder if doing the in memory merge of most recent value with increments
> found in HFiles submitted for bulk import would perform any better than
> just incrementing with the client API (perhaps with some level of
> batching). Seems a complicated undertaking with unclear benefit. Would
> still be interesting to try as a experiment someday though.
>
> On Sat, Feb 16, 2013 at 8:39 AM, Ashish Nigam <[email protected]> wrote:
>
> > Hi,
> > Is there a way to increment counters in HBase via bulk upload?
> > At present, I am storing counters in a sequence file in HDFS. Then I use a
> > mapper to read the file and increment counters in HBase via Java API calls.
> > I am assuming that if there is a way to increment counters via bulk upload,
> > it will be more efficient.
> >
> > Thanks
> > Ashish
>
> --
> Best regards,
>
>    - Andy
>
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)
