Hi,

I'm using HBase version 0.96.1.1-hadoop2 and there are 16 regions across 8 
servers.

I get similar results at lower numbers as well; a run of 1000 rows into Flume 
results in 997 entries in HBase.

I have tried setting Flume to write to a file, and that correctly puts all the 
rows into the file, so Flume is receiving all the data and can write it out 
without any problems.
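Since the Flume file output contains every row, one way to narrow down the loss is to diff the row keys in that file against a full scan of the HBase table. Below is a minimal sketch of the comparison step, assuming the keys have already been collected into two lists; all names here are hypothetical and not from the thread.

```python
# Hypothetical sketch: find which row keys Flume emitted but HBase never stored.
# Assumes flume_keys and hbase_keys were collected separately (e.g. from the
# Flume file sink and an HBase table scan).

def find_missing_rows(flume_keys, hbase_keys):
    """Return row keys present in the Flume output but absent from HBase."""
    return sorted(set(flume_keys) - set(hbase_keys))

# Toy data: 5 rows sent, 4 rows landed.
sent = ["row-%05d" % i for i in range(5)]
stored = ["row-00000", "row-00001", "row-00002", "row-00004"]

print(find_missing_rows(sent, stored))  # ['row-00003']
```

Inspecting the missing keys (rather than just the counts) can reveal whether the lost rows share a region, a key prefix, or a timestamp window.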

-Ian

On Wednesday 26 Mar 2014 07:56:20 Ted Yu wrote:
> What HBase version are you using ?
> 
> How many regions do you have for the underlying table ?
> 
> If you lower the total number of rows, is this issue occurring ?
> 
> Thanks
> 
> 
> On Wed, Mar 26, 2014 at 6:52 AM, Ian Brooks <[email protected]> wrote:
> 
> > Hi,
> >
> > I have a setup where data is fed into HBase using Flume.  When performing
> > inserts in blocks of 1 million, I have noticed that consistently fewer
> > than 1 million rows end up in the database, usually around 33k rows
> > short.
> >
> > I'm using the Flume RegexHbase sink, and neither the Flume logs nor the
> > HBase logs show any errors.
> >
> > Any idea how best to track down how the rows are going missing?
> >
> > --
> > -Ian Brooks
> >
> >
-- 
-Ian Brooks
Senior server administrator - Sensewhere