Hi.

As this is currently just test data, the data I'm entering is basically the same 
for all rows, with the exception of the last 3 digits of a static number, which 
are rand(100,255).
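For reference, the generation scheme described above can be sketched like this (the base number is an invented placeholder, since the real static value isn't shown in the thread):

```python
import random

def make_row(base="978000000000"):
    # Replace the last 3 digits of the static number with rand(100,255).
    # randint(100, 255) is inclusive on both ends, so the suffix is
    # always exactly 3 digits and there are only 156 possible values.
    return base[:-3] + str(random.randint(100, 255))

rows = [make_row() for _ in range(1000)]
# With only 156 possible suffixes, 1000 rows necessarily contain duplicates.
print(len(set(rows)))
```

The point being that, across 1000 rows, many rows are byte-for-byte identical.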

I think the timestamps are being set by HBase (I can't see anything in the 
RegexHbaseEventSerializer that sets the time in the Put call). All the machines 
are running ntpd, so the time should be fairly accurate, though they are VMs so 
they will probably be a second or two out.
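One possibility worth noting (an assumption, not confirmed from the thread): HBase stores one cell per (row key, column, timestamp) triple, and with the default of one version the later Put silently wins. So if the row key is derived from that near-static value and two events with the same random suffix land in the same millisecond, they collapse into a single cell, which could account for a small shortfall like 997 of 1000. A minimal simulation of that worst case, with all puts sharing one timestamp:

```python
import random

def simulate_puts(n=1000, ts=1395828000000):
    # Model HBase's cell store: one value per (rowkey, column, timestamp).
    # All n puts arrive with the same millisecond timestamp, and row keys
    # differ only in a rand(100,255) suffix, so collisions overwrite.
    cells = {}
    for i in range(n):
        rowkey = "row" + str(random.randint(100, 255))
        cells[(rowkey, "cf:data", ts)] = "value-%d" % i  # later put wins
    return cells

cells = simulate_puts()
print(len(cells))  # far fewer than 1000 distinct cells survive
```

This exaggerates the loss because every put here shares one timestamp; in practice only events that collide on both the key and the millisecond would disappear, so the observed loss would be much smaller.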

I haven't tried a basic load into HBase without Flume yet; I'll try that tomorrow.

-Ian 

On Wednesday 26 Mar 2014 09:37:09 Stack wrote:
> On Wed, Mar 26, 2014 at 8:06 AM, Ian Brooks <[email protected]> wrote:
> 
> > Hi,
> >
> > I'm using hbase version 0.96.1.1-hadoop2 and there are 16 regions across 8
> > servers.
> >
> > I get similar results at lower numbers as well; a run of 1000 rows into
> > flume results in 997 entries in hbase.
> >
> >
> Anything particular about these three missing items?  Can you figure what
> they are?
> 
> What about timestamps?  Are you setting them or letting hbase set them?
>  Clocks are relatively good on client and servers?
> 
> 
> 
> > I have tried setting flume to write to a file and that correctly puts all
> > the rows into the file, so flume is receiving all the data correctly and
> > can write it out without any problems.
> >
> 
> So, all 1k items are in this file?
> 
> If 1k only, can you log each entry being inserted?  Can you insert the 1k
> w/o flume in the mix?
> 
> Thanks,
> St.Ack
