The specific discussion here was a transaction engine doing snapshot isolation 
using the HBase timestamps, while still staying as close to wall clock time as 
possible.
In that scenario, with ms resolution you can only do 1000 transactions/sec, and 
so you need to turn the timestamp into something that is not wall clock time as 
HBase understands it (and hence TTL, etc., will no longer work, nor will any 
other tools you've written that use the HBase timestamp).
1m transactions/sec is good enough (for now; I envision in a few years we'll 
be sitting here wondering how we could ever think that 1m transactions/sec was 
sufficient) :)
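The back-of-the-envelope numbers in this thread can be checked directly (plain arithmetic, not HBase code):

```python
# A signed 64-bit timestamp gives 2^63 positive values.
MAX = 2 ** 63
SECONDS_PER_YEAR = 3600 * 24 * 365.24

# Nanosecond resolution exhausts the range in ~292 years -- too short.
nanos_years = MAX / 10**9 / SECONDS_PER_YEAR

# Microsecond resolution lasts ~292,279 years -- plenty.
micros_years = MAX / 10**6 / SECONDS_PER_YEAR

# Distinct timestamps available per second of wall clock:
tx_per_sec_ms = 1_000        # ms resolution: at most 1000 distinct TS/sec
tx_per_sec_us = 1_000_000    # us resolution: ~1m distinct TS/sec
```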

-- Lars



________________________________
 From: Konstantin Boudnik <[email protected]>
To: [email protected]; lars hofhansl <[email protected]> 
Sent: Friday, May 23, 2014 5:58 PM
Subject: Re: Timestamp resolution
 

What's the purpose of nanos accuracy in the TS? I am trying to think of one,
but I don't know much about real production use cases.

Cos

P.S. Are you saying that a real concern is how usable HBase will be
nearly 300 years from now? ;) Or did I misread you?


On Fri, May 23, 2014 at 05:27PM, lars hofhansl wrote:
> We have discussed this in the past. It just came up again during an internal 
> discussion.
> Currently we simply store a Java timestamp (millisec since epoch), i.e. we 
> have ms resolution.
> 
> We do have 8 bytes for the TS, though. Not enough to store nanosecs (that
> would only cover 2^63/10^9/3600/24/365.24 = 292.279 years), but enough for
> microseconds (292279 years).
> Should we just store the TS in microseconds? We could do that right now (and
> just keep the ms resolution for now - i.e. the us part would always be 0 for
> now).
> Existing data must be in ms of course, so we'd grandfather that in, but new
> tables could store by default in us.
> 
> We'd need to make this configurable at both the column family level and the
> client level, so clients could still opt to see data in ms.
> 
> Comments? Too much to bite off?
> 
> -- Lars
> 
