How big will the records grow? 50ms should be enough to cover the 95th percentile, if not more, of all requests, at least going by our experience at StumbleUpon (reads that come out of cache will be well under 50ms).
Is it all random lookups? Is there any affinity between requests, i.e. will the caching layer in HBase help? Have you tried it? Put up a little cluster, redirect some subset of your traffic, and see what kind of rates you can sustain per server.

St.Ack

On Wed, Oct 13, 2010 at 10:19 AM, Andre Reiter <[email protected]> wrote:
> hi everybody,
>
> I'm evaluating HBase for a new platform. The application is web based,
> and the challenge is to handle up to 1,000,000 requests per second. That
> part can be solved by load-balanced web servers.
>
> The problem is persisting user data in real time. Say every request
> appends some data to the user record, e.g. 100 bytes. The record would
> have an identifier placed in a cookie.
>
> The question is how to look up the user record for every request at
> runtime. Is it possible to get the user record, with the information
> about all former requests, in a short time, i.e. 50 ms?
>
> I have no experience with Hadoop/HBase yet, so any help would be very
> appreciated.
>
> thanks in advance
> andre
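
For concreteness, here is a minimal, untested sketch of one way to model this in the HBase Java client of that era (the table name "users", family "req", and class name are placeholders, not anything from the thread): one row per user keyed by the cookie id, one column per request, so the full history is a single Get.

import java.io.IOException;
import java.util.NavigableMap;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class UserRecord {
  // Table and family names are placeholders for the example.
  private static final byte[] FAMILY = Bytes.toBytes("req");
  private final HTable table;

  public UserRecord(Configuration conf) throws IOException {
    table = new HTable(conf, "users");
    // At very high write rates, table.setAutoFlush(false) plus periodic
    // flushCommits() batches puts client-side, at some durability cost.
  }

  // Append ~100 bytes for one request. The qualifier is the request
  // timestamp, so two requests in the same millisecond would collide;
  // a real schema would add a sequence number or similar tiebreaker.
  public void append(String cookieId, byte[] payload) throws IOException {
    Put put = new Put(Bytes.toBytes(cookieId));
    put.add(FAMILY, Bytes.toBytes(System.currentTimeMillis()), payload);
    table.put(put);
  }

  // Fetch the whole history for a user in one random read (one Get).
  public NavigableMap<byte[], byte[]> history(String cookieId)
      throws IOException {
    Get get = new Get(Bytes.toBytes(cookieId));
    get.addFamily(FAMILY);
    Result result = table.get(get);
    return result.getFamilyMap(FAMILY);
  }

  public static void main(String[] args) throws IOException {
    UserRecord dao = new UserRecord(HBaseConfiguration.create());
    dao.append("cookie-1234", Bytes.toBytes("100 bytes of request data"));
    System.out.println(dao.history("cookie-1234").size()
        + " requests on record");
  }
}

The design choice this illustrates: because all of a user's columns live in one row, the history comes back in a single Get, and a cache-warm read should stay well under the 50ms budget. The trade-off is that a very active user's row grows without bound, so old requests may eventually need to be rolled up into a summary.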
