hi Stack,

thanks for your reply


On 10/13/10 9:52 PM, "Stack" <[email protected]> wrote:

> How big will the records grow?
assuming 30,000 requests per second and roughly 100 bytes appended to the user
record per request, one record should not grow beyond about 10 kBytes; old
"requests" can actually be dropped from the record to keep it small
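
just to make that concrete, here is a rough sketch of what the per-request
write could look like with the plain java client; the table name "users", the
family "req" and the reverse-timestamp qualifier scheme are only placeholders
i made up for illustration, not a final design:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class RequestWriter {
  // Appends one ~100 byte request blob to the user's row. Each request
  // becomes its own cell under a reverse-timestamp qualifier, so the
  // newest requests sort first and old ones can be aged out with a TTL
  // on the family instead of rewriting the whole record.
  public static void appendRequest(String userId, byte[] requestBytes)
      throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // in a real setup the HTable would be reused (or come from an
    // HTablePool) rather than being opened per call
    HTable table = new HTable(conf, "users");            // placeholder table name
    try {
      Put put = new Put(Bytes.toBytes(userId));          // id from the cookie
      put.add(Bytes.toBytes("req"),                      // placeholder family
              Bytes.toBytes(Long.MAX_VALUE - System.currentTimeMillis()),
              requestBytes);                             // ~100 bytes of payload
      table.put(put);
    } finally {
      table.close();
    }
  }
}

with every request stored as its own cell, keeping the record small is then
a matter of table settings rather than application logic.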

> 
> 50ms should be enough to cover 95th percentile if not more of all
> requests, at least going by our experience at stumbleupon (servings
> that come out of cache will be well under 50ms).
> 
> Its all random lookups?  Any affinity between requests; i.e. will the
> caching layer in hbase help?

caching is definitely an option. if a user makes a request to our webservers,
it's very likely he makes more than just one
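
if the access pattern really is that clustered, keeping the family in the
block cache should pay off. a rough admin-side sketch, using the same made-up
"users"/"req" names; the TTL and version numbers are also just examples, not
recommendations:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateUsersTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    // family holding the per-request cells
    HColumnDescriptor req = new HColumnDescriptor("req");
    req.setInMemory(true);             // give its blocks cache priority
    req.setMaxVersions(1);             // each request has its own qualifier anyway
    req.setTimeToLive(7 * 24 * 3600);  // seconds; requests older than a week fall away

    HTableDescriptor users = new HTableDescriptor("users");
    users.addFamily(req);
    admin.createTable(users);
  }
}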

> 
> Have you tried it?  Put up a little cluster and redirect some subset
> of your traffic and see what kinda rates you can sustain per server.

you are right, the best way is actually to make test runs... :-)
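
along those lines, even a tiny driver that fires random gets at a pre-loaded
table and prints latency percentiles would probably tell more than any
estimate; a rough sketch, again assuming the made-up "users"/"req" schema and
a made-up "user-<n>" key format:

import java.util.Arrays;
import java.util.Random;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class GetLatencyProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "users");       // placeholder table name
    int n = 10000;
    long[] micros = new long[n];
    Random rnd = new Random();

    for (int i = 0; i < n; i++) {
      // pick a random pre-loaded user id; the key scheme is invented here
      Get get = new Get(Bytes.toBytes("user-" + rnd.nextInt(1000000)));
      get.addFamily(Bytes.toBytes("req"));
      long start = System.nanoTime();
      Result r = table.get(get);
      micros[i] = (System.nanoTime() - start) / 1000;
    }

    Arrays.sort(micros);
    System.out.println("median: " + micros[n / 2] + " us, "
        + "95th: " + micros[(int) (n * 0.95)] + " us");
    table.close();
  }
}
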
thanks again and best regards
andre


> 
> St.Ack
> 
> On Wed, Oct 13, 2010 at 10:19 AM, Andre Reiter <[email protected]> wrote:
>> hi everybody
>> 
>> i'm evaluating hbase for a new platform.
>> the application is web based, and the challenge is to handle up to
>> 1,000,000 requests per second. this can actually be solved by
>> load balanced webservers.
>> The problem now is to persist user data in real time. Let's say every
>> request appends some data to the user record, e.g. 100 bytes.
>> The record would be keyed by an identifier stored in the cookie.
>> 
>> The question then is how to look up the user record for every request at runtime.
>> Is it possible to get the user record, with the information about all former
>> requests, in a short time, i.e. within 50 ms?
>> 
>> i have no experience with hadoop/hbase yet
>> 
>> any help would be much appreciated
>> 
>> thanks in advance
>> andre
>> 
>> 
>> 

