Thanks Charles, I realize HBase might be the wrong fit given our current
data size, but I'm still interested in the other benefits it provides.  And
I'm hoping that adopting HBase could be the paradigm shift that lets us keep
more data around, since we want all that 'record of activity' stuff I
mentioned, which we don't currently have.  I expect that will be a lot of
data, and I want to ensure we can continue to grow our user base without
scaling issues.
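As a side note, that 'record of activity' data maps naturally onto a
time-ordered row-key design.  Here's a minimal sketch of the idea (pure
Python, no HBase client involved; the key layout, delimiter, and names are
just my assumption, not a settled schema):

```python
import struct

MAX_TS = 2**63 - 1  # sentinel used to reverse millisecond timestamps


def activity_row_key(user_id: str, ts_millis: int) -> bytes:
    """Row key = user id + reversed timestamp.

    Reversing the timestamp means a prefix scan on the user id returns
    the newest activity records first (hypothetical layout).
    """
    reversed_ts = MAX_TS - ts_millis
    # big-endian signed 64-bit so byte order matches numeric order
    return user_id.encode("utf-8") + b"\x00" + struct.pack(">q", reversed_ts)


# a newer event sorts before an older one for the same user
k_new = activity_row_key("user42", 1_700_000_001_000)
k_old = activity_row_key("user42", 1_700_000_000_000)
assert k_new < k_old
```

The delimiter byte keeps one user's keys from running into another's when
user ids are variable length; a real schema would also need to think about
key salting to avoid region hot-spotting.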

Thanks for your input on the latency; it seems like that will need to be the
focus of my own research.
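When I get to benchmarking, I'll probably start with a simple percentile
harness along these lines (a sketch; the latency distribution here is a
placeholder, not real measurements):

```python
import math
import random


def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ranked = sorted(samples)
    # nearest-rank method: ceil(p/100 * n), 1-indexed
    k = max(1, math.ceil(p / 100 * len(ranked)))
    return ranked[k - 1]


# simulate 10k request latencies around the 1-2 ms target (placeholder)
random.seed(42)
samples = [abs(random.gauss(1.5, 0.4)) for _ in range(10_000)]
print(f"p50={percentile(samples, 50):.2f} ms  p99={percentile(samples, 99):.2f} ms")
```

For a hard "more isn't acceptable" requirement, the tail percentiles (p99,
p99.9) matter far more than the mean, so that's what I'd gate on.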

Charles Woerner wrote:
> 
> Slightly off topic, but we have similar requirements as you and NDBD is
> working great.  As far as latency goes you can definitely see millisecond
> or less response times using the NDB api.  Your throughput requirements
> should be a piece of cake as well.  10GB is definitely not "big data" and
> 1-2 ms is pretty low latency like others have mentioned, so your use case
> isn't really in the HBase "sweet spot".  Not to say that it wouldn't work.
> 
> On Tue, Mar 9, 2010 at 7:45 AM, jaxzin <brian.r.jack...@espn3.com> wrote:
> 
>>
>> Hi all, I've got a question about how everyone is using HBase.  Is anyone
>> using it as an online data store to directly back a web service?
>>
>> The text-book example of a weblink HBase table suggests there would be an
>> associated web front-end to display the information in that HBase table
>> (ex. a search results page), but I'm having trouble finding evidence that
>> anyone is servicing web traffic backed directly by an HBase instance in
>> practice.
>>
>> I'm evaluating if HBase would be the right tool to provide a few things
>> for a large-scale web service we want to develop at ESPN, and I'd really
>> like to get opinions and experience from people who have already been
>> down this path.  No need to reinvent the wheel, right?
>>
>> I can tell you a little about the project goals if it helps give you an
>> idea of what I'm trying to design for:
>>
>> 1) Highly available (it would be a central service, and an outage would
>> take down everything)
>> 2) Low latency (1-2 ms, less is better, more isn't acceptable)
>> 3) High throughput (5-10k req/sec at worst-case peak)
>> 4) Unstable traffic (ex. Sunday afternoons during football season)
>> 5) Small data...for now (< 10 GB of total data currently, but HBase could
>> allow us to design differently and store more online)
>>
>> The reason I'm looking at HBase is that we've solved many of our scaling
>> issues with the same basic concepts as HBase (sharding, flattening data
>> to fit in one row, throwing away ACID, etc.) but with home-grown
>> software.  I'd like to adopt an active open-source project if it makes
>> sense.
>>
>> Alternatives I'm also looking at: RDBMS fronted with WebSphere eXtreme
>> Scale, RDBMS fronted with Hibernate/ehcache, or (the option I understand
>> the least right now) memcached.
>>
>> Thanks,
>> Brian
>> --
>> View this message in context:
>> http://old.nabble.com/Use-cases-of-HBase-tp27837470p27837470.html
>> Sent from the HBase User mailing list archive at Nabble.com.
>>
>>
> 
> 
> -- 
> ---
> Thanks,
> 
> Charles Woerner
> 
> 

-- 
View this message in context: 
http://old.nabble.com/Use-cases-of-HBase-tp27837470p27840557.html
Sent from the HBase User mailing list archive at Nabble.com.