We need some records from the database every time our application gets a request. The data does not change frequently, so we read it from HBase at application startup and cache it in memory; that way we don't have to go to HBase on every request. These are relatively small tables and can be held in memory very easily.
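To make that concrete, here is a rough sketch of the kind of startup cache I mean. The class, table, and column names are made up for illustration, and the exact 0.20 client calls may differ slightly from what we actually run:

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

// Scans a small, rarely-changing table once at startup and keeps the rows
// in a HashMap so request handling never has to touch HBase for them.
public class ReferenceDataCache {

    // row key -> cell value; small reference tables fit in memory easily
    private final Map<String, String> rows = new HashMap<String, String>();

    public void load(String tableName, String family, String qualifier) throws IOException {
        HTable table = new HTable(new HBaseConfiguration(), tableName);
        ResultScanner scanner = table.getScanner(new Scan());
        try {
            for (Result result : scanner) {
                byte[] value = result.getValue(Bytes.toBytes(family), Bytes.toBytes(qualifier));
                if (value != null) {
                    rows.put(Bytes.toString(result.getRow()), Bytes.toString(value));
                }
            }
        } finally {
            scanner.close();
        }
    }

    // Called on every request instead of going back to HBase.
    public String get(String rowKey) {
        return rows.get(rowKey);
    }
}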
An important point to note here is that we would have done the same thing even if we were using an RDBMS. There is no point in going to the DB/HBase on every single request for relatively stable data in small tables.

We are on 0.20, but not the release candidate version; we built the trunk ourselves on the 23rd of July.

If you have relatively larger data that needs to be read frequently, I would recommend using memcached.

Regards,
Vaibhav

On Wed, Aug 12, 2009 at 7:01 AM, bharath vissapragada <[email protected]> wrote:

> Vaibhav,
>
> By caching, do you mean storing all the rows in a HashMap in memory, so
> that you can access that map repeatedly instead of doing disk IOs?
>
> Thanks
>
> On Wed, Aug 12, 2009 at 4:53 AM, Vaibhav Puranik <[email protected]>
> wrote:
>
> > Amandeep,
> >
> > We are caching HBase results in memory (in a HashMap).
> >
> > Regards,
> > Vaibhav
> >
> > On Tue, Aug 11, 2009 at 12:56 PM, Amandeep Khurana <[email protected]>
> > wrote:
> >
> > > Vaibhav,
> > >
> > > What kind of caching are you doing over HBase, and how?
> > >
> > > -Amandeep
> > >
> > > Amandeep Khurana
> > > Computer Science Graduate Student
> > > University of California, Santa Cruz
> > >
> > > On Tue, Aug 11, 2009 at 10:48 AM, Vaibhav Puranik <[email protected]>
> > > wrote:
> > >
> > > > We are using HBase 0.20 (trunk version as of the evening of July 23rd)
> > > > in a production environment at GumGum.
> > > >
> > > > Our experience has been very good. Initially I mistakenly forgot to
> > > > add caching (even though we had planned for it), so every request was
> > > > fetching two rows from HBase and inserting one row into HBase.
> > > > In spite of that, our request processing time was less than 300 ms.
> > > >
> > > > We are not getting huge amounts of traffic - we get approximately
> > > > 25,000 to 30,000 requests to our HBase-backed web app every day.
> > > >
> > > > We have a 4-node cluster running on EC2 (Large instances), and so far
> > > > we haven't faced any production problems.
> > > > (Hope it works out that way all the time!)
> > > >
> > > > Regards,
> > > > Vaibhav Puranik,
> > > > GumGum
> > > >
> > > > On Tue, Aug 11, 2009 at 10:29 AM, Fabio Kaminski <[email protected]>
> > > > wrote:
> > > >
> > > > > Is there anyone with experience running a real-time application on
> > > > > HBase 0.20, preferably in a production environment?
> > > > >
> > > > > I'm thinking of throwing away all my legacy assumptions about how
> > > > > systems should be built, because I think Hadoop (and HBase) are the
> > > > > next big thing in technology. I really buy into this concept, and
> > > > > I'm glad I found it right at its inception.
> > > > >
> > > > > I'm preparing to work with Hadoop and HBase in a real-time
> > > > > environment, and I can see that the HBase engineers are preparing
> > > > > HBase for real-time applications, as RDBMS standards do, but in a
> > > > > new and promising environment. This is undoubtedly a paradigm shift!
> > > > >
> > > > > Does anyone have a real-time application running in such an
> > > > > environment? Could you share some of your experience with it?
> > > > >
> > > > > Thanks!
> > > > >
> > > > > Fabio Kaminski
