Re: Strange HBase failure

2015-01-12 Thread Esteban Gutierrez
Hi Serega, Do you have enough resources allocated for each VM? Just some swapping on the VMs or the host can make things unstable. Also, from the number of services on each VM it sounds like your host should have at least 12GB of free RAM just to run things smoothly; otherwise you might want to

Re: Strange HBase failure

2015-01-12 Thread Serega Sheypak
Ok, thanks, we'll check it. 2015-01-12 11:28 GMT+03:00 Esteban Gutierrez este...@cloudera.com:

Re: Question on upper bound of column qualifiers for a Row HBASE-0.98

2015-01-12 Thread Jean-Marc Spaggiari
Moving to the user mailing list, dev in BCC. Hi Gaby, There is no fixed limit on the maximum number of column qualifiers (CQs). But you have to keep in mind that HBase will not split within a row: an entire row will always stay in a single region. Therefore, if your row can become very big
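The point above is why very wide rows are often remodeled as "tall" tables. Here is a minimal sketch (not HBase API, and the `#` separator and names are my own illustrative choices) of folding each column qualifier into a composite row key, so the data can split across region boundaries:

```python
# Sketch: remodel a "wide" row (one row key, many column qualifiers) into a
# "tall" layout (one row per former qualifier). Because each entry gets its
# own row key, HBase is free to place different entries in different regions.

def wide_to_tall(row_key, qualifiers):
    """Return one composite row key per former column qualifier."""
    return [f"{row_key}#{q}" for q in sorted(qualifiers)]

print(wide_to_tall("user42", {"email_open", "purchase"}))
```

A scan with the prefix `user42#` then recovers everything the single wide row used to hold.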

Re: Design a datastore maintaining historical view of users.

2015-01-12 Thread Wilm Schumacher
Hi, I'm doing something comparable right now, but not with such a HUGE database O_o. 10 million results for such a query? That would mean you have 100 million to 1 billion customers?!?! However: in my opinion, HBase is a good fit for such a huge database. However, your data model should be changed

Design a datastore maintaining historical view of users.

2015-01-12 Thread Chen Wang
Hey Guys, I am seeking advice on designing a system that maintains a historical view of a user's activities over the past year. Each user can have different activities: email_open, email_click, item_view, add_to_cart, purchase etc. The query I would like to do is, for example: find all customers who
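One common row-key pattern for this kind of time-range activity query is to prefix by user and activity, then append a zero-padded reversed timestamp so the newest events sort first under HBase's lexicographic row ordering. This is a hedged sketch; the `|` separator, field order, and function name are assumptions for illustration, not from the thread:

```python
# Sketch of a row-key scheme for per-user activity history. Reversing the
# millisecond timestamp against a fixed sentinel makes newer events sort
# earlier, so a prefix scan returns the most recent activity first.

MAX_TS = 2**63 - 1  # sentinel used to reverse millisecond timestamps

def activity_row_key(user_id, activity, ts_millis):
    """Compose a lexicographically sortable row key for one activity event."""
    reversed_ts = MAX_TS - ts_millis  # newer event -> smaller number
    return f"{user_id}|{activity}|{reversed_ts:020d}"

# A scan over the prefix "u1|purchase|" would yield u1's purchases,
# newest first, without touching other users' rows.
k_newer = activity_row_key("u1", "purchase", 2_000)
k_older = activity_row_key("u1", "purchase", 1_000)
```

Whether to put the activity type in the key or in a column qualifier depends on which queries dominate; keying by it keeps each scan narrow.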

HBase with opentsdb creates huge .tmp file runs out of hdfs space

2015-01-12 Thread sathyafmt
CDH 5.1.2 (HBase 0.98.1), running on a VM (VMware ESX). We use OpenTSDB (2.1.0RC) with HBase. After ingesting 200-300MB of data, HBase tries to compact the table and ends up creating a .tmp file which grows to fill up the entire HDFS space.. and dies eventually. I tried to remove this .tmp file

Re: HBase with opentsdb creates huge .tmp file runs out of hdfs space

2015-01-12 Thread Esteban Gutierrez
Hello sathya, Those files under .tmp are created as part of the normal operations of HBase, since it needs to compact the existing store files into a new larger file. From your description it seems that your VM doesn't have enough space for HDFS. Have you tried to increase the space allocated
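The transient space cost of a compaction can be estimated up front. A back-of-envelope sketch (my own arithmetic, not an HBase utility): a major compaction rewrites all of a region's store files into one new file under .tmp, so while it runs the region's data exists roughly twice, multiplied by the HDFS replication factor:

```python
# Rough headroom estimate for one region's major compaction: the new file
# under .tmp can approach the combined size of its input store files, and
# HDFS stores every block `replication` times.

def compaction_headroom_bytes(store_file_bytes, replication=3):
    """Approximate extra HDFS bytes a major compaction may need."""
    return sum(store_file_bytes) * replication

# e.g. three 100MB store files with 3x replication need on the order of
# 900MB of free HDFS space while the compaction runs.
needed = compaction_headroom_bytes([100_000_000] * 3)
```

This is a simplification (compression and expired cells reduce the output size), but it shows why a small VM disk fills up quickly during compaction.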