Re: Limit number of columns in column family

2013-09-19 Thread M. BagherEsmaeily
any cell in the same row. Sorry for my poor language! On Thu, Sep 19, 2013 at 9:28 AM, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote: Hi MBE, When you say cells with the least timestamp being removed, do you mean versions of the same cell? Or any cell in the same row/cf? JM

Re: Limit number of columns in column family

2013-09-19 Thread Jean-Marc Spaggiari
Don't worry about the language ;) I don't think there is any mechanism today to limit the number of columns in a column family. There might be multiple options but they will all have some drawbacks. One option is to have a daily mapreduce job looking at each row and doing the cleanup. This can
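The cleanup job JM describes boils down to: for each row, keep only the N newest columns and delete the rest. A minimal, HBase-free sketch of just that selection step (the map stands in for one row's qualifier-to-timestamp cells; in a real MapReduce job these would come from a Scan Result and the returned qualifiers would be issued as Deletes — the names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ColumnLimiter {
    // Given qualifier -> timestamp for one row/family, return the
    // qualifiers to delete so that only the 'limit' newest remain.
    static List<String> columnsToDelete(Map<String, Long> cells, int limit) {
        List<Map.Entry<String, Long>> sorted = new ArrayList<>(cells.entrySet());
        // Newest first.
        sorted.sort((a, b) -> Long.compare(b.getValue(), a.getValue()));
        List<String> doomed = new ArrayList<>();
        for (int i = limit; i < sorted.size(); i++) {
            doomed.add(sorted.get(i).getKey());
        }
        return doomed;
    }

    public static void main(String[] args) {
        Map<String, Long> row = new HashMap<>();
        row.put("c1", 100L);
        row.put("c2", 300L);
        row.put("c3", 200L);
        row.put("c4", 400L);
        // Keep the 2 newest (c4, c2); c3 and c1 are selected for deletion.
        System.out.println(columnsToDelete(row, 2));
    }
}
```

The drawback JM hints at applies: between job runs a row can exceed the limit, so this only enforces the bound eventually, not at write time.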

Re: Limit number of columns in column family

2013-09-19 Thread M. BagherEsmaeily
Thanks. I think this kind of limiting will not be efficient for my millions of records, and it is better to change my design.

Re: Running HBase on Yarn … HoYa ?

2013-09-19 Thread Steve Loughran
On 18 September 2013 21:43, Jay Vyas jayunit...@gmail.com wrote: How are vendor-specific versions of HBase running on YARN? Are they using Hoya? I don't know who else is playing with it right now, but all it takes is a .tar or .gz file, or a path to HBASE_HOME, and it execs hbase.sh after some (minor)

Re: Namenode log - /hbase/.archive/table_name is non empty

2013-09-19 Thread Jason Huang
Thanks Ted and JM. Jason On Wed, Sep 18, 2013 at 6:46 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote: But... if you can't update, then you will have to checkout the 0.94.3 version from SVN, apply the patch manually, build and re-deploy. Patch might be pretty easy to apply. JM

Hbase in embedded mode

2013-09-19 Thread samar.opensource
Hi Guys, Can we use HBase in an embedded mode? The whole of HBase would start in the same JVM and there would be no RPC calls, something like the embedded Java DBs. Do we have something like this, or something close to it? Regards, Samar

Re: Hbase in embedded mode

2013-09-19 Thread Ted Yu
See 2.2.1 in http://hbase.apache.org/book.html#standalone_dist On Sep 19, 2013, at 6:49 AM, samar.opensource samar.opensou...@gmail.com wrote: Hi Guys, Can we use HBase in an embedded mode? The whole of HBase would start in the same JVM and there would be no RPC calls, something like our

Re: openTSDB lose large amount of data when the client are writing

2013-09-19 Thread Jean-Daniel Cryans
Could happen if a region moves since locks aren't persisted, but if I were you I'd ask on the opentsdb mailing list first. J-D On Thu, Sep 19, 2013 at 10:09 AM, Tianying Chang tich...@ebaysf.com wrote: Hi, I have a customer who uses openTSDB. Recently we found that less than 10% of the data

Re: Hbase in embedded mode

2013-09-19 Thread samar kumar
Hi Ted, I am aware of the standalone mode, but I was looking for something which will not have any IPC calls. Everything should be a local API call, so no listening on ports, e.g. embedded DBs like Derby. Regards Samar On 19 Sep 2013 19:20, Ted Yu yuzhih...@gmail.com wrote: See 2.2.1 in

Fwd: Stable version of Hadoop with Hbase

2013-09-19 Thread hadoop hive
-- Forwarded message -- From: hadoop hive hadooph...@gmail.com Date: Thu, Sep 19, 2013 at 1:02 AM Subject: Stable version of Hadoop To: u...@hadoop.apache.org Hi Folks, I want to use HBase for my data storage on top of HDFS. Please help me find out the best version which

Bulkload into empty table with configureIncrementalLoad()

2013-09-19 Thread Dolan Antenucci
I have about 1 billion values I am trying to load into a new HBase table (with just one column and column family), but am running into some issues. Currently I am trying to use MapReduce to import these by first converting them to HFiles and then using LoadIncrementalHFiles.doBulkLoad(). I also

Re: Bulkload into empty table with configureIncrementalLoad()

2013-09-19 Thread Jean-Daniel Cryans
You need to create the table with pre-splits; see http://hbase.apache.org/book.html#perf.writing J-D On Thu, Sep 19, 2013 at 9:52 AM, Dolan Antenucci antenucc...@gmail.com wrote: I have about 1 billion values I am trying to load into a new HBase table (with just one column and column family),
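Pre-splitting means handing `createTable` a set of split keys (HBase's `HBaseAdmin.createTable` has both a `byte[][] splitKeys` overload and a `startKey`/`endKey`/`numRegions` overload); the part that depends on your data is computing split points that match the key distribution. A hypothetical, stdlib-only sketch for one common case — row keys that start with a lowercase hex character, e.g. hashed keys (the method name and the hex-prefix assumption are mine, not from the thread):

```java
import java.util.Arrays;

public class SplitKeys {
    // Evenly spaced split points over a single-character hex prefix
    // ('0'..'f'); 'regions' regions need (regions - 1) split keys.
    static String[] hexPrefixSplits(int regions) {
        String hex = "0123456789abcdef";
        String[] splits = new String[regions - 1];
        for (int i = 1; i < regions; i++) {
            splits[i - 1] = String.valueOf(hex.charAt(i * 16 / regions));
        }
        return splits;
    }

    public static void main(String[] args) {
        // 4 regions over hex-prefixed keys -> split points at 4, 8, c.
        System.out.println(Arrays.toString(hexPrefixSplits(4)));
        // In HBase each string would become a byte[] and the array would
        // be passed as splitKeys to HBaseAdmin.createTable(desc, splitKeys).
    }
}
```

For non-uniform keys, sampling the existing data for quantiles is a safer way to pick split points than any fixed formula.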

Re: openTSDB lose large amount of data when the client are writing

2013-09-19 Thread Stack
On Thu, Sep 19, 2013 at 10:09 AM, Tianying Chang tich...@ebaysf.com wrote: Hi, I have a customer who uses openTSDB. Recently we found that less than 10% of the data are written; the rest are lost. By checking the RS log, there are many row-lock-related issues, like below. It seems a large amount

Re: Hbase in embedded mode

2013-09-19 Thread Enis Söztutar
Right now we do not have what you suggest. Eric has created an issue for this: https://issues.apache.org/jira/browse/HBASE-8016 I think it makes a lot of sense, especially enabling HRegion as a library to work on top of shared hdfs and building a simple layer to embed the client side, etc. The

openTSDB lose large amount of data when the client are writing

2013-09-19 Thread Tianying Chang
Hi, I have a customer who uses openTSDB. Recently we found that less than 10% of the data are written; the rest are lost. By checking the RS log, there are many row-lock-related issues, like below. It seems the large amount of writes to tsdb that need row locks caused the problem. Anyone else see

Re: Bulkload into empty table with configureIncrementalLoad()

2013-09-19 Thread Dolan Antenucci
Thanks J-D. Any recommendations on how to determine what splits to use? For the keys I'm using strings, so I wasn't sure what to put for my startKey and endKey. For the number of regions, I have a table pre-populated with the same data (not using bulk load), so I can see that it has 68 regions. On

storing custom bloomfilter/BitSet

2013-09-19 Thread John
Hi, Is there a way to store a custom BitSet for every row and add new bits while importing? I can't use the bloomfilter that is already there because every column name contains 2 elements. Here is my scenario: My table looks like this: rowKey1 - cf:data1,data2, cf:data3,data4, ... rowKey2 -

Stopping hbase results in core dump.

2013-09-19 Thread Kim Chew
Hello there, I use stop-hbase.sh to shut down HBase but I always get a core dump: stopping hbase./home/kchew/hbase-0.94.8/bin/stop-hbase.sh: line 58: 55477 Aborted (core dumped) nohup nice -n ${HBASE_NICENESS:-0} $HBASE_HOME/bin/hbase --config ${HBASE_CONF_DIR} master stop $@

Re: Stopping hbase results in core dump.

2013-09-19 Thread Jean-Marc Spaggiari
Hi Kim, Which java version are you using and which HBase version? JM 2013/9/19 Kim Chew kchew...@gmail.com Hello there, I use stop-hbase.sh to shut down HBase but I always got a core dump, stopping hbase./home/kchew/hbase-0.94.8/bin/stop-hbase.sh: line 58: 55477 Aborted

Re: HFile2 issue

2013-09-19 Thread Jean-Marc Spaggiari
So you should be on V2 already all over the place. No need to set it up. 2013/9/17 kun yan yankunhad...@gmail.com Thanks Jean-Marc. Now I use HBase 0.94 version 2013/9/18 Jean-Marc Spaggiari jean-m...@spaggiari.org Hi Kun, Are you migrating from a previous HBase version to 0.94? If
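For reference, the on-disk format can also be pinned explicitly in hbase-site.xml via the `hfile.format.version` property; since v2 is already the default in 0.94, as JM says, this is normally unnecessary (shown only as a sketch of where the setting would live):

```xml
<property>
  <name>hfile.format.version</name>
  <value>2</value>
</property>
```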

Re: Stopping hbase results in core dump.

2013-09-19 Thread Kim Chew
Hi Jean-Marc, JDK 1.7 and hbase-0.94.8 Kim On Thu, Sep 19, 2013 at 5:18 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote: Hi Kim, Which java version are you using and which HBase version? JM 2013/9/19 Kim Chew kchew...@gmail.com Hello there, I use stop-hbase.sh to shut

Re: Bulkload into empty table with configureIncrementalLoad()

2013-09-19 Thread Dolan Antenucci
To follow up on my previous question about how best to do the pre-splits, I ended up using the following when creating my table: admin.createTable(desc, Bytes.toBytes(0), Bytes.toBytes(2147483647), 100); This was somewhat of a stab in the dark, but I based it on
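One caution about those boundaries for string row keys: `Bytes.toBytes(int)` produces a 4-byte big-endian integer, and region assignment compares raw bytes lexicographically, so ASCII string keys (first byte roughly 0x30 to 0x7a) all land in the narrow band of regions whose boundaries straddle that first byte, not evenly across all 100. A stdlib-only illustration of the byte values being compared (`ByteBuffer.putInt` used here as a stand-in for HBase's big-endian `Bytes.toBytes(int)`):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class SplitMismatch {
    static int firstUnsigned(byte[] b) {
        return b[0] & 0xFF;
    }

    public static void main(String[] args) {
        // Integer.MAX_VALUE as big-endian bytes: 7f ff ff ff.
        byte[] end = ByteBuffer.allocate(4).putInt(2147483647).array();
        // An ASCII string key starts at byte 'a' = 0x61.
        byte[] strKey = "apple".getBytes(StandardCharsets.US_ASCII);

        // Every lowercase-ASCII key shares first byte 0x61..0x7a, so with
        // 100 regions split evenly over 0x00000000..0x7fffffff the string
        // keys pile into only a handful of those regions.
        System.out.println(firstUnsigned(end));    // 127
        System.out.println(firstUnsigned(strKey)); // 97
    }
}
```

Split keys derived from the string key space itself (e.g. sampled from the existing 68-region table) would spread the load far better than integer endpoints.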

Re: Stopping hbase results in core dump.

2013-09-19 Thread Jean-Marc Spaggiari
Hi Kim, Oracle JDK? Or OpenJDK? Anything on the hbase .out file? JM 2013/9/19 Kim Chew kchew...@gmail.com Hi Jean-Marc, JDK 1.7 and hbase-0.94.8 Kim On Thu, Sep 19, 2013 at 5:18 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote: Hi Kim, Which java version are you using