Re: Problem to insert the row that I deleted

2012-04-25 Thread lars hofhansl
Your only chance is to run a major compaction on your table - that will get rid of the delete marker. Then you can re-add the Put with the same TS. -- Lars ps. Rereading my email below... At some point I will learn to proof-read my emails before I send them full of grammatical errors. -
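
A minimal sketch of the workaround Lars describes, using the client API of that era; the table, column, and timestamp values are hypothetical, and HBaseAdmin.majorCompact only requests the compaction, which completes asynchronously, so the re-inserted cell becomes visible only after it finishes:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ReinsertAfterMajorCompaction {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // 1. Major-compact the table so the delete marker (tombstone) is dropped.
        //    This only schedules the compaction; wait for it to finish before step 2.
        HBaseAdmin admin = new HBaseAdmin(conf);
        admin.majorCompact("mytable");

        // 2. Re-add the Put with the original, explicit timestamp.
        HTable table = new HTable(conf, "mytable");
        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("qual"),
                1335300000000L /* original TS */, Bytes.toBytes("value"));
        table.put(put);
        table.close();
      }
    }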

Re: Problem to insert the row that I deleted

2012-04-25 Thread yonghu
As Lars mentioned, the row is not physically deleted. What HBase does is insert a cell called a tombstone, which masks the deleted value, but the value is still there (if the deleted value is in the same memstore as the tombstone, it will be deleted in the memstore, so you will not
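
To make the masking concrete, here is a small self-contained sketch (table and values are hypothetical): a cell written at an explicit timestamp, deleted, and then re-written at the same timestamp stays hidden until a major compaction removes the tombstone:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Delete;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class TombstoneMaskingDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "demo");             // hypothetical table
        byte[] row = Bytes.toBytes("r1");
        byte[] cf  = Bytes.toBytes("cf");
        byte[] q   = Bytes.toBytes("q");
        long ts = 1000L;                                     // explicit timestamp

        Put put = new Put(row);
        put.add(cf, q, ts, Bytes.toBytes("v1"));
        table.put(put);

        table.delete(new Delete(row));                       // writes a tombstone stamped "now"

        Put again = new Put(row);                            // same row, same timestamp
        again.add(cf, q, ts, Bytes.toBytes("v1"));
        table.put(again);

        Result r = table.get(new Get(row));
        System.out.println("masked? " + r.isEmpty());        // true until a major compaction
        table.close();
      }
    }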

Re: HBase Quality of Service: large standard deviation in insert time while inserting the same type of rows in HBase

2012-04-25 Thread Michel Segel
I guess Sesame Street isn't global... ;-) Oh, and of course I f'd the joke by saying Grover and not Oscar, so it's my bad. :-( [Google "Oscar the Grouch" and you'll understand the joke that I botched.] It's most likely GC and a mis-tuned cluster. The OP doesn't really get into detail, except to

Re: Problem to insert the row that I deleted

2012-04-25 Thread Michel Segel
Uhm... not exactly, Lars... just my $0.02... While I don't disagree with Lars, I think the question you have to ask is why the timestamp is important. Is it an element of the data, or is it an artifact? This gets into your schema design and taking shortcuts. You may want to instead
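
Michel's message is truncated here, but one common direction for this kind of redesign (purely illustrative, not necessarily what he goes on to suggest) is to make the business timestamp an explicit part of the row key and/or a normal column instead of relying on the internal cell timestamp; the family, qualifiers, and reverse-timestamp trick below are assumptions:

    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    // Illustrative only: encode the event time into the row key (reverse
    // timestamp keeps newest-first ordering) and store it again as a plain
    // column, so deletes and re-inserts never depend on the cell version TS.
    public class EventRowKey {
      public static Put buildPut(String entityId, long eventTimeMillis, byte[] payload) {
        byte[] rowKey = Bytes.add(
            Bytes.toBytes(entityId),
            Bytes.toBytes(Long.MAX_VALUE - eventTimeMillis));  // reverse timestamp
        Put put = new Put(rowKey);
        put.add(Bytes.toBytes("d"), Bytes.toBytes("event_time"), Bytes.toBytes(eventTimeMillis));
        put.add(Bytes.toBytes("d"), Bytes.toBytes("payload"), payload);
        return put;
      }
    }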

Regions not cleared

2012-04-25 Thread ajay.bhosle
Hi, I have set a TTL on an HBase table, so the data is cleared after the specified time, but the regions are not removed even though the data inside them is cleared. Can someone please let me know if I am missing anything? Thanks, Ajay

hbase installation

2012-04-25 Thread shehreen
Hi, I am new to HBase and Hadoop. I want to install HBase and write MapReduce jobs against data in HBase. I installed HBase; it works well in standalone mode, but the master and ZooKeeper don't start properly in pseudo-distributed mode. Kindly help me resolve this problem. Thanks --

Re: hbase installation

2012-04-25 Thread Nitin Pawar
Any error msg? On Wed, Apr 25, 2012 at 7:02 PM, shehreen shehreen_cute...@hotmail.com wrote: Hi, I am new to HBase and Hadoop. I want to install HBase and write MapReduce jobs against data in HBase. I installed HBase. It works well in standalone mode, but the master and

Re: HBase Quality of Service: large standard deviation in insert time while inserting the same type of rows in HBase

2012-04-25 Thread Doug Meil
Hi there- In addition to what was said about GC, you might want to double-check this... http://hbase.apache.org/book.html#performance ... as well as this case-study for performance troubleshooting http://hbase.apache.org/book.html#casestudies.perftroub On 4/24/12 9:58 PM, Michael Segel

Re: Integrity constraints

2012-04-25 Thread Vamshi Krishna
Thank you, Gary! Now I understand the actual method. On Wed, Apr 25, 2012 at 11:36 AM, Gary Helmling ghelml...@gmail.com wrote: Hi Vamshi, see the ConstraintProcessor coprocessor that was added for just this kind of case:
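
For readers landing here, a minimal sketch of what a constraint checked by the ConstraintProcessor coprocessor can look like; the class, table, and column names are hypothetical, and the exact helper signatures may vary slightly between HBase versions:

    import java.util.List;

    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.constraint.BaseConstraint;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.util.Bytes;

    // Hypothetical constraint: reject any Put that writes an empty info:email value.
    public class NonEmptyEmailConstraint extends BaseConstraint {
      private static final byte[] CF   = Bytes.toBytes("info");
      private static final byte[] QUAL = Bytes.toBytes("email");

      @Override
      public void check(Put put) throws ConstraintException {
        List<KeyValue> kvs = put.getFamilyMap().get(CF);
        if (kvs == null) {
          return;                                   // this Put does not touch info:*
        }
        for (KeyValue kv : kvs) {
          if (Bytes.equals(kv.getQualifier(), QUAL) && kv.getValueLength() == 0) {
            throw new ConstraintException("info:email must not be empty");
          }
        }
      }
    }

    // Registering it on a table descriptor (assumed usage of the Constraints helper,
    // which also enables the ConstraintProcessor coprocessor on the table):
    //   HTableDescriptor desc = new HTableDescriptor("users");
    //   Constraints.add(desc, NonEmptyEmailConstraint.class);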

Re: Problem to insert the row that I deleted

2012-04-25 Thread lars hofhansl
Thanks yonghu. That is HBASE-4241. One small point: The deleted rows are not deleted from the memstore, but rather not included when the memstore is flushed to disk. -- Lars - Original Message - From: yonghu yongyong...@gmail.com To: user@hbase.apache.org; lars hofhansl

0.20 to 0.90 upgrade

2012-04-25 Thread David Charle
As per the docs, it looks like it's painless to upgrade from 0.20.3 to 0.90 (you only need to run the upgrade script if upgrading to 0.92). http://hbase.apache.org/book/upgrading.html#upgrade0.90 Does anyone have experience upgrading from 0.20 to 0.90, or with a similar major upgrade? Do we need to

Re: hbase installation

2012-04-25 Thread shashwat shriparv
Just follow this: http://hbase.apache.org/book/standalone_dist.html On Wed, Apr 25, 2012 at 7:05 PM, Nitin Pawar nitinpawar...@gmail.com wrote: Any error msg? On Wed, Apr 25, 2012 at 7:02 PM, shehreen shehreen_cute...@hotmail.com wrote: Hi, I am new to HBase and Hadoop. I want to install

Re: hbase installation

2012-04-25 Thread Mohammad Tariq
Change 127.0.1.1 in your /etc/hosts file to 127.0.0.1... Also add hadoop-core.jar from the Hadoop folder and commons-configuration.jar from hadoop/lib to the hbase/lib folder. On Apr 25, 2012 11:59 PM, shashwat shriparv dwivedishash...@gmail.com wrote: Just follow this:
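
For reference, the /etc/hosts change typically looks like the snippet below on Debian/Ubuntu-style installs (the hostname "myhost" is a placeholder); the 127.0.1.1 alias is a well-known source of master/ZooKeeper startup problems in pseudo-distributed mode, since the daemons end up resolving the hostname to a different loopback address than the one they bind to. The jar copies simply keep the Hadoop client classes shipped with HBase in sync with the installed Hadoop version.

    # /etc/hosts before (common Debian/Ubuntu default)
    127.0.0.1   localhost
    127.0.1.1   myhost

    # /etc/hosts after: point the machine's hostname at the loopback address
    127.0.0.1   localhost myhost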

Re: hbase installation

2012-04-25 Thread shashwat shriparv
Check out this too; it seems to make it work. Also do what Tariq has suggested: http://ria101.wordpress.com/2010/01/28/setup-hbase-in-pseudo-distributed-mode-and-connect-java-client/ On Thu, Apr 26, 2012 at 1:05 AM, Mohammad Tariq donta...@gmail.com wrote: Change 127.0.1.1 in your /etc/hosts file

Re: 0.20 to 0.90 upgrade

2012-04-25 Thread Stack
On Wed, Apr 25, 2012 at 11:14 AM, David Charle dbchar2...@gmail.com wrote: As per the docs, it looks like it's painless to upgrade from 0.20.3 to 0.90 (you only need to run the upgrade script if upgrading to 0.92). http://hbase.apache.org/book/upgrading.html#upgrade0.90 Does anyone have experience upgrading

Re: TIMERANGE performance on uniformly distributed keyspace

2012-04-25 Thread Wouter Bolsterlee
Hi, on 2012-04-14 at 21:07, Rob Verkuylen wrote: As far as I understand, sequential keys with a timerange scan have the best read performance possible, because of the HFile metadata, just as N indicates. Maybe adding Bloom filters can further improve performance. As far as I understand it, Bloom
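
For context, a time-range scan in the Java client of that era looks roughly like this (table and family names are hypothetical); the per-HFile time-range metadata mentioned above lets HBase skip store files whose cells fall entirely outside the requested range:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class TimeRangeScanExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "events");           // hypothetical table

        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes("d"));
        scan.setTimeRange(1335225600000L, 1335312000000L);   // [min, max) in ms

        ResultScanner scanner = table.getScanner(scan);
        try {
          for (Result r : scanner) {
            System.out.println(Bytes.toStringBinary(r.getRow()));
          }
        } finally {
          scanner.close();
          table.close();
        }
      }
    }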

Re: Regions not cleared

2012-04-25 Thread Christian Schäfer
Hi, as far as I know, TTL as well as deletions only take effect on major compaction (see http://hbase.apache.org/book.html#regions.arch, section 8.7.5.5). Regards, Christian. From: ajay.bhosle ajay.bho...@zapak.co.in To: user@hbase.apache.org Sent: 14:33 Wednesday, 25 April 2012
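
A small sketch of the moving parts Christian mentions (table and family names are hypothetical): TTL is set per column family, expired cells are physically dropped only when store files are rewritten, for example by an explicitly requested major compaction, and the now-empty regions themselves are not removed automatically:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class TtlAndCompaction {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        // Column family with a one-day TTL (value is in seconds).
        HColumnDescriptor cf = new HColumnDescriptor("d");
        cf.setTimeToLive(24 * 60 * 60);

        HTableDescriptor desc = new HTableDescriptor("events");   // hypothetical table
        desc.addFamily(cf);
        admin.createTable(desc);

        // Expired cells are removed only when store files are rewritten, e.g. by
        // an explicit major compaction. Regions left empty by TTL expiry are NOT
        // deleted automatically; they remain until merged or the table is rebuilt.
        admin.majorCompact("events");
      }
    }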

Re: HBase, CDH3U2, EC2

2012-04-25 Thread Bryan Beaudreault
We use EC2 and CDH as well and have around 80 Hadoop/HBase nodes deployed across a few different clusters. We use a combination of Puppet for package management and Fabric scripts for pushing configs and managing services. Our base AMI is a pretty bare CentOS 6 install and Puppet handles most