Your only chance is to run a major compaction on your table - that will get rid
of the delete marker. Then you can re-add the Put with the same TS.
-- Lars
P.S. Rereading my email below... at some point I will learn to proofread my
emails before I send them full of grammatical errors.
-
As Lars mentioned, the row is not physically deleted. What HBase
does is insert a cell called a tombstone, which is used to
mask the deleted value, but the value is still there (if the deleted value
is in the same memstore as the tombstone, it will be deleted in the
memstore, so you will not
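For illustration, here is a toy model (plain Python, not the HBase API; `ToyStore` and its methods are made-up names) of the behavior described above: a delete marker masks any Put at the same or earlier timestamp, so a re-added Put with the same TS stays invisible until a major compaction physically drops the tombstone.

```python
# Toy model of HBase delete-marker (tombstone) semantics.
# Not the HBase API -- just an illustration of why a Put with the
# same timestamp as an earlier Delete stays hidden until a major
# compaction removes the tombstone.

class ToyStore:
    def __init__(self):
        self.puts = {}        # (row, col) -> {ts: value}
        self.tombstones = {}  # (row, col) -> highest masked ts

    def put(self, row, col, ts, value):
        self.puts.setdefault((row, col), {})[ts] = value

    def delete(self, row, col, ts):
        # A column delete marker masks all versions with ts <= marker ts.
        key = (row, col)
        self.tombstones[key] = max(ts, self.tombstones.get(key, -1))

    def get(self, row, col):
        # Return the newest visible value, honoring tombstones.
        key = (row, col)
        masked_below = self.tombstones.get(key, -1)
        visible = {t: v for t, v in self.puts.get(key, {}).items()
                   if t > masked_below}
        return visible[max(visible)] if visible else None

    def major_compact(self):
        # Physically drop masked cells AND the delete markers themselves.
        for key, masked_below in self.tombstones.items():
            versions = self.puts.get(key, {})
            for t in [t for t in versions if t <= masked_below]:
                del versions[t]
        self.tombstones.clear()

store = ToyStore()
store.put("r1", "f:q", 100, "v1")
store.delete("r1", "f:q", 100)
store.put("r1", "f:q", 100, "v2")      # same TS: still masked
assert store.get("r1", "f:q") is None
store.major_compact()                   # tombstone is gone now
store.put("r1", "f:q", 100, "v2")      # the re-added Put is visible
assert store.get("r1", "f:q") == "v2"
```

This is why the order matters: re-adding the Put before the major compaction leaves it masked, and the compaction would then drop it along with the tombstone.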
I guess Sesame Street isn't global... ;-) Oh, and of course I f'd the joke by
saying Grover and not Oscar, so it's my bad. :-( [Google "Oscar the Grouch" and
you'll understand the joke that I botched.]
It's most likely GC and a mistuned cluster.
The OP doesn't really go into detail, except to
Uhm... Not exactly Lars...
Just my $0.02 ...
While I don't disagree with Lars, I think the question you have to ask is: why is
the timestamp important?
Is it an element of the data or is it an artifact?
This kind of gets into your schema design and taking shortcuts. You may want
to instead
Hi,
I have set a TTL on an HBase table, due to which the data is cleared after the
specified time, but the regions are not removed even though the data inside the
regions is cleared. Can someone please let me know if I am missing
anything.
Thanks
Ajay
Hi
I am new to HBase and Hadoop. I want to install HBase and write MapReduce
jobs for data in HBase. I installed HBase. It works well
in standalone mode but the master and ZooKeeper don't start properly in
pseudo-distributed mode.
Kindly help me resolve this problem.
Thanks
--
any error msg?
On Wed, Apr 25, 2012 at 7:02 PM, shehreen shehreen_cute...@hotmail.com wrote:
Hi
I am new to HBase and Hadoop. I want to install HBase and write MapReduce
jobs for data in HBase. I installed HBase. It works well
in standalone mode but the master and
Hi there-
In addition to what was said about GC, you might want to double-check
this...
http://hbase.apache.org/book.html#performance
... as well as this case-study for performance troubleshooting
http://hbase.apache.org/book.html#casestudies.perftroub
On 4/24/12 9:58 PM, Michael Segel
Thank you, Gary! Now I understand the actual method.
On Wed, Apr 25, 2012 at 11:36 AM, Gary Helmling ghelml...@gmail.com wrote:
Hi Vamshi,
See the ConstraintProcessor coprocessor that was added for just this
kind of case:
Thanks yonghu.
That is HBASE-4241.
One small point: The deleted rows are not deleted from the memstore, but rather
not included when the memstore is flushed to disk.
-- Lars
- Original Message -
From: yonghu yongyong...@gmail.com
To: user@hbase.apache.org; lars hofhansl
As per the docs, it looks painless to upgrade from 0.20.3 to 0.90
(you only need to run the upgrade script if upgrading to 0.92).
http://hbase.apache.org/book/upgrading.html#upgrade0.90
Does anyone have experience upgrading from 0.20 to 0.90, or with a similar
major upgrade? Do we need to
Just follow this:
http://hbase.apache.org/book/standalone_dist.html
On Wed, Apr 25, 2012 at 7:05 PM, Nitin Pawar nitinpawar...@gmail.com wrote:
any error msg?
On Wed, Apr 25, 2012 at 7:02 PM, shehreen shehreen_cute...@hotmail.com
wrote:
Hi
I am new to HBase and Hadoop. I want to install
Change 127.0.1.1 in your /etc/hosts file to 127.0.0.1. Also add the
hadoop-core.jar from the Hadoop folder and commons-configuration.jar from
hadoop/lib to the hbase/lib folder.
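For reference, the /etc/hosts change looks like this (the hostname `myhost` is just a placeholder; keep your machine's own hostname):

```text
# Before (the Ubuntu/Debian default that confuses HBase):
#   127.0.1.1   myhost

# After:
127.0.0.1   localhost
127.0.0.1   myhost
```

The 127.0.1.1 entry makes the master and region server advertise an address that other processes can't connect to, which is a common cause of pseudo-distributed startup failures.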
On Apr 25, 2012 11:59 PM, shashwat shriparv dwivedishash...@gmail.com
wrote:
Just follow this:
Check out this too; it seems to make it work. Do what Tariq has suggested as well:
http://ria101.wordpress.com/2010/01/28/setup-hbase-in-pseudo-distributed-mode-and-connect-java-client/
On Thu, Apr 26, 2012 at 1:05 AM, Mohammad Tariq donta...@gmail.com wrote:
Change 127.0.1.1 in your /etc/hosts file
On Wed, Apr 25, 2012 at 11:14 AM, David Charle dbchar2...@gmail.com wrote:
As per the docs, it looks painless to upgrade from 0.20.3 to 0.90
(you only need to run the upgrade script if upgrading to 0.92).
http://hbase.apache.org/book/upgrading.html#upgrade0.90
Does anyone have experience upgrading
Hi,
On 2012-04-14 at 21:07, Rob Verkuylen wrote:
As far as I understand, sequential keys with a time-range scan have the best
read performance possible, because of the HFile metadata, just as N
indicates. Maybe adding Bloom filters can further improve performance.
As far as I understand it, Bloom
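The HFile-metadata point can be sketched with a toy model (plain Python; `ToyHFile` and `timerange_scan` are hypothetical names, not HBase internals): each store file records the min/max timestamp of its cells, so a time-range scan can rule out whole files from the metadata alone, without reading their contents.

```python
# Toy illustration of how per-file timestamp metadata lets a
# time-range scan skip whole store files. Hypothetical structures,
# not HBase's actual HFile format.

class ToyHFile:
    def __init__(self, cells):
        # cells: list of (key, ts, value)
        self.cells = cells
        self.min_ts = min(ts for _, ts, _ in cells)
        self.max_ts = max(ts for _, ts, _ in cells)

def timerange_scan(files, lo, hi):
    """Return cells with lo <= ts < hi, skipping files whose
    [min_ts, max_ts] range cannot overlap the query."""
    out, files_read = [], 0
    for f in files:
        if f.max_ts < lo or f.min_ts >= hi:
            continue              # skipped purely from metadata
        files_read += 1
        out.extend(c for c in f.cells if lo <= c[1] < hi)
    return out, files_read

files = [
    ToyHFile([("a", 10, "x"), ("b", 20, "y")]),
    ToyHFile([("c", 100, "z"), ("d", 110, "w")]),
    ToyHFile([("e", 200, "q")]),
]
cells, read = timerange_scan(files, 95, 150)
assert [c[0] for c in cells] == ["c", "d"]
assert read == 1   # only the middle file had to be opened
```

With sequentially written keys, each flush produces a file covering a narrow timestamp band, which is exactly the situation where this pruning pays off.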
Hi,
As far as I know, TTL as well as deletions only take effect on major
compaction. (See http://hbase.apache.org/book.html#regions.arch -
8.7.5.5.)
regards
Christian
From: ajay.bhosle ajay.bho...@zapak.co.in
To: user@hbase.apache.org
Sent: Wednesday, 25 April 2012, 14:33
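A toy sketch (plain Python, not the HBase API; function names are made up) of the distinction the question above runs into: expired cells stop being returned by reads, but the space is only reclaimed when a major compaction rewrites the store files. Regions themselves are not merged or removed automatically just because their data expired.

```python
# Toy sketch of TTL handling: reads filter out expired cells, but
# the bytes stay on disk until a major compaction rewrites the files.
# Illustrative only -- not the HBase API.

TTL = 60  # seconds, like a column family's TTL setting

def visible(cells, now, ttl=TTL):
    """Reads simply filter out expired cells."""
    return [(k, ts, v) for k, ts, v in cells if now - ts < ttl]

def major_compact(cells, now, ttl=TTL):
    """Compaction physically drops expired cells, reclaiming space."""
    return [(k, ts, v) for k, ts, v in cells if now - ts < ttl]

now = 1000
cells = [("r1", 900, "old"), ("r2", 990, "fresh")]
assert visible(cells, now) == [("r2", 990, "fresh")]  # r1 expired
assert len(cells) == 2                                 # but still on disk
cells = major_compact(cells, now)
assert cells == [("r2", 990, "fresh")]                 # space reclaimed
```

So after a TTL purge the regions remain, just mostly empty; shrinking the region count is a separate, manual operation.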
We use ec2 and cdh as well and have around 80 Hadoop/hbase nodes deployed
across a few different clusters. We use a combination of puppet for package
management and fabric scripts for pushing configs and managing services.
Our base AMI is a pretty bare centos6 install and puppet handles most