Hi Harsh,
I have a question about your description. The delete tombstone masks the
newly inserted value that carries an old timestamp, which is why the newly
inserted data can't be seen. But after a major compaction, this new value
will be seen again. So the question is how the deletion really executes. In
my
Cool. Now we have something on the records :-)
./Zahoor@iPad
On 15-Aug-2012, at 3:12 AM, Harsh J ha...@cloudera.com wrote:
Not wanting this thread, too, to end up as a mystery result on the
web, I did some tests. I loaded 10k rows (of 100 KB random chars each)
into test tables on 0.90 and
Yonghu,
You are correct about that. Until a major_compact finishes, inserts
with old timestamps will never show. Values inserted with old timestamps
after a delete but before a major compact will all go away.
That is why I had to put the data into the table _after_ the
major_compact ran, in that
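The semantics discussed above (a delete marker masking puts with old timestamps until a major compaction removes the marker) can be sketched with a toy model. This is an illustration of the behavior only, not real HBase code:

```python
class ToyRow:
    """Toy model of one row/column's delete-marker (tombstone) semantics."""

    def __init__(self):
        self.cells = {}        # timestamp -> value
        self.tombstone = None  # timestamp of the last "deleteall", if any

    def put(self, ts, value):
        self.cells[ts] = value

    def deleteall(self, ts):
        # Delete marker: masks every cell with timestamp <= ts.
        self.tombstone = ts

    def get(self):
        # Reads skip cells masked by the marker; newest visible cell wins.
        visible = {t: v for t, v in self.cells.items()
                   if self.tombstone is None or t > self.tombstone}
        return visible[max(visible)] if visible else None

    def major_compact(self):
        # Physically drop masked cells AND the marker itself.
        if self.tombstone is not None:
            self.cells = {t: v for t, v in self.cells.items()
                          if t > self.tombstone}
            self.tombstone = None
```

In the thread's terms: a `deleteall` at ts=200 masks a later `put` at ts=150, so `get()` returns nothing; after `major_compact()` the marker is gone and a fresh put with the same old timestamp is visible again.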
Indeed. I wrote a simple unit test [1] and it fails.
There's also a JIRA for it:
https://issues.apache.org/jira/browse/HBASE-4364. I attached a patch with
the simple failing unit test to it.
Alex Baranau
--
Sematext :: http://blog.sematext.com/ :: Hadoop - HBase - ElasticSearch -
Solr
[1]
What happens if you paste that into the hive shell?
On Tue, Aug 14, 2012 at 11:05 AM, Omer, Farah fo...@microstrategy.comwrote:
Hi all,
I was testing HBase integrated with Hive and ran into an issue. Would
anyone have an idea what it means?
hbase(main):001:0> CREATE TABLE
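The shell prompt above suggests the likely cause: Hive DDL pasted into the hbase shell, which cannot parse it. A Hive-over-HBase table is created from the hive shell via the HBase storage handler; a minimal sketch (table and column names here are made up for illustration):

```sql
-- Run in the hive shell, not the hbase shell.
-- Maps a Hive table onto an HBase table named "users".
CREATE EXTERNAL TABLE hbase_users(key string, name string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:name")
TBLPROPERTIES ("hbase.table.name" = "users");
```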
I also have a short blog post about this here:
http://hadoop-hbase.blogspot.com/2011/12/deletion-in-hbase.html
From: Harsh J ha...@cloudera.com
To: user@hbase.apache.org
Sent: Wednesday, August 15, 2012 5:50 AM
Subject: Re: Put w/ timestamp - Deleteall - Put
On Wed, Aug 15, 2012 at 9:13 AM, lars hofhansl lhofha...@yahoo.com wrote:
I also have a short blog post about this here:
http://hadoop-hbase.blogspot.com/2011/12/deletion-in-hbase.html
I added a link to this discussion to the Versioning section of our
reference guide (thanks, all above).
On Tue, Aug 14, 2012 at 7:10 AM, David Koch ogd...@googlemail.com wrote:
Hello,
I created an HBase table programmatically like so:
String tableName = "_myTable";
HBaseAdmin admin = new HBaseAdmin(some_configuration);
if (!admin.tableExists(tableName)) {
HTableDescriptor desc = new
On Tue, Aug 14, 2012 at 3:42 PM, David Koch ogd...@googlemail.com wrote:
Hello,
It's a fully-distributed environment (CDH3). HBase hbck sometimes reports
inconsistencies like:
ERROR: Region { meta =
_myTable,,1344936991240.979b3fe3ced9016372a82b7af5d33c27.,
hdfs = null, deployed =
On Mon, Aug 13, 2012 at 6:05 PM, anil gupta anilgupt...@gmail.com wrote:
It would be great if you could answer this simple question of mine: Is HBase
Bulk Loading fault tolerant to Region Server failures in a viable/decent
environment?
Bulk Loading is a MapReduce job. Bulk Loading is as
On Mon, Aug 13, 2012 at 6:10 PM, Gurjeet Singh gurj...@gmail.com wrote:
I am beginning to think that this is a configuration issue on my
cluster. Do the following configuration files seem sane ?
hbase-env.sh https://gist.github.com/3345338
Nothing wrong w/ this (Remove the -ea, you don't
Hi Stack,
Thanks for answering my question. I admit that I am unable to run
MR2 (YARN) jobs in an efficient way on my cluster due to a major bug in YARN
that is not letting me set the right configuration for MapReduce jobs.
The RS's are dying with LeaseExpiredExceptions or YouAreDeadException
On Mon, Aug 13, 2012 at 1:09 PM, Asaf Mesika asaf.mes...@gmail.com wrote:
I've decided to write an end-to-end Installation guide for HBase, which also
includes HDFS, user configuration and tons of other stuff no guide ever
mentions, in a blog post:
On Mon, Aug 13, 2012 at 11:57 AM, Julian Wissmann
julian.wissm...@sdace.de wrote:
In the regionserver log, I get the following output ten times:
2012-08-13 20:51:59,779 ERROR
org.apache.hadoop.hbase.regionserver.HRegionServer:
java.io.IOException: java.lang.NullPointerException
at
Maybe related to https://issues.apache.org/jira/browse/HBASE-5821 which
would be in 0.92.3
Cheers
On Wed, Aug 15, 2012 at 3:23 PM, Stack st...@duboce.net wrote:
On Mon, Aug 13, 2012 at 11:57 AM, Julian Wissmann
julian.wissm...@sdace.de wrote:
In the regionserver log, I get the following
On Mon, Aug 13, 2012 at 9:33 AM, Stack st...@duboce.net wrote:
On Mon, Aug 13, 2012 at 5:08 PM, Jacques whs...@gmail.com wrote:
I was thinking that an easier way might even be to just add the conversion
capability at the ruby shell level. Something like the following where you
can give a
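The conversion being discussed is just decoding HBase's fixed-width byte encodings back into typed values; for example, `Bytes.toLong` stores a long as 8 big-endian bytes. A sketch of that encode/decode (in Python for illustration; the shell itself is JRuby):

```python
import struct

def from_long(n):
    # Mirrors org.apache.hadoop.hbase.util.Bytes.toBytes(long):
    # 8 bytes, big-endian, signed.
    return struct.pack(">q", n)

def to_long(b):
    # Mirrors Bytes.toLong(byte[]): decode 8 big-endian bytes to a long.
    return struct.unpack(">q", b)[0]
```

A shell-level converter would apply such a decode to the raw cell bytes before printing them.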
Yeah... It looks OK.
Maybe 2G of heap is a bit low when dealing with rows of 200,000 columns.
If you can, I'd like to know how busy your regionservers are during these
operations. That would be an indication of whether the parallelization is good
or not.
-- Lars
- Original Message -
Hi
Just to add on: the HLog is just an edit log. Any transaction updates
(Puts/Deletes) are simply appended to the HLog. It is the Scanner that takes
care of the TTL part, which is calculated from the TTL configured at the
column family (Store) level.
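Ram's point — that TTL is enforced at read time by the scanner using the column family's TTL, not on the write path — can be sketched as follows (an illustrative model, not HBase code; cell timestamps in milliseconds and TTL in seconds, as in HBase):

```python
def is_expired(cell_ts_ms, family_ttl_seconds, now_ms):
    # A cell is expired once its age exceeds the column family's TTL.
    # The HLog/write path never checks this; scanners skip expired cells
    # at read time, and compactions later drop them physically.
    return now_ms - cell_ts_ms > family_ttl_seconds * 1000

def visible_cells(cell_timestamps_ms, family_ttl_seconds, now_ms):
    # Scanner-side filtering: keep only unexpired cells.
    return [ts for ts in cell_timestamps_ms
            if not is_expired(ts, family_ttl_seconds, now_ms)]
```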
Regards
Ram
-Original Message-
From: