I ran into the same issue. I tried check_meta.rb --fix and add_table.rb, and
I still get the same hbck inconsistency for the table;
however, I am able to do a rowcount on the table and there is no problem.
Jimmy
--
From: Geoff Hendrey ghend...@decarta.com
Sent: Thursday, August 11, 2011 2:21 PM
To: Jinsong Hu jinsong...@hotmail.com; user@hbase.apache.org
Subject: RE: corrupt .logs block
Hey -
Our table behaves fine until we try to do a mapreduce job that reads
.
--
From: Stack st...@duboce.net
Sent: Wednesday, May 25, 2011 10:03 AM
To: user@hbase.apache.org
Subject: Re: hbase hbck error
On Wed, May 25, 2011 at 9:18 AM, Jinsong Hu jinsong...@hotmail.com
wrote:
I tried several other non-hbase machines that have proper
Hi,
today I ran hbase hbck to check our production cluster and dev cluster;
the production cluster comes out clean, but
in our dev cluster I have seen more than 2K errors like this:
ERROR: Region
HEARTBEAT_MASTERPATCH,time\x09daily\x092010-08-15\x09uobkayhian_pr
us a clue.
J-D
On Mon, May 23, 2011 at 10:29 AM, Jinsong Hu jinsong...@hotmail.com
wrote:
Hi,
today I ran hbase hbck to check our production cluster and dev
cluster;
the production cluster comes out clean, but
in our dev cluster I have seen more than 2K errors like this:
ERROR: Region
You probably should stop all masters/regionservers, then start one master,
tail -f the log to confirm all the hlogs are handled,
then start the first regionserver, and then the other regionservers.
I have encountered this issue before.
HBase is not as good as you want, but not as bad as you
Hi, There:
We have a hadoop/hbase cluster with 6 regionservers, doubling as task trackers
and datanodes. They have 8G RAM and 4x0.5T disks.
I am using the cdh3b2 distribution.
I noticed that when the load is small, everything is happy. However, when we
push enough data continuously to hbase and
run
Are you running the task tracker and region server on the same machine?
Both are CPU intensive.
Jimmy
--
From: Bishal Acharya bacha...@veriskhealth.com
Sent: Thursday, September 23, 2010 10:11 PM
To: user@hbase.apache.org
Subject: How to manage
of the spectrum, and persistence may not be something you want.
- Andy
From: Jinsong Hu jinsong...@hotmail.com
Subject: lack of region merge cause in_memory option trouble
To: user@hbase.apache.org
Date: Friday, September 17, 2010, 2:53 PM
Hi,
I was trying to find out if HBase can be used in
real
seen the major_compaction gap to be 1 day or 8 days for the
same table.
Just FYI.
Jimmy.
--
From: Jinsong Hu jinsong...@hotmail.com
Sent: Thursday, September 16, 2010 10:31 AM
To: user@hbase.apache.org
Subject: Re: hbase doesn't delete data older
Hi,
I was trying to find out if HBase can be used in a real-time processing
scenario. In order to
do so, I set in_memory for a table to true, and set the TTL for the
table to 10 minutes.
The data comes in chronological order. I let the test run for 1 day.
The idea is that we are
On Wed, Sep 15, 2010 at 10:43 PM, Jinsong Hu jinsong...@hotmail.com
wrote:
Hi, Stack:
Thanks for the explanation. I looked at the code and it seems that the
old
region should get compacted
and data older than TTL will get removed. I will do a test with a table
with
10 min TTL, and insert
I have tested the TTL for HBase and found that it relies on compaction to
remove old data. However, if a region has data that is older
than the TTL, and there is no trigger to compact it, then the data will remain
there forever, wasting disk space and memory.
It appears that in this state, to really
, Jinsong Hu jinsong...@hotmail.com
wrote:
I have tested the TTL for HBase and found that it relies on compaction to
remove old data. However, if a region has data that is older
than the TTL, and there is no trigger to compact it, then the data will
remain
there forever, wasting disk space and memory
are tracking min/max timestamps in storefiles too, so it's possible
that we could expire some files of a region as well, even if the region
was not completely expired.
Jinsong, mind filing a jira?
JG
-Original Message-
From: Jinsong Hu [mailto:jinsong...@hotmail.com]
Sent: Wednesday
marker, then a compaction actually purges
the data itself.
-ryan
On Wed, Sep 15, 2010 at 11:26 AM, Jinsong Hu jinsong...@hotmail.com
wrote:
I opened a ticket https://issues.apache.org/jira/browse/HBASE-2999 to track
this issue: dropping the old store, and updating the adjacent region's key
range when all are removed, or not.
Jimmy.
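As a practical workaround until something like that lands, old data can be
forced out by triggering a major compaction by hand. From the HBase shell
(which is a JRuby console), a minimal sketch, assuming the shell's
major_compact command is available; the table name is taken from elsewhere
in this thread purely as an example:

  # Force a major compaction so cells past their TTL are actually rewritten
  # out of the store files; a region name can be passed instead of a table name.
  major_compact 'HEARTBEAT_CLUSTER'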
--
From: Stack st...@duboce.net
Sent: Wednesday, September 15, 2010 9:53 PM
To: user@hbase.apache.org
Subject: Re: hbase doesn't delete data older than TTL in old regions
On Wed, Sep 15, 2010 at 5:50 PM, Jinsong Hu jinsong
Hi,
I have noticed that my hbase client is complaining that
Trying to contact region server m0002036.ppops.net:60020 for region
HEARTBEAT_CLUSTER,,1279930490584.a84b7160318e91ef51c7efd215cd7e46., row
'time\x09weekly\x092010W30', but failed after 10 attempts.
I then go to the hbase master
I found out that you need to do this:
1. Stop the client, and wait long enough so that all the hlog records are in
the regions.
2. Remove the records in the .META. table. There is no utility for this, so I
wrote a program to do so (a sketch of that kind of program follows below).
3. Shut down hbase, then remove the /hbase/xxx directory from HDFS.
4.
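For step 2, a minimal JRuby sketch of the kind of one-off cleanup program
described there, run the same way add_table.rb is run (bin/hbase
org.jruby.Main remove_meta_rows.rb TABLENAME). The script name is made up and
the calls assume the 0.20-era client API, so treat it as a sketch rather than
a supported tool:

  # remove_meta_rows.rb -- hypothetical helper: delete every .META. row that
  # belongs to one table, so the table can be recreated cleanly.
  include Java
  import org.apache.hadoop.hbase.HBaseConfiguration
  import org.apache.hadoop.hbase.HConstants
  import org.apache.hadoop.hbase.client.HTable
  import org.apache.hadoop.hbase.client.Scan
  import org.apache.hadoop.hbase.client.Delete
  import org.apache.hadoop.hbase.util.Bytes

  table_name = ARGV[0]
  meta = HTable.new(HBaseConfiguration.new, HConstants::META_TABLE_NAME)

  # Region rows are keyed "tablename,startkey,timestamp", so scan from
  # "tablename," and stop once the prefix no longer matches.
  scanner = meta.getScanner(Scan.new(Bytes.toBytes(table_name + ",")))
  while (result = scanner.next)
    row = Bytes.toString(result.getRow)
    break unless row.index(table_name + ",") == 0
    meta.delete(Delete.new(result.getRow))
    puts "deleted #{row} from .META."
  end
  scanner.close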
I do a scan of .META.;
the table that I want to truncate has a lot of entries in .META.
Also, can this exception be avoided in the future when I have to truncate a
table?
-Original Message-
From: Jinsong Hu [mailto:jinsong...@hotmail.com]
Sent: Monday, September 13, 2010 1:26 PM
Hi,
I want to find out what the unit for TTL in HBase is. I googled around and
found some people saying it is microseconds,
and I thought it was milliseconds, as that is the Java default. Then I searched
the hbase code and saw some test code treating
the unit as seconds.
I used a TTL=60. If the
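For what it's worth, the unit is seconds. A minimal HBase shell sketch
matching the 10-minute, in_memory test described earlier (table and family
names here are made up):

  # TTL is given in seconds: 600 = 10 minutes.
  create 'ttl_test', {NAME => 'f', TTL => 600, IN_MEMORY => true}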
@hbase.apache.org
Subject: Re: thrift for hbase in CDH3 broken ?
Jinsong Hu wrote:
I tried, this doesn't work. I noticed
$transport->open();
is missing in this code, so I added it.
Yup. Sorry about that. Copy and paste error :(
following code first successfully prints all tables, then in the line
*never see* an expired cell. This does not mean it does not
still exist on disk, it means it will not be visible in user queries. On
a major compaction (default every 24 hours) HBase will actually delete the
expired cells.
JG
-Original Message-
From: Jinsong Hu [mailto:jinsong
.
Jimmy.
--
From: Stack st...@duboce.net
Sent: Wednesday, September 08, 2010 8:54 PM
To: user@hbase.apache.org
Subject: Re: how to remove dangling region and table?
On Wed, Sep 8, 2010 at 6:05 PM, Jinsong Hu jinsong...@hotmail.com wrote:
Hi,
I
?
Jimmy.
--
From: Igor Ranitovic irani...@gmail.com
Sent: Tuesday, September 07, 2010 8:18 PM
To: user@hbase.apache.org
Subject: Re: thrift for hbase in CDH3 broken ?
Jinsong Hu wrote:
I tried, this doesn't work. I noticed
$transport->open();
is missing
-client-examples - just wrote
this example and tested it in our cluster, works as expected.
For this to work you'd need to install rubygems and thrift gem (gem
install thrift).
On Fri, Sep 3, 2010 at 12:01 AM, Jinsong Hu jinsong...@hotmail.com
wrote:
Can you send me some ruby test code so I can try
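A minimal Ruby Thrift client along those lines might look like the sketch
below; the generated module name (Apache::Hadoop::Hbase::Thrift) follows the
stock Hbase.thrift namespace, and the host/port assume a local Thrift gateway
on the default 9090, so adjust both to your setup:

  require 'rubygems'
  require 'thrift'
  require 'hbase'    # stubs generated with: thrift --gen rb Hbase.thrift

  # Connect to the HBase Thrift gateway and list the tables.
  transport = Thrift::BufferedTransport.new(Thrift::Socket.new('localhost', 9090))
  protocol  = Thrift::BinaryProtocol.new(transport)
  client    = Apache::Hadoop::Hbase::Thrift::Hbase::Client.new(protocol)

  transport.open       # the missing open() call was the bug discussed above
  client.getTableNames.each { |t| puts t }
  transport.close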
12:31 AM
To: user@hbase.apache.org
Subject: Re: thrift for hbase in CDH3 broken ?
Yes, CentOS 5.5 + CDH3b2
On Fri, Sep 3, 2010 at 3:26 AM, Jinsong Hu jinsong...@hotmail.com wrote:
Are you using the CDH3 distribution?
Jinsong
--
From: Alexey Kovyrin
I noticed that CDH3 has 2 executables:
/usr/bin/hbase
/usr/lib/hbase/bin/hbase
I compared them and they are different. It turns out that if I run
/usr/bin/hbase shell
and then list tables, it works, but if I run
/usr/lib/hbase/bin/hbase shell
and list tables, it freezes. In the next
Hi, There,
I am trying to test and see if thrift for hbase works. I followed the
example from
http://www.workhabit.com/labs/centos-55-and-thriftscribe
http://incubator.apache.org/thrift/
http://wiki.apache.org/hadoop/Hbase/ThriftApi
and wrote test code: I found that client.getTableNames();
of HBase.
You can verify the ports using a command like /sbin/fuser -n tcp 9090 to
see which pid has it open, then cross reference against sudo jps.
Thanks
-Todd
On Thu, Sep 2, 2010 at 4:40 PM, Jinsong Hu jinsong...@hotmail.com wrote:
Hi, There,
I am trying to test and see if thrift for hbase
and
can go higher if necessary.
JG
-Original Message-
From: Jinsong Hu [mailto:jinsong...@hotmail.com]
Sent: Friday, August 27, 2010 10:03 AM
To: user@hbase.apache.org
Subject: how many regions a regionserver can support
Hi, There:
Does anybody know how many regions a regionserver
Hi, Team:
I have noticed that truncating/dropping a table with a large amount of data
fails and actually corrupts hbase. In the worst case, we can't even
create a table with the same name any more, and I was forced to dump the
whole set of hbase records and recreate all tables again.
I noticed there
). When trying to reach high density on your nodes, be sure
to compress your data and set the split size bigger than the default
of 256MB or you'll end up with too many regions.
J-D
On Wed, Sep 1, 2010 at 11:21 AM, Jinsong Hu jinsong...@hotmail.com
wrote:
I did a test with 6 regionservers
-D
On Wed, Sep 1, 2010 at 11:28 AM, Jinsong Hu jinsong...@hotmail.com
wrote:
Hi, Team:
I have noticed that truncating/dropping a table with a large amount of data
fails
and actually corrupts hbase. In the worst case, we can't even
create a table with the same name any more, and I was forced
Hi, There:
Does anybody know how many regions a regionserver can support? I have
regionservers with 8G RAM, 1.5T disk, and 4-core CPUs.
I searched http://www.facebook.com/note.php?note_id=142473677002 and they
say Google's target is 100 regions of 200M each per
regionserver.
In my case, I
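As a rough back-of-the-envelope check of that target, the region count is
roughly the data held per regionserver divided by the split size; all the
numbers below are illustrative assumptions, not measurements from this thread:

  # Why the default split size leads to "too many regions" on dense nodes.
  data_per_node_mb = 500 * 1024          # assume ~500G of store data per regionserver
  split_size_mb    = 256                 # default max region size in this era
  puts data_per_node_mb / split_size_mb  # => 2000 regions, far above the ~100-region target
  # Raising the split size (e.g. to 1G) is what brings this back toward ~500.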
Hi, There:
does anybody know of a good combination of CentOS version and JDK version
that works stably? I am using CentOS version
Linux 2.6.18-194.8.1.el5.centos.plus #1 SMP Wed Jul 7 11:45:38 EDT 2010
x86_64 x86_64 x86_64 GNU/Linux
jdk version
Java(TM) SE Runtime Environment (build
Hi, There:
I am using the cloudera cdh3 regionserver and today I noticed that one of the
regionservers' memory usage is very high:
request=8.8, regions=60, stores=61, storefiles=67, storefileIndexSize=142,
memstoreSize=58, compactionQueueSize=0, usedHeap=5869, maxHeap=6127,
can post more of your GC log.
On Mon, Aug 9, 2010 at 3:02 PM, Jinsong Hu jinsong...@hotmail.com wrote:
Hi, There:
I am using the cloudera cdh3 regionserver and today I noticed that one of
the
regionservers' memory usage is very high:
request=8.8, regions=60, stores=61, storefiles=67
Hi, There:
I got some YouAreDeadExceptions with hbase. What can cause them? I do
notice that between 5:49 and 5:53,
for 4 minutes, there is no log. This doesn't look like a GC issue, as I checked
the GC log; the longest GC
is only 9.6 seconds.
Jimmy.
2010-07-16 05:49:26,805 DEBUG
session but has yet to realize it.
St.Ack
On Thu, Jul 15, 2010 at 11:52 PM, Jinsong Hu jinsong...@hotmail.com
wrote:
Hi, There:
I got some YouAreDeadExceptions with hbase. What can cause them? I do
notice
that between 5:49 and 5:53,
for 4 minutes, there is no log. This doesn't look like a GC issue, as I
-XX:NewSize=6m -XX:MaxNewSize=6m, plus the CMS options from
above and the GC logging options from above,
from our wiki about performance tuning.
On Fri, Jul 16, 2010 at 1:50 PM, Jinsong Hu jinsong...@hotmail.com
wrote:
I was doing stress testing, so the load is not small. But I purposely
limited the data rate on client
side
Hi, Todd:
I downloaded hadoop-0.20.2+320 and hbase-0.89.20100621+17 from CDH3 and
inserted data at full load; after a while the hbase regionserver crashed.
I checked the system with iostat -x 5 and noticed the disk is pretty busy.
Then I modified my client code and reduced the insertion rate by
, Jul 13, 2010 at 2:49 PM, Jinsong Hu jinsong...@hotmail.com
wrote:
Hi, Todd:
I downloaded hadoop-0.20.2+320 and hbase-0.89.20100621+17 from CDH3 and
inserted data at full load; after a while the hbase regionserver
crashed.
I checked the system with iostat -x 5 and noticed the disk is pretty busy
around that.
J-D
On Tue, Jul 13, 2010 at 2:49 PM, Jinsong Hu jinsong...@hotmail.com
wrote:
Hi, Todd:
I downloaded hadoop-0.20.2+320 and hbase-0.89.20100621+17 from CDH3
and
inserted data at full load; after a while the hbase regionserver
crashed.
I checked the system with iostat -x 5
HFileOutputFormat?
Thx,
J-D
On Thu, Jul 1, 2010 at 11:52 AM, Jinsong Hu jinsong...@hotmail.com
wrote:
Hi, Sir:
I am using hbase 0.20.5 and this morning I found that 3 of my region
servers ran out of memory.
Each regionserver is given 6G of memory, and on average I have 653
regions
in total
1, 2010 at 5:01 PM, Jinsong Hu jinsong...@hotmail.com wrote:
Hi, Jean:
Thanks! I will run the add_table.rb and see if it fixes the problem.
Our namenode is backed up with HA and DRBD, and the hbase master
machine
is colocated with the namenode and job tracker, so we are not wasting resources
the master from assigning -ROOT-, it should be pretty evident
by looking at the master log.
J-D
On Thu, Jul 1, 2010 at 5:23 PM, Jinsong Hu jinsong...@hotmail.com wrote:
After I ran add_table.rb, I refreshed the master's UI page, and then
clicked on the table to show the regions. I expect
your logfile say? Search for the string (case
insensitive) blocking updates...
-ryan
On Wed, Jun 9, 2010 at 11:52 AM, Jinsong Hu jinsong...@hotmail.com
wrote:
I made this change
<property>
<name>hbase.hstore.blockingStoreFiles</name>
<value>15</value>
</property>
the system is still slow.
Here
, Jinsong Hu jinsong...@hotmail.com
wrote:
I checked the log, there are lots of
size 128.1m is >= than blocking 128.0m size
2010-06-09 17:26:36,736 INFO org.apache.hadoop.hbase.regionserver.HRegion:
Blocking updates for 'IPC Server handler 8 on 60020' on region
Spam_MsgEventTable,2010-06-09 05:25:32
--
From: Jinsong Hu jinsong...@hotmail.com
Sent: Wednesday, June 09, 2010 1:59 PM
To: user@hbase.apache.org
Subject: Re: ideas to improve throughput of the hbase writing
Thanks. I will make this change:
<property>
<name>hbase.hregion.memstore.block.multiplier</name>
<value>8</value>
</property>
<property>
config. The regionserver has 4G RAM. What else can be
wrong?
The insertion rate is still not good.
Jimmy.
--
From: Jinsong Hu jinsong...@hotmail.com
Sent: Wednesday, June 09, 2010 1:59 PM
To: user@hbase.apache.org
Subject: Re: ideas to improve
is not keeping up with your rate of data input. How big
are your records? What is your target input speed? Have you done
anything on this page:
http://wiki.apache.org/hadoop/PerformanceTuning
On Wed, Jun 9, 2010 at 4:58 PM, Jinsong Hu jinsong...@hotmail.com wrote:
My hardware has 2 disks. I