Thanks J-D, I have opened a jira here:
https://issues.apache.org/jira/browse/HBASE-3826
On Thu, Apr 28, 2011 at 12:55 AM, Jean-Daniel Cryans jdcry...@apache.org wrote:
That makes sense, would you mind opening a jira?
Thx,
J-D
On Tue, Apr 26, 2011 at 8:52 PM, Schubert Zhang zson
I have a busy region, and there are 43 StoreFiles (
compactionThreshold=8) in this region.
Now, I stopped the client and stopped put new data into it. I expect
these StoreFiles to be compacted later.
But, one day after, these 43 StoreFiles are still there.
It seems the compaction does not work.
Now we have 19 storefiles; the minor compaction has stopped.
I think that when a minor compaction completes, it should check whether the number of
storefiles is still large, and if so, another minor compaction should start
continuously.
Schubert Zhang
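The behavior suggested above can be sketched as a simple loop: after each minor compaction finishes, re-check the storefile count and compact again until it falls below the threshold. This is a hypothetical simulation of the proposal, not HBase's actual compaction code; the class and method names are invented, and it assumes for illustration that a minor compaction merges `threshold` files into one.

```java
// Hypothetical sketch of the proposed re-check loop (NOT real HBase code).
// Assumption for illustration: one minor compaction merges `threshold`
// storefiles into a single new file.
class CompactionLoopSketch {

    // One simulated minor compaction: `threshold` files become one.
    static int compactOnce(int storefiles, int threshold) {
        return storefiles - threshold + 1;
    }

    // Keep compacting until the storefile count drops below the threshold,
    // instead of stopping after a single pass.
    static int compactUntilQuiet(int storefiles, int threshold) {
        while (storefiles >= threshold) {
            storefiles = compactOnce(storefiles, threshold);
        }
        return storefiles;
    }
}
```

Under these assumptions, starting from the 43 storefiles reported above with compactionThreshold=8, the loop keeps running until the region is below the threshold, rather than leaving 19 files behind after one pass.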
On Wed, Apr 27, 2011 at 9:32 AM, Schubert Zhang zson...@gmail.com
We should determine why a metaScan is done for each submitted batch of puts.
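If the client really does scan .META. on every submit, one plausible fix is client-side caching of region locations, so .META. is only consulted on a cache miss. The sketch below simulates that idea with invented names; it is not the real HBase HConnection code, just an illustration of why caching bounds the number of meta lookups.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: cache region locations client-side so that repeated
// puts do not trigger a .META. lookup each time (names are invented).
class LocationCacheSketch {
    private final Map<String, String> cache = new HashMap<>();
    int metaScans = 0; // counts simulated .META. lookups

    String locateRegion(String regionKey) {
        String location = cache.get(regionKey);
        if (location == null) {
            metaScans++;                                // simulated .META. scan
            location = "regionserver-for-" + regionKey; // pretend lookup result
            cache.put(regionKey, location);
        }
        return location;
    }
}
```

With the 64 regions mentioned below, a cache like this performs at most 64 meta lookups no matter how many puts are submitted, whereas a lookup per submit would grow with the put rate and could explain the high client CPU reported in this thread.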
On Thu, Feb 24, 2011 at 11:53 AM, Schubert Zhang zson...@gmail.com wrote:
Currently, with 0.90.1, this issue happens when there are only 8 regions on
each RS, 64 regions in total across the 8 RSs.
The CPU% of the client is very high.
On Sat, Jan 29, 2011 at 1:02 AM, Stack st...@duboce.net wrote:
On Thu, Jan 27, 2011 at 10:33 PM, Schubert Zhang zson...@gmail.com wrote:
1. The .META. table seems ok
I can read my data table (one thread for reading).
I can use hbase shell to scan my data table.
And I can use
On Thu, Feb 24, 2011 at 10:55 AM, Schubert Zhang zson...@gmail.com wrote:
Now I am trying 0.90.1, but this issue is still
at com.bigdata.bench.hbase.HBaseWriter$Operator.run(Unknown Source)
On Thu, Jan 27, 2011 at 12:06 AM, Schubert Zhang zson...@gmail.com wrote:
Even though I cannot put more data into the table, I can read the existing
data.
And I stopped and restarted HBase, but still cannot put
I am using 0.90.0 (8 RS + 1 Master)
and the HDFS is CDH3b3
During the first hours of running, I put many entities (tens of millions,
each about 200 bytes), and it worked well.
But then the client could not put more data.
I checked all the HBase log files; nothing abnormal was found. I will continue to
check.
-cloud:60020 1296057544138
requests=0, regions=32, usedHeap=126, maxHeap=8973
0 dead servers
Aggregate load: 174, regions: 255
On Wed, Jan 26, 2011 at 11:58 PM, Schubert Zhang zson...@gmail.com wrote:
I am using 0.90.0 (8 RS + 1 Master)
and the HDFS is CDH3b3
During the first hours
the HBase when
create htable.
Schubert Zhang