Hm, thanks, I'll check.
2015-01-06 23:31 GMT+03:00 Stack :
> The threads that are sticking around are tomcat threads out of a tomcat
> executor pool. IIRC, your server has high traffic. The pool is running up
> to 800 connections on occasion and taking a while to die back down?
> Googling, seem
Hi Nick,
This is output of the command "java -version"
$ java -version
java version "1.7.0_60"
Java(TM) SE Runtime Environment (build 1.7.0_60-b19)
Java HotSpot(TM) 64-Bit Server VM (build 24.60-b09, mixed mode)
We have been using G1GC for quite a long time.
On Wed, Jan 7, 2015 at 1:19 PM, Ni
I don't think there are many folks using G1 in production for HBase yet. Out
of curiosity, what JVM and version are you using? I heard G1 got much
better somewhere after 1.7u60, though I don't have personal experience with
it.
On Tuesday, January 6, 2015, Shuai Lin wrote:
> Cool, how can I get a g
Cool, how can I get a graph like that?
On Wed, Jan 7, 2015 at 4:06 AM, Otis Gospodnetic wrote:
> Hi,
>
> The first thing I'd want to know is which memory pool is getting filled.
> There are several in the JVM.
> Here's an example: https://apps.sematext.com/spm-reports/s/kZgBWLsJRd
> (this
> one
Yeah, I've read bunches of articles on java GC, including the famous one
you mentioned. We don't pass any specific GC params to JVM, except the
"-XX:+UseG1GC" flag.
On Wed, Jan 7, 2015 at 2:57 AM, Stack wrote:
> On Tue, Jan 6, 2015 at 3:32 AM, Shuai Lin wrote:
>
> > Hi all,
> >
> > We have a hb
Yeah, I know a heap dump would work, but I'm a little worried about dumping
22GB of data on a production server, since it could take quite a while and
make recovery slower.
On Wed, Jan 7, 2015 at 10:51 AM, 谢良 wrote:
> Could you retry with "-XX:+HeapDumpOnOutOfMemoryError"?
> the hea
I just pushed a feature branch for an AsciiDoc POC, called HBASE-11533. See
the JIRA too. Please check it out.
You can do mvn clean pre-site, and look at the docs in target/docs. You
will get a failure but that's nothing to do with this. If you run the
following, you should get no failure, and you
Thanks Ted, I didn't notice that ;P
2015-01-07 11:47 GMT+08:00 Ted Yu :
> In master and branch-1 branches, there is no 'GetResponse get()' method in
> HRegionServer anymore.
>
> FYI
>
> On Tue, Jan 6, 2015 at 7:26 PM, Bi,hongyu—mike wrote:
>
> > Hi Ted,
> >
> > KeyOnlyFilter may improve the sca
In master and branch-1 branches, there is no 'GetResponse get()' method in
HRegionServer anymore.
FYI
On Tue, Jan 6, 2015 at 7:26 PM, Bi,hongyu—mike wrote:
> Hi Ted,
>
> KeyOnlyFilter may improve the scan speed but I don't think the scan can
> finish within the leaseTimeout in such a case;
> From
Hi Ted,
KeyOnlyFilter may improve the scan speed but I don't think the scan can
finish within the leaseTimeout in such a case;
From the HRegionServer#get I see:
HRegion region = getRegion(regionName); here getRegion may throw
NotServingRegionException, which is needed by isSuccessfulScan;
and HRegio
For isSuccessfulScan(), I see:
scan.setBatch(1)
scan.setCaching(1)
scan.setFilter(FirstKeyOnlyFilter.new())
How about adding a KeyOnlyFilter as well ?
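One quick way to try that filter combination is from the hbase shell, using its filter-language strings. A sketch; 'T' and 'Rowx' are the placeholder table and row names used elsewhere in this thread:

```
# Hypothetical probe in the hbase shell: at most one row, first KV only,
# with values stripped by KeyOnlyFilter. 'T' and 'Rowx' are placeholders.
scan 'T', {STARTROW => 'Rowx', LIMIT => 1,
           FILTER => "FirstKeyOnlyFilter() AND KeyOnlyFilter()"}
```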
On Tue, Jan 6, 2015 at 6:37 PM, Bi,hongyu—mike wrote:
> Thanks Ted,
> Finally I resolved the issue, the RC is: region_mover will call
> i
Could you retry with "-XX:+HeapDumpOnOutOfMemoryError"?
The heap dump will make things clear.
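A hedged sketch of how that flag might be added in hbase-env.sh; the dump path is an illustrative example, not from this thread:

```shell
# Sketch: dump the heap on OOME so the retained objects can be inspected
# offline (e.g. with jmap or MAT). The dump path is an example.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/var/log/hbase/rs-heap.hprof"
```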
From: Shuai Lin
Sent: January 6, 2015 19:32
To: user@hbase.apache.org
Subject: Region Server OutOfMemory Error
Hi all,
We have a hbase cluster of 5 region servers, each
Thanks Ted,
Finally I resolved the issue, the RC is: region_mover will call
isSuccessfulScan to scan the start key of the moved region, which is filled
with lots of expired cells, so the scan seems to hang;
I think isSuccessfulScan is just meant to test whether the moved region is
readable or not, so why not use g
The threads that are sticking around are tomcat threads out of a tomcat
executor pool. IIRC, your server has high traffic. The pool is running up
to 800 connections on occasion and taking a while to die back down?
Googling, seems like this issue comes up frequently enough. Try it
yourself. If you
Hi, yes, it was me.
I've followed the advice; ZK connections on the server side are stable.
Here is current state of Tomcat:
http://bigdatapath.com/wp-content/uploads/2015/01/002_jvisualvm_summary.png
There are more than 800 threads (including daemon threads).
and the state of three ZK servers:
http://bigdatapat
Hi,
The first thing I'd want to know is which memory pool is getting filled.
There are several in the JVM.
Here's an example: https://apps.sematext.com/spm-reports/s/kZgBWLsJRd (this
one is actually from an HBase cluster). If you see any of the lines at
100% that's potential trouble. If it stays
The call-for-papers for hbasecon2015 has been posted: see
http://www.hbasecon.com/ Submit a talk on any topic you think your
fellow-hbasers would be interested in hearing about. Your program committee
are game for anything, from the mundane to "crazy pants". Ping one of the
program committee if you
On Tue, Jan 6, 2015 at 3:32 AM, Shuai Lin wrote:
> Hi all,
>
> > We have an hbase cluster of 5 region servers, each hosting 60+
> > regions.
> >
> > But under heavy load the region servers crash with an OOME now and then:
>
>
What exact message do you get when the OOME happens? What GC params do you
p
Thanks Stack,
Will check the logs for reason. I'm only disabling compaction during
dynamic splits (~10 mins), so it's acceptable in my case.
Thanks,
Jianshi
On Wed, Jan 7, 2015 at 1:37 AM, Stack wrote:
> On Mon, Jan 5, 2015 at 11:00 PM, Jianshi Huang
> wrote:
>
> > Hi,
> >
> > Firstly, I foun
On Tue, Jan 6, 2015 at 4:52 AM, Serega Sheypak
wrote:
> yes, one of them (a random one) gets more connections than the others.
>
> 9.3.1.1 Is OK.
> I have 1 HConnection per logical module per application and each
> ServletRequest gets its own HTable. HTable is closed each time after the
> ServletRequest is done.
On Mon, Jan 5, 2015 at 11:00 PM, Jianshi Huang
wrote:
> Hi,
>
> Firstly, I found it strange that when I added a new split to a table and do
> admin.move, it will trigger a MAJOR compaction for the whole table.
>
Usually, a compaction says in the log what provoked it and why it is a major
compactio
Ah! I need to run
admin.modifyTable(tableNameBytes, tableDescriptor)
Will try it soon...
Jianshi
On Tue, Jan 6, 2015 at 11:12 PM, Ted Yu wrote:
> This is what setCompactionEnabled() does:
>
> public HTableDescriptor setCompactionEnabled(final boolean isEnable) {
>
> setValue(COMPACTION_E
This is what setCompactionEnabled() does:
public HTableDescriptor setCompactionEnabled(final boolean isEnable) {
  setValue(COMPACTION_ENABLED_KEY, isEnable ? TRUE : FALSE);
  return this;
}
FYI
On Mon, Jan 5, 2015 at 11:00 PM, Jianshi Huang
wrote:
> Hi,
>
> Firstly, I found it strange t
Can you pastebin region server log ?
When the scan is being performed, can you get jstack and pastebin it ?
0.94.15 is an old release; any chance of an upgrade?
Thanks
> On Jan 6, 2015, at 2:34 AM, Bi,hongyu—mike wrote:
>
> Sorry, forgot to attach the version: 0.94.15;
>
> and i call compa
Yes, one of them (a random one) gets more connections than the others.
9.3.1.1 Is OK.
I have 1 HConnection per logical module per application and each
ServletRequest gets its own HTable. HTable is closed each time after the
ServletRequest is done. HConnection is never closed.
2015-01-05 21:22 GMT+03:00 Ted Yu :
Forgot to mention, we are using hbase 0.94.15.
On Tue, Jan 6, 2015 at 7:32 PM, Shuai Lin wrote:
> Hi all,
>
> We have an hbase cluster of 5 region servers, each hosting 60+
> regions.
>
> But under heavy load the region servers crash with an OOME now and then:
>
> #
> # java.lang.OutOfMemoryE
Hi all,
We have an hbase cluster of 5 region servers, each hosting 60+
regions.
But under heavy load the region servers crash with an OOME now and then:
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="kill -9 %p"
# Executing /bin/sh -c "kill -9 16820"...
We have m
Sorry, forgot to attach the version: 0.94.15;
and calling compact (as well as flushing the region many times) from the
hbase shell didn't take effect; no compaction happened;
2015-01-06 18:26 GMT+08:00 Bi,hongyu—mike :
> scan debug log:
> 15/01/06 18:20:56 DEBUG client.ClientScanner: Creating scanner over
scan debug log:
15/01/06 18:20:56 DEBUG client.ClientScanner: Creating scanner over T
starting at key 'Rowx'
15/01/06 18:20:56 DEBUG client.ClientScanner: Advancing internal scanner to
startKey at 'Rowx'
15/01/06 18:20:56 DEBUG client.MetaScanner: Scanning .META. starting at
row= for max=10 row
write traffic is ok:
2015-01-06 17:46:01,127 WARN org.apache.hadoop.hbase.ipc.SecureServer:
(responseTooSlow): {"processingtimems":68,"call":"multi(Region=Rx of 149
actions and first row key= Rowx), rpc version=1, client version=29,
methodsFingerPrint=-1105746420","client":"IP:port}
scan on that r
Hi all,
There's one region which can take write requests but not scans;
if I scan on that region I'll get a scanner lease timeout (60s by
default), while I can scan other regions of the same table and get the
result in less than 10ms (our slow RPC threshold is 10ms);
hbck reports OK, and I used "hbase hfile" t
Phoenix currently isn't supported with HBase 0.96.x.
Phoenix 2.x - HBase 0.94.x
Phoenix 3.x - HBase 0.94.x
Phoenix 4.x - HBase 0.98.1+
However, if you google around you'll find that some people have gotten
Phoenix 4.x working with 0.96.
https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-us
Dear all,
Which Phoenix version should I choose (based on Hadoop 2.2.0 and HBase
0.96.2)? Thank you very much!
Best regards!
jackiehbaseu...@126.com