I see what you are saying, that's interesting. As Mahadev mentioned on
another thread, we'd be interested to look at the JNI you've done.
Perhaps we will need to include this as an option (and document where
it's necessary, etc...) for other users who might run into this.
Thanks much for bringing this to our attention!
Joey Echeverria wrote:
Nitay is correct about the native threads. Using the pure Java API,
the garbage collector will occasionally pause other Java threads to do
a full mark and sweep. Even switching to the concurrent collector only
delays the problem. The issue is mixing a high-throughput application
(HBase) with a low-latency library (ZooKeeper). Systems like HBase
live on relatively large numbers of short-lived objects. You only keep
keys and values long enough for the Memcache to fill up, then you
write all the data to HDFS and throw away the objects.
You can patch around the issue with object pools, but ultimately you
need to insulate zk from the GC pauses. In our experience, the best
way to do that was a jni wrapper around the zk C api. Since the C api
uses its own POSIX threads, it's protected from the GC. In the system
we wrote, we ended up using the Java API with a large session timeout
for most everything, and used the JNI code just for creating ephemeral
nodes.
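The object-pool workaround Joey mentions can be sketched roughly like this — a generic pool, not HBase's actual implementation, just an illustration of reusing instances so the collector has less short-lived garbage to chase:

```java
import java.util.ArrayDeque;
import java.util.function.Supplier;

// A minimal object pool: callers reuse instances instead of allocating
// fresh short-lived objects, which reduces garbage-collector pressure.
class SimplePool<T> {
    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    SimplePool(Supplier<T> factory) {
        this.factory = factory;
    }

    // Hand out a recycled instance if one is available, else make a new one.
    T acquire() {
        T obj = free.poll();
        return obj != null ? obj : factory.get();
    }

    // Return an instance to the pool instead of letting the GC reclaim it.
    void release(T obj) {
        free.push(obj);
    }
}
```

As Joey notes, this only patches around the problem — the pauses still happen, they just happen less often.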
On Wed, Apr 8, 2009 at 9:35 PM, Nitay <nit...@gmail.com> wrote:
The default session timeout in HBase is currently 10 seconds. Bumping it up
to 30 and then 60 seconds reduced SessionExpired exceptions, according to
Andrew. I
believe Andrew did run it under jconsole. He was also tuning GC parameters.
He mentioned running using incremental garbage collector
(-XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode). He can provide more
details on all of this.
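For reference, the flags Andrew was experimenting with correspond to a launch command roughly like the following (only the two -XX flags come from his report; the classpath and main class are illustrative placeholders, not his actual setup):

```shell
# CMS collector with incremental mode, as described above.
# Everything after the flags is an illustrative placeholder.
java -XX:+UseConcMarkSweepGC \
     -XX:+CMSIncrementalMode \
     -cp hbase.jar:zookeeper.jar \
     org.apache.hadoop.hbase.regionserver.HRegionServer
```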
My understanding with HBASE-1316 is that it solves the problem because the
ZooKeeper IO/heartbeat thread becomes an OS-level thread which is not managed
by Java. Hence, the GC does not starve it. Joey can comment here as he
developed the solution.
There are three main components that use ZooKeeper in HBase: the client,
the regionserver, and the master.
The client does not have ephemeral nodes, so having something like
ZOOKEEPER-321 for it would be nice. It is currently read-only. For now,
recovering it by reinitializing the ZooKeeper handle is not a big deal.
The bigger issue is with the master and regionserver, which do use ephemeral
nodes. Recovering them is a bit tougher, and we'd like to prevent getting
SessionExpired as much as possible.
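The recovery the master and regionserver would need looks roughly like the sketch below. This is a pure-Java mock: FakeZk and its methods are stand-ins for the real org.apache.zookeeper.ZooKeeper handle, not actual HBase code. The point it illustrates is that after SessionExpired the old handle is dead and all ephemeral nodes are gone, so recovery means a brand-new handle plus re-registration:

```java
import java.util.HashSet;
import java.util.Set;

// Stand-in for a ZooKeeper handle (hypothetical, for illustration only).
class FakeZk {
    final Set<String> ephemerals = new HashSet<>();
    void createEphemeral(String path) { ephemerals.add(path); }
}

// A cluster member (e.g. a regionserver) that announces itself
// via an ephemeral node.
class Member {
    private final String myPath;
    private FakeZk zk;

    Member(String path) {
        this.myPath = path;
        reconnect();
    }

    // After session expiry the ephemeral node is gone; the only fix
    // is a fresh handle and re-creating the node under it.
    final void reconnect() {
        zk = new FakeZk();
        zk.createEphemeral(myPath);
    }

    void handleExpired() {
        reconnect();  // old handle is unusable once the session expired
    }

    boolean registered() { return zk.ephemerals.contains(myPath); }
    FakeZk handle() { return zk; }
}
```

The awkward part in practice, and the reason to avoid SessionExpired in the first place, is everything that can happen between the node disappearing and the re-registration completing.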
On Wed, Apr 8, 2009 at 1:17 PM, Patrick Hunt <ph...@apache.org> wrote:
What are you running for a session timeout on your clients?
Can you run with something like jvisualvm or jconsole, and watch the gc
activity when the session timeouts occur? Might give you some insight.
Have you tried one of the alternative GC's available in the VM?
i.e. "Flags for Latency Applications"
We are also working on the following jira:
which will eliminate session expirations for clients w/o ephemerals. (is
this the case for you?)
Try turning on debug in your client, the client will spit out:
LOG.debug("Got ping response for sessionid:0x"
If you turn on trace logging in the server you should see session updates
there as well (client->server traffic, which controls session expiration).
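The debug/trace switches described here are log4j settings. Assuming the standard log4j.properties setup both projects ship, something along these lines should do it (category names per ZooKeeper's usual log4j configuration; verify against your version):

```properties
# Client side: DEBUG on the client connection class shows the
# "Got ping response for sessionid" messages.
log4j.logger.org.apache.zookeeper.ClientCnxn=DEBUG

# Server side: TRACE on the server packages shows per-session activity.
log4j.logger.org.apache.zookeeper.server=TRACE
```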
re HBASE-1316 - how does the jni c wrapper fix this? Isn't the code still
running w/in the same (vm) process?
Unfortunately I can't think of anything else if it is the GC. Basically
you'd have to increase the timeout or try another gc with lower latency.
Perhaps Mahadev/Ben/Flavio might have insight...
We've recently replaced a few pieces of HBase's cluster management and
coordination with ZooKeeper. One of our guys, Andrew Purtell, has a cluster
he throws a lot of load at. Andrew's cluster was getting a lot of
SessionExpired events which were causing some havoc. After some discussion
on the hbase list and additional testing by Andrew (tweaking things like
session timeout, quorum size, and GC used), we suspect the problem is that
the Java GC is starving the ZooKeeper heartbeat thread of CPU time.
There is a JIRA open on the matter where Joey suggests a solution that has
worked for him:
We wanted to loop you guys in to see if you have any thoughts/suggestions.