On Mon, Jul 6, 2009 at 12:58 PM, Gustavo Niemeyer <gust...@niemeyer.net> wrote:

> > can make the ZK servers appear a bit less connected.  You have to plan for
> > ConnectionLoss events.
>
> Interesting.


Note that most of these seem to be related to client-side issues, especially
GC.  If you configure your clients in a way that allows long GC pauses, you
will see connection loss.  The default ZK configuration uses a pretty short
session timeout (5 seconds), which is easy to exceed with out-of-the-box GC
parameters on the client side.
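
To make that concrete, here is a minimal sketch (my illustration for this
thread, not code from our deployment) of the two knobs involved: asking for a
longer session timeout in the ZooKeeper constructor (the server may still cap
the negotiated value) and treating ConnectionLoss on individual operations as
something to retry.  The 30-second timeout and the retry/backoff numbers are
placeholders, not recommendations.

    import org.apache.zookeeper.*;
    import org.apache.zookeeper.data.Stat;

    public class PatientClient implements Watcher {
        private final ZooKeeper zk;

        public PatientClient(String hosts) throws Exception {
            // Ask for a session timeout well above the 5s default so a
            // client-side GC pause is less likely to expire the session.
            zk = new ZooKeeper(hosts, 30000, this);
        }

        public void process(WatchedEvent event) {
            // Disconnected is recoverable (the library reconnects on its own);
            // Expired means the session is gone and the handle must be rebuilt.
            if (event.getState() == Event.KeeperState.Disconnected) {
                System.out.println("connection lost; waiting for reconnect");
            } else if (event.getState() == Event.KeeperState.Expired) {
                System.out.println("session expired; need a new ZooKeeper handle");
            }
        }

        public byte[] readWithRetry(String path) throws Exception {
            for (int attempt = 0; attempt < 5; attempt++) {
                try {
                    return zk.getData(path, false, new Stat());
                } catch (KeeperException.ConnectionLossException e) {
                    Thread.sleep(1000);  // back off and let the client reconnect
                }
            }
            throw new KeeperException.ConnectionLossException();
        }
    }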


> > c) for highest reliability, I switched to large instances.  On reflection, I
> > think that was helpful, but less important than I thought at the time.
>
> Besides the fact that there are more resources for ZooKeeper, this
> likely helps as well because it reduces the number of systems
> competing for the real hardware.


Yes, but I think that this is less significant than I expected.  Small
instances have pretty dedicated access to their core.  Disk contention is a
bit of an issue, but not much.


> > d) increasing and decreasing cluster size is nearly painless and is easily
> > scriptable.  To decrease, do a rolling update on the survivors to update
> > (...)
>
> Quite interesting indeed.  I guess the work that Henry is pushing on
> these couple of JIRA tickets will greatly facilitate this.


Absolutely.  Even so, I was surprised at how little pain there is in the
current world.


> Do you have any kind of performance data about how much load ZK can
> take under this environment?


Only barely.  Partly with an eye toward system diagnostics, and partly to give
ZK something to do, I reported a wide swath of data from /proc into ZK every
few seconds for all of my servers.  This led to a few dozen transactions per
second and ultimately helped me discover and understand some of the connection
issues for clients.
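
For a sense of what that reporter amounted to, here is a rough sketch of the
idea (a reconstruction for illustration, not the actual script; the znode
path, the /proc file, and the five-second interval are made-up stand-ins):

    import org.apache.zookeeper.*;
    import java.nio.file.*;

    public class ProcReporter {
        public static void main(String[] args) throws Exception {
            String host = java.net.InetAddress.getLocalHost().getHostName();
            String path = "/metrics/" + host;  // assumes /metrics already exists
            ZooKeeper zk = new ZooKeeper(args[0], 30000, new Watcher() {
                public void process(WatchedEvent event) { }
            });

            while (true) {
                byte[] sample = Files.readAllBytes(Paths.get("/proc/loadavg"));
                try {
                    // version -1 means "any version": last writer wins
                    zk.setData(path, sample, -1);
                } catch (KeeperException.NoNodeException e) {
                    zk.create(path, sample, ZooDefs.Ids.OPEN_ACL_UNSAFE,
                              CreateMode.PERSISTENT);
                } catch (KeeperException.ConnectionLossException e) {
                    // expected now and then in EC2; just try again next cycle
                }
                Thread.sleep(5000);
            }
        }
    }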

ZK seemed pretty darned stable through all of this.

The only instability that I saw was caused by excessive amounts of data in ZK
itself.  As I neared the (small) amount of memory I had allocated for ZK use,
I would see servers go into paroxysms of GC, but cluster functionality was
impaired surprisingly little.
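
For anyone who wants to see this coming, the "stat" four-letter command is
enough to watch node count and latency creep up before the GC thrashing
starts.  A rough sketch (host, port, and polling interval are illustrative,
not from my setup):

    import java.io.*;
    import java.net.Socket;

    public class StatPoller {
        public static void main(String[] args) throws Exception {
            while (true) {
                Socket s = new Socket("localhost", 2181);
                s.getOutputStream().write("stat".getBytes());
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(s.getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    // "Node count" and "Latency min/avg/max" are the lines
                    // worth graphing over time.
                    if (line.startsWith("Node count")
                            || line.startsWith("Latency")) {
                        System.out.println(line);
                    }
                }
                s.close();
                Thread.sleep(10000);
            }
        }
    }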

> Have you tried to put the log and snapshot files under EBS?


No.  I considered it, but I wanted fewer moving parts rather than more.

Doing that would make the intricate and unlikely failure mode that Henry
asked about even less likely, but I don't know if it would increase or
decrease the probability of any kind of failure.

The observed failure modes for ZK in EC2 were completely dominated by our
(my) own failings (such as letting too much data accumulate).
