you need a dedicated disk for the logDir, but not the dataDir. the reason is that writes to the transaction log are in the critical path: we cannot commit a change until it has been synced to disk, so we want to make sure we don't contend for that disk. the snapshots in the dataDir are written by an asynchronous thread, so they can go to a disk used by other applications (usually the OS disk).
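
concretely, what i'm calling the logDir is the dataLogDir setting in zoo.cfg: if dataLogDir is set, the transaction log goes there and the snapshots stay in dataDir. a minimal sketch, with placeholder paths for the dedicated device and the OS disk:

    # zoo.cfg (sketch; paths are placeholders)
    tickTime=2000
    clientPort=2181
    # snapshots - fine on a disk shared with the OS / other apps
    dataDir=/var/lib/zookeeper/data
    # transaction log - put this on the dedicated disk so syncs never contend
    dataLogDir=/mnt/zk-txnlog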

the problem with running other processes on the zookeeper server is similar to the disk contention issue: if zookeeper starts contending with a runaway app for CPU and memory, sessions can start timing out because of starvation.
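
for reference, the timeouts at stake are client session timeouts, and those are tied to tickTime in zoo.cfg (a client's requested timeout gets negotiated into roughly 2x-20x tickTime), plus the follower limits. the values below are just the usual sample-config numbers, nothing specific to this thread:

    tickTime=2000   # basic time unit in ms; session timeouts land in roughly 2x-20x of this
    initLimit=10    # ticks a follower gets to connect and sync with the leader
    syncLimit=5     # ticks a follower may fall behind before it is dropped

so a starved server that stalls for a few seconds is enough to blow those limits and expire sessions.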

by using a dedicated disk for the log and a dedicated machine, you can get very high, deterministic performance.

make sense?

ben

Jonathan Gray wrote:
Thanks for the input.

Honestly, I'm thinking I need to have separate clusters. The version of ZK is one thing; but also, for an application like HBase, we have had periods where we needed to patch ZK before the fix became part of a release. Keeping track of that on a shared cluster would be tricky, if not impossible.

And with a small development team and a very fast dev cycle, I'm a little concerned about a runaway application hosing all the other dependencies on ZK...

What are the actual reasons for wanting a separate disk for ZK? Strictly reliability purposes? Should that disk be dedicated to the logDir but not the dataDir, or both?

If I don't give it a dedicated disk or node, but it has 1GB of memory and a core, what are the downsides? Are they just about reliability? If I run 5 or 7 ZK nodes co-hosted with my HBase cluster, is that really less reliable than 3 dedicated nodes, as long as the JVM has sufficient resources? Or are there performance or usability concerns as well?

Sorry for all the questions, just trying to get the story straight so that we don't spread misinformation to HBase users. Most users start out on very small clusters, so dedicated ZK nodes are not a realistic assumption... How big of a deal is that?

JG

Benjamin Reed wrote:
we designed zk for high performance so that it can be shared by multiple applications. the main thing is to use dedicated zk machines (with a dedicated disk for logging). once you have that in place, watch the load on your cluster; as long as you aren't saturating it, you should share.

as you point out, running multiple clusters is a hardware investment, plus you miss out on opportunities to improve reliability. for example, if you have three applications each running their own cluster of 3 zk servers, each cluster can only tolerate a single failure, so two failures landing in the same cluster cause an outage. if, instead of using those 9 servers, the same three applications share a single zk cluster of 7 servers, you can tolerate any three failures without an outage.
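
the arithmetic behind that: an ensemble of n servers needs a quorum of floor(n/2)+1, so it survives n minus that many failures -- 1 for n=3, 3 for n=7. a shared 7-server ensemble is just listed in every server's zoo.cfg along these lines (hostnames here are placeholders):

    server.1=zk1:2888:3888
    server.2=zk2:2888:3888
    server.3=zk3:2888:3888
    server.4=zk4:2888:3888
    server.5=zk5:2888:3888
    server.6=zk6:2888:3888
    server.7=zk7:2888:3888

with each server also getting a myid file in its dataDir containing its own number.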

the key, of course, is to make sure that you don't oversubscribe the servers.
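
a simple way to keep an eye on that is the four-letter "stat" command against any server's client port -- something like echo stat | nc <zk-host> 2181, with <zk-host> as a placeholder for whichever server you want to check -- which reports the connected clients, outstanding requests, and min/avg/max request latency, enough to tell whether the ensemble is getting saturated.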

ben

Jonathan Gray wrote:
Hey guys,

Been using ZK indirectly for a few months now in the HBase and Katta realms. Both of those applications make it really easy, so you don't have to be very involved in managing the ZK cluster that supports them.

I'm now using ZK for a bunch of things internally, so now I'm manually configuring, starting, and managing a cluster.

What advice is there on whether I should share a single cluster between all my applications or run separate ones for each use?

I've been told that it's strongly recommended to run your ZK nodes separately from the application using them (this is actually what we're telling new users over in HBase, though a majority of installations will likely co-host them with DataNodes and RegionServers).

I don't have the resources to maintain a separate 3+ node ZK cluster for each of my applications, so that's not really an option. I'm trying to decide whether I should have HBase run and manage its own ZK cluster co-located with some of the RegionServers (there will be ample memory, but ZK will not have a dedicated disk), or whether I should point it at a dedicated 3-node ZK cluster.

I would then also have Katta pointing at this same shared cluster (or at a separate cluster co-located with the Katta nodes). Same for my own application: it could either share nodes with the app servers or point at the single shared ZK cluster.

Trade-offs I should be aware of?  Current best practices?

Any help would be much appreciated.  Thanks.

Jonathan Gray
