You don't need a dedicated disk for performance purposes if:

You use an SSD (decent ones have sub-0.1 ms write latency), or
You have a battery-backed RAID card with write-back cache enabled, so that
synchronous writes return nearly instantly.

It's just the synchronous write latency that matters for write performance.
On Linux with ext3 and the default mount options, this can be really bad if
the disk is shared with other uses.  Additionally, a separate disk won't have
to seek as much if only one app is writing to it.

Given the price of a RAID card, and the fact that the logs don't need that
much space, I'd go for the ~$370 Intel 32GB X-25E SSD if you are
performance-sensitive.
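
If you want a rough number for the disks you already have, you can time a
small write followed by a force-to-disk.  A minimal Java sketch (the file
name, buffer size, and iteration count are arbitrary, and this only
approximates the ZooKeeper log's write pattern):

  import java.io.File;
  import java.io.RandomAccessFile;
  import java.nio.ByteBuffer;
  import java.nio.channels.FileChannel;

  public class FsyncLatency {
      public static void main(String[] args) throws Exception {
          // point this at the device you intend to use for the transaction log
          File f = new File(args.length > 0 ? args[0] : "fsync-test.dat");
          int iterations = 100;
          long total = 0, worst = 0;
          try (RandomAccessFile raf = new RandomAccessFile(f, "rw");
               FileChannel ch = raf.getChannel()) {
              ByteBuffer buf = ByteBuffer.allocate(512);
              for (int i = 0; i < iterations; i++) {
                  buf.clear();
                  long start = System.nanoTime();
                  ch.write(buf);
                  ch.force(false);   // sync to disk, roughly one log commit
                  long elapsed = System.nanoTime() - start;
                  total += elapsed;
                  if (elapsed > worst) worst = elapsed;
              }
          }
          f.delete();
          System.out.printf("avg %.2f ms, worst %.2f ms%n",
                  total / (iterations * 1e6), worst / 1e6);
      }
  }

On a busy shared spindle it's usually the worst-case number, not the average,
that hurts.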

On 7/17/09 2:26 PM, "Benjamin Reed" <> wrote:

you need a dedicated disk for the logDir, but not the dataDir. the
reason is that the write to the log is in the critical path: we cannot
commit changes until they have been synced to disk, so we want to make
sure that we don't contend for the disk. the snapshots in the dataDir
are done in an asynchronous thread, so they can be written to a disk used
by other applications (usually the OS disk).
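
For reference, that split is just the dataDir / dataLogDir pair in zoo.cfg; a
minimal illustration (the paths are made up):

  # snapshots: written asynchronously, can live on the OS disk
  dataDir=/var/lib/zookeeper
  # transaction log: synchronous writes, ideally its own spindle or SSD
  dataLogDir=/zklog/zookeeper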

the problem with running other processes on the zookeeper server is
similar to the disk contention: if zookeeper starts contending with a
runaway app for CPU and memory, we can start timing out because of the
resulting delays.

by using a dedicated disk for logs and a dedicated machine you can get
very high deterministic performance.

make sense?
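
For context on the timing out: it keys off tickTime in zoo.cfg.  Client
session timeouts are negotiated between 2x and 20x tickTime, so with the
usual tickTime=2000 a server stalled for a few seconds behind a runaway
neighbor can start expiring sessions.  The values below are just the stock
sample-config settings:

  tickTime=2000
  # client sessions are negotiated between 2*tickTime and 20*tickTime
  initLimit=10   # ticks a follower may take to connect and sync with the leader
  syncLimit=5    # ticks a follower may lag the leader before being dropped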


Jonathan Gray wrote:
> Thanks for the input.
> Honestly, I'm thinking I need to have separate clusters.  The version of
> ZK is one thing, but also, for an application like HBase, we have had
> periods where we needed to patch ZK before it became part of a release.
>   Keeping track of that on a shared cluster will be tricky, if not
> impossible.
> And with a small development team and a very fast dev cycle, I'm a
> little concerned about a runaway application hosing all the other
> dependencies on ZK...
> What are the actual reasons for wanting a separate disk for ZK?
> Strictly reliability purposes?  Should that disk be dedicated to the
> logDir but not the dataDir, or both?
> If I don't give it a dedicated disk or node, but it has 1GB of memory
> and a core, what are the downsides?  Are they just about reliability?
> If I could run 5 or 7 zk nodes, but co-hosted with my HBase cluster, is
> that really less reliable than 3 separate nodes, as long as the jvm has
> sufficient resources?  Or are there performance or usability concerns as
> well?
> Sorry for all the questions, just trying to get the story straight so
> that we don't spread misinformation to HBase users.  Most users start
> out on very small clusters, so dedicated ZK nodes are not a realistic
> assumption... How big of a deal is that?
> JG
> Benjamin Reed wrote:
>> we designed zk to have high performance so that it can be shared by
>> multiple applications. the main thing is that you use dedicated zk
>> machines (with a dedicated disk for logging). once you have that in
>> place, watch the load on your cluster; as long as you aren't saturating
>> the cluster, you should share.
>> as you point out, running multiple clusters is a hardware investment,
>> plus you miss out on opportunities to improve reliability. for example,
>> if you have three applications that each have a cluster of 3 zk servers,
>> two failures in the same cluster will result in an outage for that
>> application. if instead the same three applications share a zk cluster of
>> 7 servers, you can tolerate three failures without an outage.
>> the key of course is to make sure that you don't oversubscribe the server.
>> ben
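
(To make the arithmetic explicit: an ensemble needs a majority of its servers
up, so N servers tolerate floor((N-1)/2) failures: 3 tolerate 1, 5 tolerate 2,
7 tolerate 3.  With three separate 3-node clusters, two failures landing in
the same cluster take that application down; a single 7-node cluster keeps all
three applications up through any 3 failures.)
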
>> Jonathan Gray wrote:
>>> Hey guys,
>>> Been using ZK indirectly for a few months now in the HBase and Katta
>>> realms.  Both of these applications make it really easy so you don't
>>> have to be involved much with managing your ZK cluster to support it.
>>> I'm now using ZK for a bunch of things internally, so now I'm manually
>>> configuring, starting, and managing a cluster.
>>> What advice is there about whether I should be sharing a single
>>> cluster between all my applications, or running separate ones for each
>>> use?
>>> I've been told that it's strongly recommended to run your ZK nodes
>>> separately from the application using them (this is actually what
>>> we're telling new users over in HBase, though a majority of
>>> installations will likely co-host them with DataNodes and RegionServers).
>>> I don't have the resources to maintain a separate 3+ node ZK cluster
>>> for each of my applications, so this is not really an option.  I'm
>>> trying to decide if I should have HBase running/managing its own ZK
>>> cluster that is co-located with some of the regionservers (there will
>>> be ample memory, but ZK will not have a dedicated disk), or if I
>>> should be pointing it to a dedicated 3 node ZK cluster.
>>> I would then also have Katta pointing at this same shared cluster (or
>>> a separate cluster would be co-located with katta nodes).  Same for my
>>> application; it could share nodes with the app servers or point at a
>>> single ZK cluster.
>>> Trade-offs I should be aware of?  Current best practices?
>>> Any help would be much appreciated.  Thanks.
>>> Jonathan Gray
