>  4 drives for data and 1 drive for commitlog, 
How are you configuring the drives? It's normally best to present one big data 
volume, e.g. using RAID 0, and put the commit log on, say, the system mirror.
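To illustrate, with a single RAID 0 data volume the directory settings in 
cassandra.yaml would look something like this (the mount points below are just 
placeholders, not a recommendation for your layout):

    data_file_directories:
        - /mnt/raid0/cassandra/data      # one big striped data volume
    commitlog_directory: /var/lib/cassandra/commitlog   # separate spindle (e.g. system mirror)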

> will the node balance out the load on the drives, or is it agnostic to usage 
> of drives underlying data directories?
It will not. 
There is a feature coming in v1.2 to add better support for JBOD 
configurations. 
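For reference, a JBOD-style setup just lists each disk separately in 
cassandra.yaml, roughly like this (again, the mount points are placeholders). 
The node will write to all of the listed directories, but it will not actively 
rebalance existing data onto a newly added, empty directory:

    data_file_directories:
        - /mnt/disk1/cassandra/data
        - /mnt/disk2/cassandra/data
        - /mnt/disk3/cassandra/data
        - /mnt/disk4/cassandra/data
    commitlog_directory: /mnt/disk5/cassandra/commitlog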

A word of warning: if you put more than 300GB to 400GB per node, you may run 
into issues such as repair, compaction, or disaster recovery taking a long 
time. These are soft limits that provide a good rule of thumb for HDD-based 
systems with 1 GigE networking.

Hope that helps. 
-----------------
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 15/09/2012, at 7:39 AM, Casey Deccio <ca...@deccio.net> wrote:

> I'm building a new "cluster" (to replace the broken setup I've written about 
> in previous posts) that will consist of only two nodes.  I understand that 
> I'll be sacrificing high availability of writes if one of the nodes goes 
> down, and I'm okay with that.  I'm more interested in maintaining high 
> consistency and high read availability.  So I've decided to use a write-level 
> consistency of ALL and read-level consistency of ONE.
> 
> My first question is about the drives in this setup.  If I initially set up 
> the system with, say, 4 drives for data and 1 drive for commitlog, and later 
> I decide to add more capacity to the node by adding more drives for data 
> (adding the new data directory entries in cassandra.yaml), will the node 
> balance out the load on the drives, or is it agnostic to usage of drives 
> underlying data directories?
> 
> My second question has to do with RAID striping.  Would it be more useful to 
> stripe the disk with the commitlog or the disks with the data?  Of course, 
> with a single striped volume for data directories, it would be more difficult 
> to add capacity to the node later, as I've suggested above.
> 
> Casey
