My vision of a well-built node is one where the number of journal disks equals the number of data disks. You definitely don't want to lose 3 journals at once in the case of a single drive failure.
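
To make the failure-domain point concrete, here is a minimal sketch (plain Python; drive counts taken from option 1 below, the 0.8 fill ratio is just an assumed figure) of how much data the cluster has to re-replicate after a single drive failure:

    # Blast radius of a single drive failure, option 1 below:
    # 2 journal SSDs with 3 journals each, 6 x 900 GB spinner OSDs per host.
    osd_size_tb      = 0.9
    journals_per_ssd = 3      # as proposed in option 1
    fill_ratio       = 0.8    # assumed average OSD utilisation

    # A spinner dies: Ceph re-replicates the contents of one OSD.
    print("data disk failure : %.1f TB to recover" % (osd_size_tb * fill_ratio))

    # A journal SSD dies: every OSD journaling on it goes down with it,
    # so all of them have to be re-replicated at once.
    print("journal SSD failure: %.1f TB to recover"
          % (journals_per_ssd * osd_size_tb * fill_ratio))

With journals on the data disks themselves (or a 1:1 journal-to-data mapping) the worst case stays at one OSD's worth of recovery traffic.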

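On the capacity side of the question below, a quick back-of-the-envelope comparison (again only a sketch; the per-host totals are taken straight from the quoted mail, and the double-write note applies to the OSD journal as it works today):

    # Usable capacity of the two options, with the planned 3x replication.
    replica = 3
    raw_option1 = 3 * 5.4   # 3 hosts x 5.4 TB of OSD space (journals on SSD)
    raw_option2 = 3 * 3.6   # 3 hosts x 3.6 TB of OSD space (journals on spinners)

    print("option 1 usable: %.1f TB" % (raw_option1 / replica))   # 5.4 TB
    print("option 2 usable: %.1f TB" % (raw_option2 / replica))   # 3.6 TB

    # Write path: with the journal on the same spinner (option 2) every
    # client write hits that disk twice (journal write + data write), so
    # sustained write throughput per OSD is roughly halved unless the
    # controller's write-back cache absorbs the journal traffic.
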
> On 06 May 2014, at 18:07, Xabier Elkano <[email protected]> wrote:
> 
> 
> Hi,
> 
> I'm designing a new Ceph pool on new hardware and I would like to
> receive some suggestions.
> I want to use a replica count of 3 in the pool, and the idea is to buy 3
> new servers, each with a 10-drive 2.5" chassis and two 10Gbps NICs. I have
> in mind two configurations:
> 
> 1- With journals on SSDs
> 
> OS: 2x SSD Intel DC S3500 100G, RAID 1
> Journal: 2x SSD Intel DC S3700 100G, 3 journals on each SSD
> OSD: 6x SAS 10K 900G (SAS2 6Gbps), each running an OSD process. Total size
> for OSDs: 5.4TB
> 
> 2- With journals on a partition on the spinners.
> 
> OS: 2x SSD Intel DC S3500 100G, RAID 1
> OSD+journal: 8x SAS 15K 600G (SAS3 12Gbps), each running an OSD process and
> its journal. Total size for OSDs: 3.6TB
> 
> The budget for both configurations is similar, but the total capacity is not.
> Which would be the best configuration from the point of view of
> performance? In the second configuration I know the controller write-back
> cache could be critical; the servers have an LSI 3108 controller
> with 2GB of cache. I have to plan this storage as a KVM image backend, and
> the goal is performance over capacity.
> 
> On the other hand, with this new hardware, what would be the best
> choice: create a new pool in an existing cluster, or create a completely
> new cluster? Are there any advantages to creating and maintaining an
> isolated new cluster?
> 
> thanks in advance,
> Xabier
> 
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com