FWIW, here is what I have for my ceph cluster:

4 x HP DL180 G6
12GB RAM
P411 with 512MB battery-backed cache
10GigE
4 x HP MSA 60s with 12 x 1TB 7.2k SAS and SATA drives (bought at different
times, so there is a mix)
2 x HP D2600s with 12 x 3TB 7.2k SAS drives

I'm currently running 79 QEMU/KVM VMs for Indiana University and xsede.org.
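
The VM disks on a setup like this are typically RBD images that QEMU/KVM
attaches via librbd. For anyone curious, here's a minimal sketch of
provisioning one with the Python rados/rbd bindings; the 'rbd' pool, image
name and size are placeholders rather than anything from our actual setup:

import rados
import rbd

# Connect using the standard config file; pool/image names below are
# illustrative only.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')       # assumed pool name
    try:
        # Create a 20 GiB image of the sort qemu attaches via librbd,
        # then list what is in the pool.
        rbd.RBD().create(ioctx, 'vm-disk-example', 20 * 1024 ** 3)
        print(rbd.RBD().list(ioctx))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()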

On May 7, 2013, at 7:50 AM, "Barry O'Rourke" <Barry.O'rou...@ed.ac.uk> wrote:

> Hi,
> 
> I'm looking to purchase a production cluster of 3 Dell PowerEdge R515s, 
> which I intend to run with 3x replication. I've opted for the following 
> configuration:
> 
> 2 x 6-core processors
> 32GB RAM
> H700 controller (1GB cache)
> 2 x SAS OS disks (in RAID 1)
> 2 x 1Gb Ethernet (bonded, for the cluster network)
> 2 x 1Gb Ethernet (bonded, for the client network)
> 
> and either 4 x 2TB nearline SAS OSDs or 8 x 1TB nearline SAS OSDs.
> 
> I'm currently undecided on the OSDs, although I'm leaning towards the 
> second option as it would give me more flexibility and the option of using 
> some of the disks as journals.
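> 
> As a rough capacity sanity check (raw figures only), the two layouts come 
> out the same, and dedicating, say, two disks to journals in the second 
> layout would cost about 2TB of usable space:
> 
> # Back-of-the-envelope capacity comparison, assuming 3x replication across
> # the 3 nodes. Raw figures only: filesystem overhead, journal partition
> # sizes and free-space headroom are ignored.
> nodes = 3
> replicas = 3
> 
> layouts = {
>     '4 x 2TB OSDs per node': 4 * 2,            # TB of OSD capacity per node
>     '8 x 1TB OSDs per node': 8 * 1,
>     '6 x 1TB OSDs + 2 journal disks': 6 * 1,   # 2 journal disks is just an example
> }
> 
> for name, tb_per_node in layouts.items():
>     raw = nodes * tb_per_node
>     print('%-32s %2d TB raw, ~%d TB usable at %dx replication'
>           % (name, raw, raw // replicas, replicas))
> 
> So the decision is really about spindle count and flexibility rather than 
> raw capacity.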
> 
> I'm intending to use this cluster to host the images for ~100 virtual 
> machines, which will run on separate hardware and will most likely be 
> managed by OpenNebula.
> 
> I'd be interested to hear from anyone running a similar configuration with 
> a comparable use case, especially anyone who has spent some time 
> benchmarking such a setup and still has a copy of the results.
> 
> I'd also welcome any comments or critique on the above specification. 
> Purchases have to be made via Dell, and 10Gb Ethernet is out of the 
> question at the moment.
> 
> Cheers,
> 
> Barry
> 
> 
> -- 
> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336.
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
