Hi,

I'm not looking at your hardware in detail (except to say that you absolutely 
must have 3 monitors, and that I don't know what a load balancer would be 
useful for in this setup), but the two parameters below may help you evaluate 
your system.

To estimate the IOPS capacity of your cluster, consider that (in the worst 
case, and assuming 3 replicas) each VM write is amplified into 6 disk writes 
across the OSDs: 3 replicas * (one write to the journal plus one to the PG 
data partition). I say worst case because if you enable RBD writeback 
caching, which you probably should, contiguous writes from a VM can be 
merged. Reads will usually be served from cache, either in the VM or on the 
OSD servers, so I don't know how to include those in this number. But anyway, 
to get the approximate write IOPS capacity you can use # disks * IOPS per 
disk, divided by 6. Also note that any IOPS needed for scrubbing or 
backfilling need to be accounted for within that number.
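
If it helps to put numbers on that, here's a rough back-of-envelope sketch in 
Python (the disk count and per-disk IOPS below are just example assumptions, 
not measurements of your hardware):

# Worst-case write IOPS estimate for a 3-replica filestore pool: every
# client write becomes a journal write plus a data write on each replica,
# i.e. 6 disk writes per VM write.
replicas = 3
writes_per_replica = 2          # journal write + PG data write
amplification = replicas * writes_per_replica   # = 6

num_disks = 48                  # assumption: e.g. 2 servers x 24 spinners
iops_per_disk = 150             # assumption: rough figure for a 10k SAS drive

client_write_iops = num_disks * iops_per_disk // amplification
print("approx. client write IOPS: %d" % client_write_iops)
# Scrubbing and backfilling eat into this same budget.

With those example numbers you'd land around 1200 client write IOPS before 
scrubbing/backfill overhead.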

The other main parameter to consider is the minimum write latency. In our 
cluster with no SSDs, a small synchronous write takes ~30-40ms in the best 
case, when the cluster is idle (but we have Hitachi Eco drives, not 10k SAS). 
Under our normal load, with ~200 VMs generating ~500-1000 IOPS on average, the 
latency is around 45-50ms (though our disks are still fairly idle at that 
rate). And if an unthrottled VM runs an IO-intensive process at, say, 3000 
IOPS, that can push the cluster latency to 100ms or so.
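
Those latencies also put a ceiling on what any single synchronous writer (for 
example a database committing at queue depth 1) can achieve, roughly 
1 / latency regardless of how many disks you have. A quick sketch using the 
figures above:

# At queue depth 1, a sync writer completes roughly 1000 / latency_ms
# operations per second, no matter how large the cluster is.
latencies_ms = {
    "idle cluster (spinners)":     35,   # ~30-40ms best case
    "normal load (~200 VMs)":      47,   # ~45-50ms
    "one unthrottled IO-heavy VM": 100,
}
for label, ms in sorted(latencies_ms.items(), key=lambda kv: kv[1]):
    print("%-30s ~%d IOPS per sync writer" % (label, 1000 // ms))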

I've only just started testing SSDs, but it seems that using an SSD journal can 
improve the small synchronous write latency to ~10-12ms, and a pool using SSDs 
for both journals _and_ data can achieve ~5ms.
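
If you do go the SSD journal route, the usual rule of thumb (from the Ceph 
docs, if I remember it correctly) is to size each journal for about two sync 
intervals worth of writes; a quick sketch, where the throughput figure is an 
assumption you'd replace with your own:

# Filestore journal sizing rule of thumb:
#   journal size >= 2 * expected throughput * filestore max sync interval
expected_throughput_mb_s = 100        # assumption: sustained writes per OSD
filestore_max_sync_interval_s = 5     # Ceph default
journal_size_mb = 2 * expected_throughput_mb_s * filestore_max_sync_interval_s
print("suggested journal size per OSD: ~%d MB" % journal_size_mb)

That's roughly 1GB per OSD with those assumptions, so even a small SSD can 
hold journals for several OSDs - just keep in mind that losing that SSD takes 
all of those OSDs down with it.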

Hope that helps,

Dan

On Apr 5, 2014 9:44 AM, Ian Marshall <i...@itlhosting.co.uk> wrote:
>
> Hi All
>
> I am struggling to gain information relating to whether Ceph without SSD 
> drives will give sufficient performance in my planned infrastructure refresh 
> using Openstack. I was keen to go with Ceph, with its support in Openstack 
> and Ubuntu, but it has been suggested that a SAN solution would provide 
> better performance. Unfortunately, since I have a limited budget, I cannot 
> consider the Ceph Enterprise route at present.
>
> Planned infrastructure -
> 2 x Hardware load balancer
> 2 x Controller nodes [would run a Ceph MON on each]
> -- Dual CPU, 32GB RAM, 4 x 600GB SAS 10k drives
> 2 x Compute nodes [could run a Ceph MON on one of these]
> -- Dual CPU, 256GB RAM, 4 x 600GB SAS 10k drives
> EITHER SAN or, if Ceph, the storage servers would be
> 2 x R70xd with dual CPU, 64GB RAM, 24 x 600GB SAS 10k drives, each drive as 
> RAID0 with writeback cache on the controller. Each of these drives would 
> have a partition for the journals.
>
> Network is all 10GbE
>
> I have a couple of Dell 2950s with only 1Gb network ports and could consider 
> these for the Ceph MONs.
>
> This setup would need to be able to run about 80-100 VMs running web-based 
> applications; each is stateless but boots from block storage, and each would 
> use 2-4GB RAM and a 20GB volume.
>
> I have read lots of information on the internet, raised questions on 
> various forums, and seen both positive and negative feedback on Ceph 
> performance without SSDs. This has left me unsure whether Ceph is 
> appropriate for my requirements.
>
> NOTE: I would be willing to reduce the quantity of drives on the storage 
> servers - say 8-12 x 1TB - as I have also read that performance can be 
> better with fewer drives per host.
>
>
>
> Regards
> Ian
