On 4/8/2014 7:05 AM, Darren Birkett wrote:
> Hi Ian,
> Unless you're going to use SSD drives in your cinder-volume nodes, why
> do you expect to get any better performance out of this setup, versus
> a ceph cluster? If anything, performance would be worse since at
> least ceph has the ability to stripe access across many nodes, and
> therefore many more disks, per volume.
Last I looked, ceph's write throughput scaled as 1/num_replicas: you could
have performance or redundancy, not both. And with only one node, I think
the ceph cluster will stay "degraded" forever. Plus you may need fedora 37
with a 3.42 kernel and/or Inktank's custom build of libvirt on your
openstack nodes to actually use it.
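The 1/num_replicas point is just arithmetic; here is a rough sketch (function name and the disk numbers are made up for illustration, not benchmarks):

```python
# Back-of-the-envelope: aggregate client write throughput under N-way
# replication, assuming (simplification) every replica write consumes
# the same amount of raw disk bandwidth.

def effective_write_throughput(raw_mb_s: float, replicas: int) -> float:
    """Client-visible write throughput when each byte is stored `replicas` times."""
    return raw_mb_s / replicas

# e.g. 10 disks x 100 MB/s = 1000 MB/s raw across the cluster
print(effective_write_throughput(1000, 1))  # no redundancy
print(effective_write_throughput(1000, 2))  # 2x replication: half the raw rate
print(effective_write_throughput(1000, 3))  # 3x replication: a third
```

Reads don't pay this penalty (they hit one replica), which is why the trade-off bites mostly on write-heavy volumes.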
I'd like to go ceph too, but ATM it looks like I'll stick with LVM on a
big RAID box and maybe play with swift in my copious free time.
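For reference, the LVM route is just the stock cinder-volume driver; something like this in cinder.conf (the volume group name is whatever you created — `cinder-volumes` here is an assumption, and the driver path is the one from the 2014-era releases):

```
# cinder.conf on the cinder-volume node -- stock LVM/iSCSI backend
[DEFAULT]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group = cinder-volumes
```

The RAID controller handles redundancy underneath, so cinder just carves logical volumes out of the VG and exports them over iSCSI.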
Dima
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack