Hi Ian,

Unless you're going to use SSD drives in your cinder-volume nodes, why do
you expect to get better performance out of this setup than out of a Ceph
cluster?  If anything, performance would be worse, since Ceph at least has
the ability to stripe access across many nodes, and therefore many more
disks, per volume.
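
To put rough numbers on that, here is a back-of-envelope sketch in Python,
purely for illustration; the per-disk throughput, spindle count, and stripe
width below are assumptions, not measurements:

    # Rough per-volume sequential-throughput ceilings.
    # All figures are illustrative assumptions.
    SATA_DISK_MBPS = 120      # one 7.2k spindle, sequential
    NETWORK_MBPS = 1250       # 10GbE line rate, in MB/s

    # LVM: a volume lives in one node's volume group, so it is served
    # by only the handful of spindles in that node.
    lvm_spindles = 4
    lvm_ceiling = min(lvm_spindles * SATA_DISK_MBPS, NETWORK_MBPS)

    # Ceph RBD: a volume is striped into objects (4 MB by default)
    # scattered across the cluster, so large I/O can hit many OSDs
    # in parallel.
    ceph_osds_hit = 24
    ceph_ceiling = min(ceph_osds_hit * SATA_DISK_MBPS, NETWORK_MBPS)

    print("LVM  per-volume ceiling: ~%d MB/s" % lvm_ceiling)   # ~480
    print("Ceph per-volume ceiling: ~%d MB/s" % ceph_ceiling)  # ~1250, network-bound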

- Darren
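
P.S. The boot-from-volume / snapshot workflow you describe does work with
the LVM driver. Here is a rough sketch against Icehouse-era
python-cinderclient and python-novaclient; the credentials, image ID, and
flavor below are placeholders, so treat it as an outline rather than a
recipe:

    from cinderclient.v1 import client as cinder_client
    from novaclient.v1_1 import client as nova_client

    # Placeholder credentials; substitute your own.
    AUTH = dict(username='demo', api_key='secret', project_id='demo',
                auth_url='http://controller:5000/v2.0')
    cinder = cinder_client.Client(**AUTH)
    nova = nova_client.Client(**AUTH)

    # 1. Create a bootable volume from a Glance image ('IMAGE_ID' is a
    #    placeholder).
    vol = cinder.volumes.create(size=20, display_name='boot-vol',
                                imageRef='IMAGE_ID')

    # 2. Boot an instance from that volume; the mapping value is
    #    <volume-id>:<type>:<size>:<delete-on-terminate>.
    server = nova.servers.create(name='vm1', image=None, flavor='2',
                                 block_device_mapping={'vda': '%s:::0' % vol.id})

    # 3. Snapshot the volume so new instances can be created from it later.
    snap = cinder.volume_snapshots.create(vol.id, display_name='boot-vol-snap')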


On 8 April 2014 12:55, Ian Marshall <i...@itlhosting.co.uk> wrote:

> Hi All
>
> I am considering storage nodes for my small production deployment. I have
> rejected Ceph as I can't get confidence that performance will be OK without
> SSD drives.
>
> I need to be able to boot from block storage, do live migrations, and
> create snapshots which can be used to create new instances. From the
> documentation, all of this is feasible with LVM volumes. Ideally I wanted
> to use unified storage so I can have block and object on the same node.
>
> What I would like to know from those using LVM storage nodes is the
> preferred set-up, as I need a minimum of 6 TB of block storage and wonder
> whether I could use local cinder-volumes on each compute node and a central
> Swift storage server for 'cinder backups'.
>
> The network is all 10 GbE.
>
> Can I share these volumes across my compute nodes, or is it better to use
> only the local volumes on each node for running instances from block
> storage on that node?
>
> Overall I am expecting to require about 80-100 concurrent instances [VMs]
> across two compute nodes. Alongside these will be multiple controller
> nodes.
>
>
>
> Regards
> Ian