Ceph is slow as hell by itself, and qemu/libvirt/nova add qcow2 on
top of that.
Try running it with raw volumes, at least. And check that your network
is properly configured (jumbo frames, no overloaded switch tables, etc.).
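For reference, a minimal nova.conf sketch for keeping ephemeral disks raw on RBD instead of qcow2 on local disk — pool names here are assumptions, adjust to your deployment:

```ini
[libvirt]
# Store ephemeral disks directly in RBD as raw objects,
# instead of qcow2 files on the compute node's local disk.
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
```

Glance images should be uploaded as raw too, otherwise Nova converts them on every boot and you lose copy-on-write cloning.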
On 06/12/2015 09:49 AM, pushpesh sharma wrote:
Hi list,
I need some expert opinion on a problem I am facing with an
OpenStack+Ceph environment.
I am running a 3+ node cluster with OpenStack Juno, using Ceph RBDs
as Cinder volumes. Functionally the setup is working fine. However, my
concern is with the block performance of RBDs inside a VM.
I am observing a throttling effect in 100% 4K random-read IOPS after a
point. Lots of CPU cycles are wasted in iowait. I suspect a single
iothread per VM is causing this. I came to know about a few tuning
parameters that libvirt provides, and it looks pretty straightforward
to set them in the domain XML. However, I am not able to do so: I didn't
find any way to pass these parameters from Nova directly, and when I
edit the domain XML using 'virsh edit', the changes vanish even after
saving the XML properly. (I know this is a hack, not a clean way to do
it.)
It could be a validation problem:
############
#virsh dumpxml instance-000000c5 > vm.xml
#virt-xml-validate vm.xml
Relax-NG validity error : Extra element cpu in interleave
vm.xml:1: element domain: Relax-NG validity error : Element domain
failed to validate content
vm.xml fails to validate
##################
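For reference, the libvirt syntax I am trying to end up with looks roughly like this (thread counts are just an example; as far as I understand, the per-disk iothread pinning only applies to virtio disks):

```xml
<domain type='kvm'>
  <!-- allocate dedicated I/O threads for the guest -->
  <iothreads>2</iothreads>
  <devices>
    <disk type='network' device='disk'>
      <!-- pin this disk's I/O to iothread 1 -->
      <driver name='qemu' type='raw' iothread='1'/>
    </disk>
  </devices>
</domain>
```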
The second approach I took was setting QoS on volume types. But there
is no option to set iothreads per volume; the available parameters only
cover max read/write ops and bytes.
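In other words, all I can set there is front-end rate limiting, along these lines (the spec name and limit values are mine):

```shell
# create a QoS spec capping IOPS at the hypervisor (front-end)
cinder qos-create rbd-limit consumer=front-end read_iops_sec=2000 write_iops_sec=2000
# attach it to a volume type
cinder qos-associate <qos-spec-id> <volume-type-id>
```

This throttles I/O; it does nothing to add iothreads.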
Thirdly, editing the Nova flavor and providing extra specs like
hw:cpu_sockets/cores/threads can change the guest CPU topology, but
again there is no way to set iothreads. It does accept
hw_disk_iothreads (with no type check in place, I believe), but does
not pass it through to the domain XML.
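For completeness, this is how I am setting the topology specs (the flavor name is mine):

```shell
# CPU topology is settable via flavor extra specs...
nova flavor-key perf.large set hw:cpu_sockets=1 hw:cpu_cores=4 hw:cpu_threads=2
# ...but the analogous hw_disk_iothreads key is accepted and then silently ignored
nova flavor-key perf.large set hw_disk_iothreads=2
```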
Please suggest a way to set this.
_______________________________________________
OpenStack-operators mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators