Hi,
# virsh blkdeviotune Test vdb --write_iops_sec 50 //file block device
# virsh blkdeviotune Test vda --write_iops_sec 50 //rbd block device
error: Unable to change block I/O throttle
error: invalid argument: No device found for specified path
2012-04-03 07:38:49.170+: 30171: debug :
Hi,
On 3-4-2012 10:02, Andrey Korolyov wrote:
Hi,
# virsh blkdeviotune Test vdb --write_iops_sec 50 //file block device
# virsh blkdeviotune Test vda --write_iops_sec 50 //rbd block device
error: Unable to change block I/O throttle
error: invalid argument: No device found for specified path
But I am able to set static limits in the config for rbd :) All I want
is to change them on the fly.
It is NOT the cgroups mechanism, it is completely qemu-driven.
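For reference, the static limits mentioned here are set per-disk in the libvirt domain XML via the `iotune` element (documented in the BlockTuning section of formatdomain.html linked below). A minimal sketch, assuming an rbd-backed disk exposed as vda; the pool and image names are placeholders:

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <!-- 'rbd/myimage' is a placeholder pool/image name -->
  <source protocol='rbd' name='rbd/myimage'/>
  <target dev='vda' bus='virtio'/>
  <iotune>
    <!-- static throttle applied when the domain starts -->
    <write_iops_sec>50</write_iops_sec>
  </iotune>
</disk>
```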
On Tue, Apr 3, 2012 at 12:21 PM, Wido den Hollander w...@widodh.nl wrote:
On 3-4-2012 10:28, Andrey Korolyov wrote:
But I am able to set static limits in the config for rbd :) All I want
is to change them on the fly.
It is NOT the cgroups mechanism, it is completely qemu-driven.
Are you sure about that?
http://libvirt.org/formatdomain.html#elementsBlockTuning
At least, the elements under the iotune block apply to rbd, and you can
test it by running fio, for example. From reading the libvirt code I
gathered that the blkdeviotune call can be applied to pseudo-devices;
there is nothing in the code that shows different behavior for the
iotune and blkdeviotune calls.
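As a concrete way to run the fio test mentioned above: a direct-I/O random-write job against the throttled disk from inside the guest should plateau near the configured IOPS limit. A sketch, assuming the device shows up as /dev/vdb in the guest; the device path and job parameters are illustrative:

```shell
# Random-write job against the throttled device; --direct=1 bypasses the
# guest page cache, so the reported IOPS should reflect the qemu throttle.
fio --name=throttle-test \
    --filename=/dev/vdb \
    --rw=randwrite \
    --direct=1 \
    --ioengine=libaio \
    --bs=4k \
    --iodepth=4 \
    --runtime=30 \
    --time_based
```

With a write_iops_sec of 50, the job's reported write IOPS should settle around that value rather than the device's native speed.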
The suggested hack works; it seems the libvirt devs did not remove the
block-device limitation because they consider this feature experimental,
or they forgot about it.
On Tue, Apr 3, 2012 at 12:55 PM, Andrey Korolyov and...@xdel.ru wrote:
At least, elements under iotune block applies to rbd and you can
test it by
Yes, Stefan, you are right. I'm not sure about the D state, but the high
CPU usage is a fact.
I do want to try an OSD-per-disk configuration, but a bit later.
Thanks,
Vladimir.
2012/4/3 Stefan Kleijkers ste...@unilogicnetworks.net:
Hello,
A while back I had the same errors you are seeing. I had
Hello Vladimir,
Well, in that case you could try BTRFS. With BTRFS it's possible to group
all the disks in a node into a RAID0/RAID1/RAID10 configuration, so you
can run just one or a few OSDs per node. But I would recommend the newest
kernel possible. I haven't tried with the 3.3 range, but
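The BTRFS setup Stefan describes could be sketched like this; the device names, RAID profile, and mount path are all placeholders:

```shell
# Create one BTRFS filesystem spanning four disks, using the RAID10
# profile for both data (-d) and metadata (-m); a single OSD can then
# use the whole pool instead of one OSD per disk.
mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Mounting any member device mounts the whole multi-device filesystem.
mount /dev/sdb /var/lib/ceph/osd/osd.0
```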