> From: Quentin Hartman <[email protected]>
>
> When running on a ceph-backed volume, I get closer to 15MB/s using the same
> tests, and have as much as 50% iowait. Typical operations that take seconds
> on bare metal take tens of seconds, or minutes, in a VM. This problem
> actually drove me to look at things with strace, and I'm finding streams of
> FSYNC and PSELECT6 timeouts while the processes are running. More direct
> tests of ceph performance are able to saturate the NIC, pushing about
> 90MB/s. I have ganglia installed on the host machines, and when I am
> running tests from within a VM, the network throughput seems to be getting
> artificially capped. Rather than the more "spiky" graph produced by the
> direct ceph tests, I get a perfectly flat horizontal line at 10 or 20MB/s.
Is there some sort of virtio for networking? It sounds like the guest's driver thinks you've got a low-speed network card. I don't see any options in the qemu command line to control your networking configuration.

Dale
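For reference, QEMU does ship a paravirtualized NIC, virtio-net, and guests often fall back to an emulated e1000/rtl8139 device (which can cap throughput) when it isn't requested explicitly. A minimal sketch of selecting it on the command line follows; the RBD image name, tap interface, and MAC address are placeholders, not details from the original setup:

```shell
# Sketch only: rbd:rbd/vm-disk, tap0, and the MAC below are hypothetical.
qemu-system-x86_64 \
    -drive file=rbd:rbd/vm-disk,if=virtio,cache=writeback \
    -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
    -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56
```

Inside the guest, `ethtool -i eth0` (or the equivalent) should then report the `virtio_net` driver rather than an emulated-hardware one.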
