On 03/01/2016 04:29 PM, Preston L. Bannister wrote:

Running "dd" in the physical host against the Cinder-allocated volumes
nets ~1.2GB/s (roughly in line with expectations for the striped flash
volume).
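
A test of this sort typically looks something like the following; the
device path, block size, and count here are illustrative assumptions,
not the commands actually used:

    # sequential read of 4 GB from the raw volume, bypassing the page cache
    # /dev/mapper/striped-flash is a hypothetical device path
    dd if=/dev/mapper/striped-flash of=/dev/null bs=1M count=4096 iflag=direct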

Running "dd" in an instance against the same volume (now attached to the
instance) got ~300MB/s, which was pathetic. (I was expecting 80-90% of
the raw host volume numbers, or better.) Upping read-ahead in the
instance via "hdparm" boosted throughput to ~450MB/s. Much better, but
still sad.
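
A minimal sketch of that adjustment, assuming the volume appears as
/dev/vdb inside the guest (hdparm -a takes the read-ahead value in
512-byte sectors):

    # raise read-ahead from the kernel default of 256 sectors (128 KB) to 8192 (4 MB)
    hdparm -a 8192 /dev/vdb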

In the second measurement the volume data passes through iSCSI and then
the QEMU hypervisor. I expected to lose some performance, but not more
than half!

Note that as this is an all-in-one OpenStack node, iSCSI is strictly
local, not crossing a network. (I did not want network latency or
throughput to be a concern in this first measurement.)

Well, not crossing a physical network :) You will, however, likely be
crossing the loopback network on the node.
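
One way to confirm that, assuming the open-iscsi tools are in use on
the node, is to check which portal the session is logged into:

    # lists active iSCSI sessions along with their target portal addresses
    iscsiadm -m session

A portal of 127.0.0.1 (or the node's own address) means the traffic
never leaves the host.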

What sort of per-CPU utilizations do you see when running the test
against the instance? Also, out of curiosity, what block size are you
using in dd? I wonder how well that "maps" to what iSCSI will be doing.
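
For the per-CPU piece, something like mpstat (from the sysstat package)
sampled while the test runs would show it; note also that dd defaults
to 512-byte blocks unless bs= says otherwise:

    # per-CPU utilization at 1-second intervals
    mpstat -P ALL 1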

rick jones
http://www.netperf.org/
