In my case CentOS 7 is using QEMU 1.5.3 ... which is *ancient*. This is on
a node with a packstack install of OpenStack. If you have a different
result, I would like to know why...
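
(If you want to check what your own nodes are running, the packaged QEMU
version is easy to read off; on CentOS 7 the package is qemu-kvm and the
emulator binary usually lives under /usr/libexec:

    rpm -q qemu-kvm
    /usr/libexec/qemu-kvm --version

Adjust the package name/path if you are on qemu-kvm-ev or another
distribution.)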
Got a bit further in my reading and testing. Also got my raw volume read
performance in an instance from ~300MB/s
Just a heads-up that the 3.10 kernel in CentOS/RHEL is *not* a stock 3.10
kernel. It has had many things backported from later kernels, though they may
not have backported the specific improvements you're looking for.
I think CentOS is using qemu 2.3, which is pretty new. Not sure how new
Should add that the physical host of the moment is CentOS 7 with a
packstack install of OpenStack. The instance is Ubuntu Trusty. CentOS 7 has
a relatively old 3.10 Linux kernel.
From the last week (or so) of digging, I found there were substantial
claimed improvements in *both* flash support in
On 03/03/2016 01:13 PM, Preston L. Bannister wrote:
> Scanning the same volume from within the instance still gets the same
> ~450MB/s that I saw before.
Hmmm, with iSCSI in between, that could be the TCP memcpy limitation.
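
One way to take the guest out of the picture and measure the iSCSI path by
itself (a rough sketch; the portal IP and IQN below are placeholders for
whatever Cinder created) is to log into the target from another box and
read the attached block device directly:

    iscsiadm -m discovery -t sendtargets -p 192.0.2.10
    iscsiadm -m node -T iqn.2010-10.org.openstack:volume-<uuid> -p 192.0.2.10 --login
    dd if=/dev/sdX of=/dev/null bs=1M iflag=direct

That separates the network/initiator cost from anything QEMU/virtio adds on
top.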
Measuring iSCSI in isolation is next on my list. Both on
Note that my end goal is to benchmark an application that runs in an
instance that does primarily large sequential full-volume-reads.
On this path I ran into unexpectedly poor performance within the instance.
If this is a common characteristic of OpenStack, then this becomes a
question of concern.
Hi Preston,
> The benchmark scripts are in:
>
> https://github.com/pbannister/openstack-bootstrap
In case that might help, here are a few notes and hints about doing
benchmarks for the DRBD block device driver:
http://blogs.linbit.com/p/897/benchmarking-drbd/
Perhaps there's something
First, my degree from school is in Physics. So I know something about
designing experiments. :)
The benchmark scripts run "dd" 218 times, against different volumes (of
differing sizes), with differing "bs". Measurements are collected both from
the physical host and from within the instance.
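
Each measurement is basically a big sequential read of this shape (a
sketch, not the exact invocation in the scripts; the device path and block
size are illustrative, and iflag=direct is there to keep the guest page
cache out of the numbers):

    dd if=/dev/vdb of=/dev/null bs=1M iflag=direct

with the equivalent run against the LVM device on the physical host for
comparison.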
On 03/01/2016 04:29 PM, Preston L. Bannister wrote:
Running "dd" in the physical host against the Cinder-allocated volumes
nets ~1.2GB/s (roughly in line with expectations for the striped flash
volume).
Running "dd" in an instance against the same volume (now attached to the
instance) got
I need to benchmark volume-read performance of an application running
in an instance, assuming extremely fast storage.
To simulate fast storage, I have an AIO install of OpenStack, with local
flash disks. Cinder LVM volumes are striped across three flash drives (what
I have in the present
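
To make explicit what "striped across three flash drives" means at the LVM
level, the setup looks roughly like this (a sketch only; device names, the
volume group name, and sizes are illustrative, and this is plain LVM rather
than what Cinder issues internally):

    pvcreate /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
    vgcreate cinder-volumes /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
    lvcreate -i 3 -I 64 -L 100G -n bench-vol cinder-volumes

The -i 3 is what gives the three-way stripe.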