Re: [openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-09 Thread Preston L. Bannister
In my case CentOS 7 is using QEMU 1.5.3 ... which is *ancient*. This is on a node with a packstack install of OpenStack. If you have a different result, I would like to know why... Got a bit further in my reading and testing. Also got my raw volume read performance in an instance from ~300MB/s
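[A quick, hypothetical way to check what a compute node is actually running; the package layout is distro-specific (CentOS 7 ships the QEMU binary as /usr/libexec/qemu-kvm), so the fallback paths here are assumptions:]

```shell
# Report the kernel and QEMU versions on this node. CentOS 7 reports a
# 3.10.0-* kernel string even with heavy backports, so uname alone does
# not tell you which features were backported.
kernel=$(uname -r)
# Try the common binary names; suppress "command not found" noise.
qemu=$({ qemu-system-x86_64 --version || /usr/libexec/qemu-kvm --version; } 2>/dev/null | head -n 1)
echo "kernel: $kernel"
echo "qemu:   ${qemu:-not found}"
```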

Re: [openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-07 Thread Chris Friesen
Just a heads-up that the 3.10 kernel in CentOS/RHEL is *not* a stock 3.10 kernel. It has had many things backported from later kernels, though they may not have backported the specific improvements you're looking for. I think CentOS is using qemu 2.3, which is pretty new. Not sure how new

Re: [openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-06 Thread Preston L. Bannister
Should add that the physical host of the moment is CentOS 7 with a packstack install of OpenStack. The instance is Ubuntu Trusty. CentOS 7 has a relatively old 3.10 Linux kernel. From the last week (or so) of digging, I found there were substantial claimed improvements in *both* flash support in

Re: [openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-03 Thread Chris Friesen
On 03/03/2016 01:13 PM, Preston L. Bannister wrote: > Scanning the same volume from within the instance still gets the same > ~450MB/s that I saw before. Hmmm, with iSCSI in between that could be the TCP memcpy limitation. Measuring iSCSI in isolation is next on my list. Both on
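[A crude, hypothetical sanity check on the "TCP memcpy" theory, not something from the thread: push data through a local pipe (pure memory copies, no disk and no network) and see what rate dd reports. If even this path tops out near the ~450 MB/s seen in the instance, copy overhead is a plausible ceiling; a real iSCSI path adds TCP and the target stack on top of it.]

```shell
# Stream 256 MiB of zeros through a pipe; dd's transfer statistics go to
# stderr, which we capture in a file so the throughput line survives the
# pipeline to /dev/null.
dd if=/dev/zero bs=1M count=256 2>/tmp/dd-pipe.stats | cat >/dev/null
tail -n 1 /tmp/dd-pipe.stats   # dd's own throughput line for the pipe path
```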

Re: [openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-03 Thread Preston L. Bannister
Note that my end goal is to benchmark an application that runs in an instance that does primarily large sequential full-volume-reads. On this path I ran into unexpectedly poor performance within the instance. If this is a common characteristic of OpenStack, then this becomes a question of concern

Re: [openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-02 Thread Philipp Marek
Hi Preston, > The benchmark scripts are in: > > https://github.com/pbannister/openstack-bootstrap In case that might help, here are a few notes and hints about doing benchmarks for the DRBD block device driver: http://blogs.linbit.com/p/897/benchmarking-drbd/ Perhaps there's something

Re: [openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-02 Thread Preston L. Bannister
First, my degree from school is in Physics. So I know something about designing experiments. :) The benchmark script runs "dd" 218 times, against different volumes (of differing sizes), with differing "bs". Measurements are collected both from the physical host and from within the instance.
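[A minimal sketch in the spirit of the sweep described; the real scripts live at https://github.com/pbannister/openstack-bootstrap, and the target path and sizes here are assumptions, scaled down to run against a scratch file instead of a real Cinder volume:]

```shell
# Read one target repeatedly with varying block sizes and let dd report
# the rate for each pass. TARGET may be overridden with a block device.
TARGET=${TARGET:-/tmp/dd-sweep.img}
# Create a small scratch file if the target does not already exist.
[ -e "$TARGET" ] || dd if=/dev/zero of="$TARGET" bs=1M count=64 2>/dev/null

for bs in 4k 64k 1M; do
    echo "bs=$bs"
    # Against a real volume, add iflag=direct to bypass the page cache;
    # otherwise the second and later passes mostly measure RAM.
    dd if="$TARGET" of=/dev/null bs="$bs" 2>&1 | tail -n 1
    last_bs=$bs
done
```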

Re: [openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-01 Thread Rick Jones
On 03/01/2016 04:29 PM, Preston L. Bannister wrote: Running "dd" in the physical host against the Cinder-allocated volumes nets ~1.2GB/s (roughly in line with expectations for the striped flash volume). Running "dd" in an instance against the same volume (now attached to the instance) got

[openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-01 Thread Preston L. Bannister
I need to benchmark volume-read performance of an application running in an instance, assuming extremely fast storage. To simulate fast storage, I have an AIO install of OpenStack, with local flash disks. Cinder LVM volumes are striped across three flash drives (what I have in the present
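[A hypothetical sketch of how a striped LVM backend like the one described might be built by hand; the device paths, volume group name, size, and stripe size are all assumptions, not from the post. DRY_RUN=1 (the default) only prints the commands, so the sketch is safe to execute.]

```shell
DRY_RUN=${DRY_RUN:-1}
# Print the command instead of running it when DRY_RUN=1.
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# Three flash drives (assumed names); unquoted on purpose so the list
# splits into separate arguments.
DEVICES="/dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1"
run pvcreate $DEVICES
run vgcreate cinder-volumes $DEVICES
# -i 3 stripes across all three PVs; -I 64 is a 64 KiB stripe (assumed).
run lvcreate -i 3 -I 64 -L 100G -n bench-vol cinder-volumes
```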