Hello, Yujun.

The statements below are correct.  For actual development of StorPerf, even a 
DevStack instance (with Cinder and Heat) would be sufficient.  Just don't try 
to run more than 1 agent VM :)
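
If it helps, driving a minimal single-agent run from StorPerf's ReST API looks 
roughly like the sketch below.  Treat it as illustrative only: the endpoint 
paths and field names follow the StorPerf API docs as I understand them, and 
the image, network and sizing values are just examples.

```python
# Minimal sketch: a one-agent StorPerf run against a DevStack cloud.
# Assumes the StorPerf container is reachable on localhost:5000; payload
# field names are illustrative, check the StorPerf API docs for the
# authoritative schema.
import requests

STORPERF = "http://localhost:5000/api/v1.0"

# Create the agent stack -- keep agent_count at 1 on DevStack.
env = {
    "agent_count": 1,                # DevStack: do not go higher
    "agent_image": "Ubuntu 16.04",   # Glance image name (example)
    "public_network": "public",      # DevStack's default external network
    "volume_size": 2,                # GB of Cinder volume per agent
}
resp = requests.post(STORPERF + "/configurations", json=env)
resp.raise_for_status()
print("stack:", resp.json())

# Kick off a short job so the round trip can be verified quickly.
job = {
    "block_sizes": "4096",
    "queue_depths": "1",
    "deadline": 20,                  # minutes (example value)
}
resp = requests.post(STORPERF + "/jobs", json=job)
resp.raise_for_status()
print("job:", resp.json())
```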

As for generate-environment.sh and the environment variables collected by 
daily.sh: most of these are used for reporting to the OPNFV Test Results DB, 
and none of them are needed for the integration itself.  And, yes, since 
StorPerf itself runs fully as a client of OpenStack, it has no way of knowing 
what the storage backend is.

As noted in the etherpad, there are values (such as the Cinder driver, e.g. 
Ceph, or the number of Ceph nodes) that you would want to capture to make the 
Storage QPI more meaningful.  You will also want to use the number of Ceph 
nodes to determine how many agent VMs to run.
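
As a concrete (and purely hypothetical) illustration, that extra context could 
be recorded next to the QPI and fed back into the agent count along these 
lines; the key names and the way the Ceph node count is discovered are 
placeholders for whatever QTIP already collects:

```python
# Hypothetical sketch: capture backend context for the Storage QPI and
# derive the StorPerf agent count from it.  All values are placeholders.
ceph_node_count = 3                  # e.g. discovered from the installer

qpi_context = {
    "cinder_driver": "ceph",         # makes the QPI number interpretable
    "ceph_node_count": ceph_node_count,
}

storperf_env = {
    "agent_count": ceph_node_count,  # one agent VM per Ceph node
    # ... plus the image/network/volume settings from the earlier sketch
}
```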

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106
mark.bei...@dell.com

On Jun 20, 2017, at 23:30, Yujun Zhang (ZTE) <zhangyujun+...@gmail.com> wrote:

Hi, Mark

Following up on the project breakout session[1] at the summit, we have kicked 
off the integration of StorPerf testing for the Storage QPI.

The first question we have is about the minimum requirements for running 
StorPerf for development purposes. From my understanding, a regular OpenStack 
cloud should be enough, plus the following resources (a rough preparation 
sketch follows the list):

  *   Ubuntu 16.04 image in Glance
  *   StorPerf flavor in Nova (2 vCPUs, 8 GB RAM, 4 GB disk)
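
For example, I would expect to prepare those two resources roughly as below 
with openstacksdk (the cloud name, image file and flavor name are only 
illustrative, and flavor creation normally needs admin credentials):

```python
# Rough sketch of preparing the Glance image and Nova flavor for StorPerf.
import openstack

conn = openstack.connect(cloud="devstack")   # clouds.yaml entry (example)

# Ubuntu 16.04 image in Glance
image = conn.create_image(
    "Ubuntu 16.04",
    filename="xenial-server-cloudimg-amd64-disk1.img",
    disk_format="qcow2",
    container_format="bare",
    wait=True,
)

# StorPerf flavor in Nova (2 vCPUs, 8 GB RAM, 4 GB disk); needs admin.
flavor = conn.create_flavor("storperf", ram=8192, vcpus=2, disk=4)

print(image.id, flavor.id)
```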

Other configuration does not seem relevant to the testing itself, e.g.

  *   The storage backend is not visible to `fio` since it tests against 
`/dev/vdb` by default, is that so?
  *   generate-environment.sh seems to collect installer, network and Cinder 
configuration, but that does not appear to be used by the tests themselves. Is 
it just for reporting?

[1]: https://etherpad.opnfv.org/p/qtip-storperf

--
Yujun Zhang

_______________________________________________
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss