Morning everyone,

I've been working off and on for about a year on a performance testing tool for TripleO hardware provisioning operations, and I've been using it to collect more detailed data about how TripleO performs at scale and in production use cases. Perhaps more importantly, YODA (Yet Openstack Deployment Tool, Another) automates the task enough that days of deployment testing become a set-it-and-forget-it operation.
You can find my testing tool here [0], and the test report [1] has links to raw data and visualizations. Just scroll down, click the captcha, and click "go to kibana". I still need to port that machine from my own solution over to Search Guard. If you have too much email to consider clicking links, I'll copy the results summary here.

TripleO inspection workflows have seen massive improvements since Newton, with the failure rate for 50 nodes under the default workflow falling from 100% to <15%. With patches slated for Pike, that spurious failure rate reaches zero.

Overcloud deployments show a significant improvement in deployment speed in both HA and stack update tests.

Ironic deployments in the overcloud allow the use of Ironic for bare metal scale-out alongside more traditional VM compute. Considering that a single conductor starts to struggle around 300 nodes, it will be difficult to push a multi-conductor setup to its limits.

Finally, Ironic node cleaning shows a similar failure rate to inspection and will require similar attention in TripleO workflows to become painless.

[0] https://review.openstack.org/#/c/384530/
[1] https://docs.google.com/document/d/194ww0Pi2J-dRG3-X75mphzwUZVPC2S1Gsy1V0K0PqBo/

Thanks for your time!

- Justin

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev