Hey all,

I have been curious about the impact of providing performance feedback as part of the review process. From what I understand, keystone used to have a performance job that ran against proposed patches (I've only heard about it, so someone else will have to keep me honest about its timeframe), but it sounds like it wasn't valued.
I think revisiting this topic is valuable, but it raises a series of questions. Initially it probably only makes sense to test a reasonable set of defaults. What do we want those defaults to be? Should they be determined by DevStack, openstack-ansible, or something else? What do the performance test criteria look like, and where do they live? Does it just consist of running tempest?

From a contributor and reviewer perspective, it would be nice to be able to compare performance results across patch sets. I understand that keeping all performance results for every patch for an extended period of time is unrealistic. Maybe we take a daily performance snapshot against master and use that to map performance patterns over time? (A rough sketch of the kind of comparison I'm picturing is at the end of this mail.)

Have any other projects implemented a similar workflow? I'm open to suggestions and discussion, because I can't imagine there aren't other folks out there interested in this type of pre-merge data point.

Thanks!

Lance
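As a strawman, here's a minimal sketch of the comparison I have in mind. The file names, JSON layout, and 10% threshold are all hypothetical; it just assumes the daily master job and the per-patch job each publish a flat JSON summary of test durations:

#!/usr/bin/env python3
"""Strawman: compare a patch's performance results against the latest
daily master baseline. File names and layout are hypothetical; this
assumes each job dumps a flat JSON mapping of {test_name: seconds}."""

import json
import sys


def load_results(path):
    # Results file is assumed to be a flat mapping of
    # test name -> duration in seconds.
    with open(path) as f:
        return json.load(f)


def compare(baseline, patch, threshold=0.10):
    """Return (test, baseline_s, patch_s, delta_pct) for tests that
    regressed by more than `threshold` (10% by default)."""
    regressions = []
    for test, base_time in baseline.items():
        patch_time = patch.get(test)
        if patch_time is None:
            continue  # test not present in the patch run
        delta = (patch_time - base_time) / base_time
        if delta > threshold:
            regressions.append((test, base_time, patch_time, delta * 100))
    return regressions


if __name__ == "__main__":
    # Hypothetical artifact names produced by the daily master job and
    # the per-patch performance job.
    baseline = load_results(sys.argv[1] if len(sys.argv) > 1
                            else "master-daily.json")
    patch = load_results(sys.argv[2] if len(sys.argv) > 2
                         else "patchset.json")

    for test, base_s, patch_s, pct in compare(baseline, patch):
        print("REGRESSION %s: %.2fs -> %.2fs (+%.1f%%)"
              % (test, base_s, patch_s, pct))

Something along these lines could run as a non-voting job and just leave its output in the job logs for reviewers, without us having to keep full results for every patch.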