Hi,

Recently we started an experiment with the compute and qa programs: using Gerrit to review blueprints. Launchpad is deficient in this area, and while we hope Storyboard will handle it much better, it's not ready yet.
As a development organization, OpenStack scales by adopting common tools and processes, and true to form, a number of other projects would now like to join the "experiment". At some point that stops being an experiment and becomes practice. However, at this very early stage, we haven't settled on answers to some really basic questions about how the process should work. Before we extend it to more projects, I think we need to establish a modicum of commonality that helps us integrate it with our tooling at scale and, just as importantly, gives new contributors and people who work on multiple projects a better experience.

I'd like to hold off on creating any new specs repos until we have at least the following questions answered:

a) Should the specs repos be Sphinx documents?

b) Should they follow the Project Testing Interface[1]?

c) Can we reach some basic agreement on what information is encoded? E.g.: don't encode implementation status (it should be in Launchpad); do encode branches (as directories? as ...?).

d) What is the workflow process -- what are the steps to create a new spec and make sure it also exists and is tracked correctly in Launchpad?

-Jim

[1] https://wiki.openstack.org/wiki/ProjectTestingInterface

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev