> I think Doug's suggestion of keeping the schema files in-tree and pushing
> them to a well-known tarball maker in a build step is best so far.
>
> It's still a little clunky, but not as clunky as having to sync two repos.
Yes, I tend to agree. So just to confirm that my understanding lines up:

 * the tarball would be used by the consumer side for unit tests and
   limited functional tests (where the emitter service is not running)

 * the tarball would also be used by the consumer side in DSVM-based CI
   and in full production deployments (where the emitter service is
   running)

 * the tarballs will be versioned, with old versions remaining accessible
   (as per the current practice with released source on
   tarballs.openstack.org)

 * the consumer side will know which version of each schema it expects to
   support, and will download the appropriate tarball at runtime

 * the emitter side will signal the schema version that it's actually
   using, via say a well-known field in the notification body

 * the consumer will reject notification payloads whose major version
   doesn't match the one it expects to support

> [snip]
>
> >> >> d. Should we make separate distro packages? Install to a well known
> >> >> location all the time? This would work for local dev and integration
> >> >> testing and we could fall back on B and C for production distribution.
> >> >> Of course, this will likely require people to add a new distro repo.
> >> >> Is that a concern?
> >>
> >> > Quick clarification ... when you say "distro packages", do you mean
> >> > Linux-distro-specific package formats such as .rpm or .deb?
> >>
> >> Yep.
>
> > So that would indeed work, but just to sound a small note of caution:
> > keeping an oft-changing package (assumption #5) up-to-date for
> > fedora20/21 & epel6/7, or precise/trusty, would involve some work.
>
> > I don't know much about the Debian/Ubuntu packaging pipeline, in
> > particular how it could be automated.
>
> > But in my small experience of Fedora/EL packaging, the process is
> > somewhat resistant to many fine-grained updates.
>
> Ah, good to know. So, if we go with the tarball approach, we should be
> able to avoid this.
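Just to make the version-rejection point above concrete, here's a minimal
sketch of what the consumer-side check might look like. The field name
"schema_version" and the major.minor version format are both assumptions on
my part, not anything we've agreed on yet:

```python
# Hypothetical consumer-side check: reject payloads whose schema major
# version differs from the one this consumer supports.

def accept_notification(payload, supported_major):
    """Return True if the payload's schema major version matches ours."""
    # "schema_version" is an assumed well-known field, e.g. "2.1"
    version = payload.get("schema_version", "")
    try:
        major = int(version.split(".")[0])
    except ValueError:
        # Missing or malformed version: safest to reject
        return False
    return major == supported_major

# A consumer supporting major version 2 would accept "2.3" but not "3.0":
# accept_notification({"schema_version": "2.3"}, 2)  -> True
# accept_notification({"schema_version": "3.0"}, 2)  -> False
```

The idea being that minor-version bumps stay backward compatible, so only a
major mismatch forces rejection.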
> And it allows the service to easily serve up the schema using their
> existing REST API.

I'm not clear on how serving up the schema via an existing API would avoid
the co-ordination issue identified in the original option (b)? Would that
API just be a very thin proxy in front of the well-known source of these
tarballs?

For production deployments, is it likely that some shops will not want to
require access to an external site such as tarballs.openstack.org? In that
case, would we require that they mirror, or just assume that downstream
packagers will bundle the appropriate schema versions with the packages
for the emitter and consumer services?

Cheers,
Eoghan

> Should we proceed under the assumption we'll push to a tarball in a
> post-build step? It could change if we find it's too messy.
>
> -S
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStackfirstname.lastname@example.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev