Thanks John - combining with the existing effort seems like the right thing to do (I've reached out to Claxton to coordinate). Great to see that the larger issues around quotas / write-once have already been agreed.
So I propose that sharing will work in the same way, but some values are visible across all instances in the project. I do not think it would be appropriate for all entries to be shared this way.

A few options:
1) A separate endpoint for shared values
2) Keys are shared iff they start with a given prefix, e.g. 'peers_XXX'
3) Keys are set the same way, but a 'shared' parameter can be passed, either as a query parameter or in the JSON

I like option #3 the best, but feedback is welcome (a rough sketch of what that request might look like is below the quoted thread).

I think I will have to store the value using a system_metadata entry per shared key. I think this avoids issues with concurrent writes, and also makes it easier to have more advanced sharing policies (e.g. when we have hierarchical projects).

Thank you to everyone for helping me get to what IMHO is a much better solution than the one I started with!

Justin

On Tue, Jan 28, 2014 at 4:38 AM, John Garbutt <[email protected]> wrote:
> On 27 January 2014 14:52, Justin Santa Barbara <[email protected]> wrote:
>> Day, Phil wrote:
>>
>>> >> We already have a mechanism now where an instance can push metadata
>>> >> as a way of Windows instances sharing their passwords - so maybe
>>> >> this could build on that somehow - for example each instance pushes
>>> >> the data it's willing to share with other instances owned by the
>>> >> same tenant?
>>> >
>>> > I do like that and think it would be very cool, but it is much more
>>> > complex to implement, I think.
>>>
>>> I don't think it's that complicated - it just needs one extra attribute
>>> stored per instance (for example in instance_system_metadata) which
>>> allows the instance to be included in the list.
>>
>> Ah - OK, I think I better understand what you're proposing, and I do
>> like it. The hardest part of having the metadata store be full
>> read/write would be defining what is and is not allowed (rate-limits,
>> size-limits, etc.). I worry that you end up with a new key-value store,
>> and with per-instance credentials. That would be a separate discussion:
>> this blueprint is trying to provide a focused replacement for multicast
>> discovery for the cloud.
>>
>> But: thank you for reminding me about the Windows password though... It
>> may provide a reasonable model:
>>
>> We would have a new endpoint, say 'discovery'. An instance can POST a
>> single string value to the endpoint. A GET on the endpoint will return
>> any values posted by all instances in the same project.
>>
>> One key only; name not publicly exposed ('discovery_datum'?); 255 bytes
>> of value only.
>>
>> I expect most instances will just post their IPs, but I expect other
>> uses will be found.
>>
>> If I provided a patch that worked in this way, would you/others be
>> on-board?
>
> I like that idea. Seems like a good compromise. I have added my review
> comments to the blueprint.
>
> We have a related blueprint going on, setting metadata on a particular
> server, rather than a group:
> https://blueprints.launchpad.net/nova/+spec/metadata-service-callbacks
>
> It limits things using the existing quota on metadata updates.
>
> It would be good to agree a similar format between the two.
>
> John
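For concreteness, here is a very rough sketch of what option #3 might look like from inside an instance, in Python. The write endpoint, the key name and the exact shape of the 'shared' flag are all placeholders I've made up for discussion - nothing below is an agreed API.

    # Illustrative only: the endpoint path, key name and the 'shared' flag
    # are assumptions for discussion, not an agreed API.
    import json
    import requests

    # Link-local metadata service, as seen from inside the instance.
    METADATA_BASE = "http://169.254.169.254/openstack/latest"

    # Publish a value this instance is willing to share with the rest of
    # the project: the same call as a private value, plus the proposed
    # 'shared' flag.
    resp = requests.post(
        METADATA_BASE + "/metadata",           # hypothetical write endpoint
        headers={"Content-Type": "application/json"},
        data=json.dumps({
            "key": "peers_ip",                 # key name chosen by the instance
            "value": "10.0.0.12",              # e.g. its fixed IP
            "shared": True,                    # omit (or False) for a private value
        }),
    )
    resp.raise_for_status()

    # Any instance in the same project can then read back everything its
    # peers have chosen to share.
    shared = requests.get(METADATA_BASE + "/metadata",
                          params={"shared": "true"}).json()
    for instance_id, values in shared.items():
        print(instance_id, values.get("peers_ip"))

The property I like about #3 is visible here: a shared write is exactly the same call as a private write, with the flag being the only difference.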
