On 12/03/2013 11:40 AM, John Dickinson wrote:
> On Dec 3, 2013, at 8:05 AM, Jay Pipes <[email protected]> wrote:
>> On 12/03/2013 10:04 AM, John Dickinson wrote:
>>> How are you proposing that this integrate with Swift's account and
>>> container quotas (especially since there may be hundreds of thousands of
>>> accounts and millions (billions?) of containers in a single Swift cluster)?
>>> A centralized lookup for quotas doesn't really seem to be a scalable
>>> solution.
>> From reading below, a centralized lookup does not seem to be what is
>> proposed. The design described is a push-change strategy: the quota numbers
>> themselves are stored in a canonical location in Keystone, but when those
>> numbers change, Keystone sends a notification of that change to subscribing
>> services such as Swift, which would presumably keep one or more levels of
>> caching for things like account and container quotas...
> Yes, I get that, and there are already methods in Swift to support it. The
> trick, though, is either (1) storing all the canonical info in Keystone and
> scaling that, or (2) storing some "boiled down" version, if possible, and
> fanning it out to all of the resources in Swift. Both are difficult, and
> both require storing the information in the central Keystone store.
The storage driver for quotas in Keystone could use something like
Cassandra as its data store, leaving the Keystone endpoint stateless and
only responsible for relaying the update message to subscribers.
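To make the subscriber side of that push model concrete, here is a minimal
sketch of how a service like Swift could cache quota values and react to a
quota-change notification relayed by Keystone. All names here (QuotaCache,
on_quota_changed, the notification payload shape) are hypothetical
illustrations, not actual Swift or Keystone APIs:

```python
class QuotaCache:
    """In-memory quota cache: filled lazily from the canonical store,
    updated or invalidated when a push notification arrives."""

    def __init__(self, fetch_quota):
        # fetch_quota is a callable hitting the canonical store (Keystone)
        self._fetch = fetch_quota
        self._cache = {}  # (account, container) -> quota value

    def get(self, account, container=None):
        """Return the cached quota, fetching it on a cache miss."""
        key = (account, container)
        if key not in self._cache:
            self._cache[key] = self._fetch(account, container)
        return self._cache[key]

    def on_quota_changed(self, notification):
        """Handle a hypothetical 'quota updated' notification from Keystone."""
        key = (notification["account"], notification.get("container"))
        if "new_quota" in notification:
            # Trust the pushed value directly...
            self._cache[key] = notification["new_quota"]
        else:
            # ...or just invalidate and refetch lazily on next lookup.
            self._cache.pop(key, None)
```

The design choice here is that the push message only needs to carry (or
invalidate) the changed entry, so the central store is consulted only on
cache misses rather than on every request.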
Each "type" of thing Keystone manages -- identity, token, catalog, etc
-- can have a different storage driver. Adding a new storage driver for
Cassandra and its ilk would be pretty trivial. That way Keystone folks
can focus on the job at hand (notifying subscribers of updates to
quotas) and Cassandra developers can focus on scaling data storage and
retrieval.
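As a rough illustration of that pluggable-driver idea, here is a sketch of
what a quota storage driver interface might look like, with a toy in-memory
backend standing in for a Cassandra-backed one. These class and function
names are assumptions for illustration, not Keystone's actual interfaces:

```python
import abc


class QuotaDriver(abc.ABC):
    """Hypothetical storage driver interface for quotas, in the spirit of
    Keystone's per-subsystem (identity, token, catalog, ...) drivers."""

    @abc.abstractmethod
    def get_quota(self, project_id, resource):
        """Return the stored limit, or None if unset."""

    @abc.abstractmethod
    def set_quota(self, project_id, resource, limit):
        """Persist a new limit."""


class InMemoryQuotaDriver(QuotaDriver):
    """Toy backend; a real deployment could plug in Cassandra here."""

    def __init__(self):
        self._data = {}

    def get_quota(self, project_id, resource):
        return self._data.get((project_id, resource))

    def set_quota(self, project_id, resource, limit):
        self._data[(project_id, resource)] = limit


def set_and_notify(driver, notify, project_id, resource, limit):
    """Keystone stays stateless: write via the driver, then relay the
    change to subscribers via the notify callable."""
    driver.set_quota(project_id, resource, limit)
    notify({"project_id": project_id, "resource": resource, "limit": limit})
```

With this split, scaling the data store is the backend's problem, and the
endpoint's only jobs are persisting the write and relaying the notification.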
Best,
-jay
_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev