On 09/04/2014 11:24 PM, Robert Collins wrote:
On 4 September 2014 23:42, Nejc Saje <ns...@redhat.com> wrote:

On 09/04/2014 11:51 AM, Robert Collins wrote:

It doesn't contain that term precisely, but it does talk about replicating
the buckets. What about using a descriptive name for this parameter, like
'distribution_quality', where the higher the value, the higher the distribution
evenness (and the higher the memory usage)?

I've no objection talking about keys, but 'node' is an API object in
Ironic, so I'd rather we talk about hosts - or make it something
clearly not node like 'bucket' (which the 1997 paper talks about in
describing consistent hash functions).

So proposal:
   - key - a stringifyable thing to be mapped to buckets

What about using the term 'item' from the original paper as well?

Sure. Item it is.

   - bucket - a worker/store that wants keys mapped to it
   - replicas - number of buckets a single key wants to be mapped to

Can we keep this as an Ironic-internal parameter? Because it doesn't really
affect the hash ring. If you want multiple buckets for your item, you just
continue your journey along the ring and keep returning new buckets. Check
out how the pypi lib does it:

That generator API is pretty bad IMO - because it means you're very
heavily dependent on gc and refcount behaviour to keep things clean -
and there isn't (IMO) a use case for walking the entire ring from the
perspective of an item. What's the concern with having replicas be part
of the API?

Because they don't really make sense conceptually: the hash ring itself doesn't actually 'make' any replicas. The replicas parameter in the current Ironic implementation is used solely to limit the number of buckets returned. Conceptually, that seems to me the same as take(<replicas>, iterate_nodes()). I don't know Python internals well enough to know what problems this would cause, though - can you please clarify?
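To make the take(<replicas>, iterate_nodes()) idea concrete, here is a minimal sketch of a consistent-hash ring with a generator that walks clockwise from an item's position, yielding distinct buckets. This is illustrative only - the class name, hashes_per_bucket parameter, and method names are assumptions, not the actual Ironic or hash_ring library API:

```python
import bisect
import hashlib
from itertools import islice


class HashRing(object):
    """Minimal consistent-hash ring sketch (not the Ironic implementation)."""

    def __init__(self, buckets, hashes_per_bucket=16):
        # Hash each bucket onto the ring several times to even out the
        # distribution (more hash points -> smoother spread, more memory).
        self._ring = {}
        for bucket in buckets:
            for i in range(hashes_per_bucket):
                self._ring[self._hash('%s-%d' % (bucket, i))] = bucket
        self._sorted_keys = sorted(self._ring)

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def iterate_buckets(self, item):
        """Yield distinct buckets, walking clockwise from the item's hash."""
        start = bisect.bisect(self._sorted_keys, self._hash(item))
        seen = set()
        for pos in self._sorted_keys[start:] + self._sorted_keys[:start]:
            bucket = self._ring[pos]
            if bucket not in seen:
                seen.add(bucket)
                yield bucket


ring = HashRing(['host1', 'host2', 'host3'])
# 'replicas' is then just how many buckets the caller takes from the walk:
replicas = 2
assigned = list(islice(ring.iterate_buckets('some-item'), replicas))
```

With this shape, replicas never enters the ring's constructor; the caller simply stops the generator after however many buckets it wants. The gc/refcount concern above is about callers abandoning such generators mid-walk.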

   - partitions - number of total divisions of the hash space (power of
2 required)

I don't think there are any divisions of the hash space in the correct
implementation, are there? I think that in the current Ironic implementation
this tweaks the distribution quality, just like the 'replicas' parameter in the
Ceilometer implementation.

It's absolutely a partition of the hash space - each spot we hash a
bucket onto is one. That's how consistent hashing works at all :)

Yes, but you don't assign the number of partitions beforehand; it depends on the number of buckets. What you do assign is the number of times you hash a single bucket onto the ring. That parameter is currently named 'replicas' in the Ceilometer code, but I suggested 'distribution_quality' or something similarly descriptive in an earlier e-mail.
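The point that hash-points-per-bucket tunes evenness rather than replication can be sketched as below. The function names and the four-bucket setup are hypothetical, assuming an md5-based ring; the spread() helper measures how unevenly items land across buckets for a given number of hash points per bucket:

```python
import bisect
import hashlib
from collections import Counter


def build_ring(buckets, points_per_bucket):
    # Hash each bucket onto the ring 'points_per_bucket' times; this
    # parameter controls distribution evenness, not data replication.
    ring = {}
    for bucket in buckets:
        for i in range(points_per_bucket):
            h = int(hashlib.md5(('%s-%d' % (bucket, i)).encode()).hexdigest(), 16)
            ring[h] = bucket
    return sorted(ring), ring


def lookup(sorted_keys, ring, item):
    # Map an item to the first bucket hash point clockwise of its own hash.
    h = int(hashlib.md5(item.encode()).hexdigest(), 16)
    idx = bisect.bisect(sorted_keys, h) % len(sorted_keys)
    return ring[sorted_keys[idx]]


def spread(points_per_bucket, n_items=10000):
    # Difference between the most- and least-loaded bucket; smaller is
    # more even. Raising points_per_bucket tends to shrink this gap.
    sorted_keys, ring = build_ring(['a', 'b', 'c', 'd'], points_per_bucket)
    counts = Counter(lookup(sorted_keys, ring, 'item-%d' % i)
                     for i in range(n_items))
    return max(counts.values()) - min(counts.values())
```

Comparing spread(1) against spread(64) on the same item set generally shows the gap narrowing as each bucket gets more hash points, which is why a name like 'distribution_quality' describes the parameter better than 'replicas'.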



OpenStack-dev mailing list