Hi Brian,

You are missing the spirit of the proposed multiple-hashing scheme in terms of 
the hash table storage scalability it can offer - this is independent of the 
quality of the hash functions. Scalability is key - e.g., there could be 
millions of flows, of which only a small percentage are long-lived large 
flows that you would like to learn.

With a single hash, your hash table size would need to be proportional to the 
total number of flows to minimize false positives; with multiple hashes this 
could be reduced to a number proportional to the number of long-lived large 
flows - the exact number would depend on the use cases and traffic patterns.
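To give a feel for the numbers, here is a back-of-the-envelope calculation 
using the standard Bloom filter sizing formulas (the flow count and 
false-positive target below are illustrative assumptions, not figures from 
the draft):

```python
import math

# Assumed numbers for illustration only: 1,000,000 concurrent flows,
# 1% false-positive target for the membership check.
n = 1_000_000   # total flows to represent
p = 0.01        # acceptable false-positive probability

# Optimal Bloom filter size (bits) and number of hash functions
# for n elements at false-positive rate p.
m = -n * math.log(p) / (math.log(2) ** 2)
k = (m / n) * math.log(2)

print(f"bits per flow: {m / n:.1f}")   # ~9.6 bits per flow
print(f"hash functions: {round(k)}")   # ~7 hashes
```

Roughly 10 bits per flow with several hashes, versus a full table entry (tens 
of bytes of flow key plus counters) per flow with a single-hash table sized 
to keep collisions rare - that is the storage gap the scheme is after.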

As I mentioned before, this scheme is similar to a Bloom filter (which uses 
multiple hash functions to substantially reduce the amount of storage) - 
please see:
http://en.wikipedia.org/wiki/Bloom_filter
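To make the analogy concrete, here is a minimal Bloom filter sketch - my own 
illustration, not code from the draft; the flow-key strings and sizes are 
made up, and the k indices are derived from one SHA-256 digest 
(double-hashing style) rather than k independent hash functions:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions share one small bit array."""

    def __init__(self, m_bits, k_hashes):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, key):
        # Derive k indices from a single digest (double-hashing trick):
        # index_i = (h1 + i * h2) mod m.
        digest = hashlib.sha256(key.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def contains(self, key):
        # All k bits set => "probably present"; any bit clear => definitely not.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

# Hypothetical flow keys for illustration.
bf = BloomFilter(m_bits=8192, k_hashes=4)
bf.add("flow:10.0.0.1->10.0.0.2:443")
print(bf.contains("flow:10.0.0.1->10.0.0.2:443"))   # True
print(bf.contains("flow:192.0.2.1->192.0.2.2:80"))  # almost certainly False
```

Note the one-sided error: a miss is definitive, while a hit is only probable - 
which is why the full flow key still needs to be verified before a flow is 
learned as a large flow.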

I will also add some of these details to an Appendix section in the draft. If 
needed, I would be glad to have a conference call to discuss this topic 
further.

Thanks,
ramki

-----Original Message-----
From: Brian E Carpenter [mailto:[email protected]] 
Sent: Wednesday, January 16, 2013 12:18 AM
To: Melinda Shore
Cc: ramki Krishnan; 
[email protected]; [email protected]
Subject: Re: [OPSAWG] I-D Action: 
draft-krishnan-opsawg-large-flow-load-balancing-02.txt

On 16/01/2013 08:02, Melinda Shore wrote:
> On 1/15/13 10:55 PM, Brian E Carpenter wrote:
>> "Let us assume the probability of a short-lived flow matching a 
>> single hash table entry is .25"
> 
>> That would be a very poor hash function. 
> 
> That would be a disturbingly bad hash function.  But let me ask this:
> do you think the model would be workable with a better hash function?

The model works. I just don't see the point of running several inferior hash 
functions instead of one good one.

   Brian
_______________________________________________
OPSAWG mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/opsawg
