What is the point in having a "stable" release with such a fundamental 
problem?

Matthew Toseland wrote:
> It is well established that there is a problem with 0.5 routing. What
> relevance does this have to anything?
> 
> On Thu, Dec 01, 2005 at 09:30:01AM +0000, Gordan Bobic wrote:
> 
>>Ian Clarke wrote:
>>
>>>On 30/11/05, Gordan Bobic <gordan at bobich.net> wrote:
>>>
>>>
>>>>Matthew Toseland wrote:
>>>>
>>>>
>>>>>Umm, please read the presentation on 0.7. Specializations are simply
>>>>>fixed numbers in 0.7.  The problem with probabilistic caching according
>>>>>to specialization is that we need to deal with both very small networks
>>>>>and very large networks.  How do we sort this out?
>>>>
>>>>It's quite simple - on smaller networks, the specialisation of the node
>>>>will be wider. You use a mean and standard deviation of the current
>>>>store distribution. If the standard deviation is large, you make it more
>>>>likely to cache things further away.
>>>
>>>
>>>You are proposing a fix to a problem before we have even determined
>>>whether a problem exists.  I am not currently aware of any evidence
>>>that simple LRU provides inadequate specialization, or that we need to
>>>enforce specialization in this way.
>>>
>>>In other words: If it's not broken, don't fix it (words every software
>>>engineer should live by).
>>
>>Having just put two nodes up, one with effectively unlimited bandwidth 
>>(well, 100Mb/s) and one with less, and seeing both of them sit at either 
>>the configured bandwidth limit or maximum CPU usage, whichever runs out 
>>first, tells me that there likely is a problem.
>>
>>It seems obvious to me that without specialisation there can be no 
>>routing other than random/flooding - and I am not seeing particularly 
>>pronounced specialisation. The only reason it _seems_ to work is because 
>>popular content gets cached on most nodes.
>>
>>Gordan
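
For what it's worth, the heuristic Gordan sketches above (widen the node's 
caching specialisation while its datastore is still spread out, narrow it as 
the store concentrates around the node's specialty) can be made concrete. 
Below is a minimal sketch in Java; it is not Freenet code, and all class, 
method and parameter names are hypothetical. Key locations are modelled as 
doubles on a circular [0, 1) keyspace.

import java.util.List;

/**
 * Illustrative sketch of adaptive caching width: the node measures how
 * spread out its stored keys are around its specialty, and lets the
 * probability of caching an incoming key fall off more slowly with
 * distance when that spread is large (small network) than when it is
 * small (large, well-specialised network).
 */
public class AdaptiveCacheWidth {

    /** Circular distance between two key locations on the [0, 1) keyspace. */
    static double distance(double a, double b) {
        double d = Math.abs(a - b);
        return Math.min(d, 1.0 - d);
    }

    /** Standard deviation of stored keys' distances from the node's specialty. */
    static double storeSpread(List<Double> storedKeys, double specialty) {
        if (storedKeys.isEmpty()) return 0.5; // empty store: assume a wide spread
        double mean = 0.0;
        for (double k : storedKeys) mean += distance(k, specialty);
        mean /= storedKeys.size();
        double var = 0.0;
        for (double k : storedKeys) {
            double dev = distance(k, specialty) - mean;
            var += dev * dev;
        }
        return Math.sqrt(var / storedKeys.size());
    }

    /**
     * Probability of caching an incoming key: near 1 for keys close to the
     * specialty, decaying with distance, with the decay width taken from
     * the current store spread so that a widely spread store caches
     * "further away" and a tight store caches narrowly.
     */
    static double cacheProbability(double key, double specialty, double spread) {
        double width = Math.max(spread, 0.01);            // avoid a zero width
        double d = distance(key, specialty);
        return Math.exp(-(d * d) / (2 * width * width));  // Gaussian falloff
    }
}

The Gaussian falloff is just one possible shaping function; the point is only 
that the effective width of the specialisation adapts to the observed store 
distribution rather than being a fixed number.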
