Matthew Toseland wrote:
> Well, a small world network has a low diameter almost by definition ...

Exactly - MAX_DEPTH is set to 10, which is probably higher than the
diameter of the network, so that the success rate stays very close to
100%; success rate isn't what I'm trying to measure.

> you're sure it won't skew the results?

Depends on what you're trying to measure. I'm interested in (1) how many
nodes an effectively unlimited search visits before finding the data,
and (2) the length of the return path. If you also want to measure how
well a particular HTL scheme approximates an unlimited search, then I
think you'd need to run a separate set of simulations; otherwise you
won't be able to separate failures-due-to-HTL-scheme from
failures-due-to-caching-scheme.
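
To make that concrete, here's a rough sketch (not the simulator's
actual code) of how those two numbers can be measured by a depth-limited
greedy search. The Node class, the ring-distance routing and the
MAX_DEPTH constant are just assumptions for illustration:

import java.util.*;

class SearchSketch {
    static final int MAX_DEPTH = 10;

    static class Node {
        double location;                      // position on the [0,1) ring
        List<Node> neighbours = new ArrayList<>();
        Set<Double> store = new HashSet<>();  // keys held by this node
    }

    // Circular distance between two locations on the unit ring.
    static double distance(double a, double b) {
        double d = Math.abs(a - b);
        return Math.min(d, 1.0 - d);
    }

    // Greedy depth-first search, depth-limited by MAX_DEPTH. Returns the
    // return path (data holder first, requester last) or null on failure;
    // 'visited' collects every node the request touches.
    static List<Node> search(Node node, double key, int depth, Set<Node> visited) {
        visited.add(node);
        if (node.store.contains(key)) {
            List<Node> path = new ArrayList<>();
            path.add(node);
            return path;
        }
        if (depth == MAX_DEPTH) return null;
        // Try the neighbours closest to the key first.
        List<Node> next = new ArrayList<>(node.neighbours);
        next.sort(Comparator.comparingDouble(n -> distance(n.location, key)));
        for (Node n : next) {
            if (visited.contains(n)) continue;
            List<Node> path = search(n, key, depth + 1, visited);
            if (path != null) {
                path.add(node);  // extend the return path back to the requester
                return path;
            }
        }
        return null;
    }
}

After search(source, key, 0, visited) returns, visited.size() gives (1)
and path.size() - 1 gives (2).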

> Could you make the proposed change and re-run 
> and see if it makes any difference to the outcome? (I'd expect more hops, 
> more failures, so a more pronounced difference??)

Sorry, I've lost access to those 50 PCs, so I doubt I'll get it done any
time soon.

>>> Also, do you use the request rate code?
>> No, there are no bandwidth limits in these simulations and the network
>> only handles one request at a time - I had to strip out as much as
>> possible to be able to simulate more than 100 nodes.
> 
> Okay so it's just left-over code.

Ah, sorry, I see what you mean now - I thought you were talking about
throttling.

The request rate of a key represents its popularity. Requests for each
key are generated by a Poisson process with rate proportional to the
key's popularity, and each request runs to completion before the next
request starts.
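
In case it helps, here's a minimal sketch (not the simulator's actual
code) of that request generation scheme; the key popularities and the
number of requests are made-up values for illustration:

import java.util.Random;

class RequestGenerator {
    public static void main(String[] args) {
        Random random = new Random();
        // Relative popularity of three keys; the Poisson rate for each key
        // is proportional to its popularity (made-up values).
        double[] popularity = { 5.0, 2.0, 1.0 };
        double totalRate = 0.0;
        for (double p : popularity) totalRate += p;
        double now = 0.0;
        for (int i = 0; i < 10; i++) {
            // The merged process is Poisson with rate totalRate, so the
            // inter-arrival times are exponentially distributed.
            now += -Math.log(1.0 - random.nextDouble()) / totalRate;
            // Pick which key is requested, with probability proportional
            // to its popularity.
            double r = random.nextDouble() * totalRate;
            int key = 0;
            while (r > popularity[key]) {
                r -= popularity[key];
                key++;
            }
            System.out.printf("t=%.3f: request for key %d%n", now, key);
            // Each request would run to completion here before the next
            // one is generated, as in the simulations.
        }
    }
}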

Cheers,
Michael
