Here are some preliminary results from the simulator - I must stress that they're only preliminary. I haven't simulated token passing yet - these results only show throttling with backoff, throttling alone, and backoff alone.
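In case the terms are ambiguous, here's a rough sketch in Java of what I mean by the two mechanisms. It's illustrative only - not the actual simulator code - and all the names and constants are made up:

// "Throttling": the receiving node accepts a request only if a token
// bucket has capacity, otherwise the request is rejected.
class RequestThrottle {
    private final double ratePerSecond;  // sustained accept rate (illustrative)
    private final double bucketSize;     // burst allowance (illustrative)
    private double tokens;
    private long lastRefillMillis = System.currentTimeMillis();

    RequestThrottle(double ratePerSecond, double bucketSize) {
        this.ratePerSecond = ratePerSecond;
        this.bucketSize = bucketSize;
        this.tokens = bucketSize;
    }

    synchronized boolean tryAccept() {
        long now = System.currentTimeMillis();
        tokens = Math.min(bucketSize,
                tokens + (now - lastRefillMillis) / 1000.0 * ratePerSecond);
        lastRefillMillis = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;  // rejected - the sender may back off
    }
}

// "Backoff": after a rejection the sender avoids the rejecting node for
// an exponentially growing interval, resetting on success.
class SenderBackoff {
    private long delayMillis = 1000;  // initial delay (illustrative)
    private long avoidUntilMillis = 0;

    boolean canSend() {
        return System.currentTimeMillis() >= avoidUntilMillis;
    }

    void onRejection() {
        avoidUntilMillis = System.currentTimeMillis() + delayMillis;
        delayMillis = Math.min(delayMillis * 2, 60000);  // doubling, capped (illustrative)
    }

    void onSuccess() {
        delayMillis = 1000;  // reset once a request gets through
    }
}

"Throttling with backoff" combines the two: the receiver rejects when its bucket is empty, and the sender then avoids that node for a while.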
The load model is a bit simplistic: one in ten nodes is a publisher, and each publisher has ten randomly selected readers. Each publisher occasionally inserts a key, waits for ten minutes, and then informs its readers of the key; the readers then request it. The publication rate (and therefore the request rate) can be varied to investigate the effect of load - there's a rough sketch of the model at the end of this mail. Each run lasted three hours of simulation time, with the first hour's logs discarded to minimise the effect of the initial conditions.

All three mechanisms showed an increase in throughput under increasing load, i.e. there was no congestion collapse (see the attached throughput.png). Throttling alone produced higher throughput than either throttling with backoff or backoff alone, especially under heavy load.

All three mechanisms showed a decrease in success rate with increasing load, suggesting that congestion collapse might eventually occur at high enough loads (see the attached success-rate.png). Throttling alone produced a higher success rate, and degraded more slowly under load, than either throttling with backoff or backoff alone.

This suggests that the backoff mechanism is not effective in controlling load, and that the request throttle would work better without it. These conclusions are only tentative, though - much more remains to be done, when I can find enough disk space for the logs!

Cheers,
Michael

[Attachment: throughput.png - https://emu.freenetproject.org/pipermail/tech/attachments/20061122/f69172fe/attachment.png]
[Attachment: success-rate.png - https://emu.freenetproject.org/pipermail/tech/attachments/20061122/f69172fe/attachment-0001.png]
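PS - for concreteness, here's roughly what the load model looks like as code. Again, this is only a sketch, not the simulator's actual code: the network size, the clock, and the helper names are all made up.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class LoadModelSketch {
    static final int NODES = 1000;                   // network size - an assumption
    static final int READERS_PER_PUBLISHER = 10;
    static final long WAIT_MILLIS = 10 * 60 * 1000;  // ten minutes between insert and notify

    public static void main(String[] args) {
        Random random = new Random();
        for (int node = 0; node < NODES; node++) {
            if (node % 10 != 0) continue;            // one node in ten is a publisher

            // Each publisher has ten randomly selected readers.
            List<Integer> readers = new ArrayList<Integer>();
            for (int i = 0; i < READERS_PER_PUBLISHER; i++) {
                readers.add(random.nextInt(NODES));
            }

            // The publisher inserts a key, waits ten minutes (simulated time,
            // not a real sleep), then informs its readers, who request the key.
            // How often this cycle repeats is the tunable load parameter.
            String key = insertKey(node);
            long notifyTime = now() + WAIT_MILLIS;
            for (int reader : readers) {
                scheduleRequest(reader, key, notifyTime);
            }
        }
    }

    static long now() { return 0; }                                  // simulated clock - stub
    static String insertKey(int node) { return "key-" + node; }      // stub
    static void scheduleRequest(int reader, String key, long when) { // stub
    }
}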