Freenet already has a number of configurable parameters that are set 
somewhat haphazardly, and probabilistic caching will probably add one 
or two more.

Rather than just guessing at the best values, what if the Freenet 
network collectively experimented and determined what they should be?

The idea is that nodes in the network would constantly evaluate the
overall performance of their neighbors (average response times, etc.).
Embedded in their IDs, nodes would carry a compact description of their
settings (randomly chosen at startup, and periodically modified at
random too).  After a while, the settings of the top 30% of neighbors
(yet another arbitrary value!) could be averaged, and the node could
reset its own values to that average.
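
To make the mechanism concrete, here is a rough Java sketch of what
each node might do; every class and field name below is a hypothetical
illustration, not actual Freenet code:

import java.util.*;

/** Hypothetical sketch only -- none of these classes exist in Freenet. */
class Settings {
    double[] params;  // e.g. cache probability and similar tunables

    Settings(double[] p) { params = p; }

    /** The periodic random modification: perturb one parameter. */
    void mutate(Random rng) {
        params[rng.nextInt(params.length)] += rng.nextGaussian() * 0.05;
    }
}

class Neighbor {
    Settings settings;     // decoded from the compact form in the node ID
    double avgResponseMs;  // running estimate of this neighbor's performance

    Neighbor(Settings s, double rt) { settings = s; avgResponseMs = rt; }
}

class ParameterEvolution {
    /** Average the settings of the best-performing 30% of neighbors;
        the caller adopts the result as its own new settings.
        Assumes the neighbor list is non-empty. */
    static Settings evolve(List<Neighbor> neighbors) {
        neighbors.sort(Comparator.comparingDouble(n -> n.avgResponseMs));
        int top = Math.max(1, (int) (neighbors.size() * 0.30));
        int dims = neighbors.get(0).settings.params.length;
        double[] avg = new double[dims];
        for (int i = 0; i < top; i++)
            for (int d = 0; d < dims; d++)
                avg[d] += neighbors.get(i).settings.params[d];
        for (int d = 0; d < dims; d++)
            avg[d] /= top;
        return new Settings(avg);
    }
}

The 0.05 mutation step there is, of course, as arbitrary as the 30%
cutoff.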

Thus, the network would effectively be collectively conducting a genetic 
optimization of these parameters.  If, down the line, this proves a 
security risk, we can always remove this functionality, having used the 
network to establish some good parameters anyway.

Thoughts?

Ian.

-- 
Ian Clarke                                                  ian at locut.us
Coordinator, The Freenet Project              http://freenetproject.org/
Founder, Locutus                                        http://locut.us/
Personal Homepage                                   http://locut.us/ian/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 189 bytes
Desc: not available
URL: 
<https://emu.freenetproject.org/pipermail/devl/attachments/20030328/9e527202/attachment.pgp>

Reply via email to