Hi Ian,
On Wed, Jun 06, 2001 at 10:47:44AM -0700, Ian Clarke wrote:
> Great, I think that this warrants our 0.3.9.2 release.
>
> I am tempted to recommend that we incorporate Oskar's proposal for
> probabilistic caching of documents (higher probability closer to the
> origin of the request) which should have a beneficial effect on document
> longevity.
I think it may be a good idea. I don't know the details, and would love a
better description. I agree that decreasing caching will help document
lifetimes.
I have a suggestion for a "simple" alternative: it provides a guaranteed
bound, to a certain level, and it is not probabilistic.
Decide on a percentage, say 75%, and split the node's datastore like this
(sketched in code below):
75% of the space on a node is for inserted data only.
25% of the space is for cached data (documents requested through this node)
only.
Keep tabs on how often each file in the insert store is requested. When you
get an insert and the store is full, turf out the least popular files to
make room.
Let requests affect the request cache normally, provided the insert
datastore doesn't already have a copy.
This would mean that cache-synchronising attacks like the one I mentioned in
an earlier email would, in the worst case, reduce the capacity of Freenet by
at most 25%: flooding a node with requests can only churn the request cache,
never displace inserted data. Personally I see the 25% cache as a
performance enhancer anyway, not a data store.
I would welcome any criticism of this idea.
> Of course, in an ideal world we would be able to verify this through
> simulation before trying it out in real life, but nobody seems
> interested in doing simulations these days, and I suspect it would never
> happen if simulation was a precondition to implementation.
>
> What are people's thoughts on this?
I like David's idea of running many virtual nodes on one machine. It would
be nice to build a script that takes a Freenet tarball and builds a
several-thousand-node network for simulation. I would be very interested in
being involved in something like this, and it would be great to get an idea
of what sort of hardware would be required. It may just give me the excuse I
need to buy that Dual-CPU Athlon machine I've been dreaming about. :-)
This would allow us to fiddle with the source and see how it affects things.
Writing a simulation from scratch feels like a waste of time: you would just
be redoing what someone else has already done, in a way guaranteed never to
be used by most people. Also, a seemingly tiny difference between the
simulation and the real implementation could conceivably change the
performance of the network drastically.
I guess maybe we should simulate the network and run many nodes rather than
simulate many nodes on a network. :-)
I would love an environment where I could fiddle with the Freenet code (and
learn Java) and break it continuously without affecting anyone else. :-)
Cya,
Ray
_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://lists.freenetproject.org/mailman/listinfo/devl