It routinely surprises me how I can fetch a freesite one day, and
the next day I have to wait for the whole site to be refetched again.
Particularly knowing how much of the Freenet code base is dedicated to
caches!
Concerning remote fetches, I remember Matthew saying something to
the effect of: "general network performance is related to how easily
the data is found" (because otherwise much of the network traffic is
spent trying to reach the 2-3 nodes which hold the data we need).
I've been out of the loop for a while, so if something here is now
technically wrong, please correct it...
Theory:
* local/remote CHK fetch success should be nearly 100% (mine are at 74% & 9%)
* most nodes' datastores "are not filling up" (long-running; mine is at
  40% of 50GB)
* a node's "cache" should (a rough sketch of this follows the list):
  - serve requests of the node itself (with a view to avoiding remote requests)
  - not reveal what the node "views" (i.e. don't service remote requests
    from the cache if the key is far from our network location)
  - serve requests of the network (e.g. to move displaced data back
    into the correct "store")
As best I can reason, either some part of the node's cache (as
described above) is *broken*, or we need to make inserts affect *many
more* datastores.
One idea I had was, rather than depositing the datum in the 2-3
"closest" nodes, to deposit it in every node within
"epsilon" of the target address (i.e. try to store it in an adjustable
pie-wedge sliver of the keyspace, rather than in a constant number of nodes).
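Roughly what I have in mind (again only a sketch with made-up names and an
adjustable epsilon, not a patch against the real routing code):

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of the "pie-wedge" insert: store at every known node whose
    // location falls within epsilon of the key's target location, instead
    // of at a fixed number of closest nodes.
    public final class WedgeInsertSketch {

        static double circularDistance(double a, double b) {
            double d = Math.abs(a - b);
            return Math.min(d, 1.0 - d);
        }

        // All peer locations inside the adjustable wedge around the target.
        static List<Double> insertTargets(List<Double> peerLocations,
                                          double targetLocation, double epsilon) {
            List<Double> targets = new ArrayList<>();
            for (double loc : peerLocations) {
                if (circularDistance(loc, targetLocation) <= epsilon)
                    targets.add(loc);
            }
            return targets;
        }
    }

Widening or narrowing epsilon would then tune how many stores an insert
reaches, instead of that being pinned at 2-3 nodes.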
--
Robert Hailey