As you know, I am against this. The problems that follow from the fact that two requests are _not_ identical just because they are for the same key are too large to make this worth it.
I am, however, adding the capability of "tapping into" the stream for a key if it is currently tunneling through the node when the data arrives (I refer to this as stream forking). Should the data tunnel fail, though, I restart all the requests that were tapped in, even though they are for the same data.

I have not made it switch over to the first stream if it receives other DataReplys while already receiving one; rather, I tunnel the later ones independently but only cache the first. The question of which one to cache (and the problem you mention of the chosen one going down) is not an issue, since it is the first one to finish (correctly) that gets cached. The model of switching to the first might be wiser, but it raises the problem of how to close the other incoming streams in a friendly way.

What would be ideal is if an upstream node could detect that it was already sending the data to a downstream one, and "refer" to that stream rather than starting a new one. That is not impossible either, but I haven't thought it through, so there could be a rub.
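To make the idea concrete, here is a rough sketch of the tap-in bookkeeping in Java. This is illustration only, not code from the node; the class and method names (StreamForkTable, Request, sendChunk and so on) are made up for this mail, and the real message handling is obviously more involved.

import java.util.*;

/** Sketch of "stream forking": a later request for a key that is already
 *  tunneling through this node taps into the live stream instead of being
 *  forwarded again. All names here are illustrative. */
public class StreamForkTable {

    /** One data stream currently tunneling through the node. */
    static class ActiveTransfer {
        final String key;
        final List<Request> tappedRequests = new ArrayList<>();
        ActiveTransfer(String key) { this.key = key; }
    }

    /** A pending request from a downstream node (placeholder type). */
    interface Request {
        void sendChunk(byte[] chunk);   // forward data to the requester
        void restart();                 // re-run the request from scratch
    }

    private final Map<String, ActiveTransfer> active = new HashMap<>();

    /** Called when the first DataReply for a key starts tunneling through.
     *  Later replies for the same key are tunneled independently and are
     *  not tracked or cached here. */
    public synchronized void transferStarted(String key) {
        active.putIfAbsent(key, new ActiveTransfer(key));
    }

    /** A new request arrives: tap in if the data is already flowing,
     *  otherwise tell the caller to forward it as usual. */
    public synchronized boolean tryTapIn(String key, Request req) {
        ActiveTransfer t = active.get(key);
        if (t == null) return false;
        t.tappedRequests.add(req);
        return true;
    }

    /** Each chunk of the tunneled data is copied to every tapped request. */
    public synchronized void chunkReceived(String key, byte[] chunk) {
        ActiveTransfer t = active.get(key);
        if (t == null) return;
        for (Request r : t.tappedRequests) r.sendChunk(chunk);
    }

    /** If the upstream tunnel dies, every tapped request is restarted,
     *  even though they were all for the same data. */
    public synchronized void transferFailed(String key) {
        ActiveTransfer t = active.remove(key);
        if (t == null) return;
        for (Request r : t.tappedRequests) r.restart();
    }

    /** The first stream to finish correctly is the one that gets cached. */
    public synchronized void transferFinished(String key) {
        active.remove(key);
    }
}

The restart() calls in transferFailed() are the point I was making above: if the tunnel dies, every tapped request goes back out on its own, because being for the same key does not make them the same request.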
On Thu, 27 Jul 2000, you wrote:
<snip>
> Compromise?
> Maybe every request for the same data should forward (to different nodes?), but
> everyone jumps onto the first tunnel to be created. Or maybe new request can
> only join active tunnels that are being cached?
>
> Ideas please
>
> AGL
>
> --
> The difference between genius and stupidity is that genius has its limits.

--
\oskar