On Fri, Jul 25, 2003 at 11:13:00PM +0100, Gordan wrote:
> On Friday 25 July 2003 19:24, Toad wrote:
> 
> > > This has all been solved before. If the solution is pre-caching, then
> > > there are better ways to achieve it than making each download 1 MB.
> > > That is just ridiculous. Instead, it would probably be better to
> > > implement a limited-depth web crawler in fproxy that would download
> > > things up to 1 or 2 hops away from the page being visited, with a limit
> > > on how many downloads to do simultaneously.
> > >
> > > That way, it can still be handled in the node, it will help site
> > > propagation, and it will speed things up. And best of all, it will not
> > > require the horrible, horrible kludge of using archives to transfer
> > > entire sites.
> >
> > It would be a lot more code than the containers code is. If you want to
> > implement it, go right ahead; we will pick holes in your code but
> > eventually it would probably be accepted.
> 
> I may just do that. I've been planning to get stuck into Freenet code for 
> quite a while now.
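
To make the shape of that concrete, here is a rough sketch in plain Java of
the depth-limited part. None of these names exist in fproxy; fetch() and
extractLinks() just stand in for the node's client layer and an HTML link
extractor. The two knobs in question (hop depth and simultaneous downloads)
are the constructor arguments.

import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch only; nothing here is existing fproxy code.
public class PrefetchCrawler {
    private final int maxDepth;          // hops from the page being visited (1 or 2)
    private final ExecutorService pool;  // caps simultaneous prefetch downloads
    private final Set<String> seen = Collections.synchronizedSet(new HashSet<String>());

    public PrefetchCrawler(int maxDepth, int maxConcurrentFetches) {
        this.maxDepth = maxDepth;
        this.pool = Executors.newFixedThreadPool(maxConcurrentFetches);
    }

    /** Queue prefetches for every key linked from a page the user just viewed. */
    public void crawl(final String key, final int depth) {
        if (depth > maxDepth || !seen.add(key)) return;  // depth limit, no revisits
        pool.submit(new Runnable() {
            public void run() {
                byte[] page = fetch(key);                // assumed: fetch via the node
                if (page == null) return;
                for (String link : extractLinks(page))   // assumed: pull links out of the HTML
                    crawl(link, depth + 1);
            }
        });
    }

    // Placeholders for the real client-layer call and link parser.
    protected byte[] fetch(String key) { return null; }
    protected List<String> extractLinks(byte[] html) { return Collections.emptyList(); }
}

The fixed-size pool is what bounds concurrent downloads, and the depth check
plus the seen-set keeps the crawl from wandering off across the whole network.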
> 
> > > Purely client side solutions already exist. I am sure that I saw a piece
> > > of software years ago that interfaces with IE and tries to pre-cache
> > > things for you, so that when you click on a link, the chances are that
> > > the next page is already cached.
> >
> > Precaching is certainly possible; it is made easier by the fact that we
> > can determine the size of a file from its key, before downloading it.
> > However, it would use a lot of download threads, and we certainly do not
> > want all nodes to be using all their spare capacity when idle to
> > prefetch data, because that would produce a vast network load. So
> > firstly, we need to deal with pooling requests, so that it does less
> > prefetching when a splitfile download is in progress, for example;
> > secondly, it would need to run nonblocking, but that's relatively easy;
> > thirdly, we would need to set some parameters to limit the maximum load
> > it causes even when the node is idle. I am sure you can think of
> > other issues to solve. And I don't regard it as a priority; I am more
> > concerned with fixing routing :)
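
The load-limiting part is the piece I would want pinned down first. Roughly
what I have in mind is a hard ceiling on concurrent prefetches plus a complete
back-off while any real request, such as a splitfile download, is in flight.
Sketch only: ActiveRequestCounter is invented here to stand for however the
node tracks its own in-flight user requests.

import java.util.concurrent.Executor;
import java.util.concurrent.Semaphore;

// Hypothetical sketch; not the node's real request scheduler.
public class PrefetchThrottle {
    private final Semaphore slots;            // ceiling on idle-time prefetch load
    private final ActiveRequestCounter node;  // assumed view of in-flight user requests

    public PrefetchThrottle(int maxPrefetches, ActiveRequestCounter node) {
        this.slots = new Semaphore(maxPrefetches);
        this.node = node;
    }

    /** Try to start a prefetch; refuse while the node is busy with real work. */
    public boolean tryPrefetch(final Runnable prefetchTask, Executor pool) {
        if (node.activeUserRequests() > 0) return false;  // real traffic wins
        if (!slots.tryAcquire()) return false;            // idle-load ceiling reached
        pool.execute(new Runnable() {
            public void run() {
                try { prefetchTask.run(); }
                finally { slots.release(); }
            }
        });
        return true;
    }

    /** Hypothetical hook: however the node counts its active user requests. */
    public interface ActiveRequestCounter { int activeUserRequests(); }
}

Anything smarter, such as weighting prefetches by the file size we can read
off the key, or shrinking the ceiling under network load, can hang off the
same hook later.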
> 
> Fair enough. Speaking of routing, I used to have a very consistently "green"
> routing table a few months back. Nowadays, it is doing well if there are even
> 2-3 green entries in it; the rest are all red. Has there been a recent
> development that could explain that?

Slashdot, perhaps?
> 
> Gordan

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
