On Thu, Jul 24, 2003 at 05:41:19PM +0100, Gordan wrote:
> On Thursday 24 July 2003 15:41, Michael Schierl wrote:
> > Gordan schrieb:
> > > What happens when the same
> > > files are linked from multiple pages, e.g. active links?
> >
> > Pointing active links at other sites' .zip manifests is simply
> > broken. An active link should show that the content is still
> > there - so bundle it with the HTML file.
> 
> With each HTML file?
> 
> How long, exactly, would you expect a large site with active links to
> a lot of other sites to take to load if it has to download a complete
> 1 MB archive for each active link? It would take forever. I do not
> believe that is workable.
> 
> > IMO containers are a better approach than creating huge sites (like TFE
> > or nubile) or using "images" linking to HTML for preloading sites - or
> > providing a compressed version separately (like TFEE), which can hardly
> > be retrieved.
> 
> This has all been solved before. If the solution is pre-caching, then
> there are better ways to achieve it than making each download 1 MB.
> That is just ridiculous. Instead, it would probably be better to
> implement a limited-depth web crawler in fproxy that downloads things
> up to 1 or 2 hops away from the page being visited, with a limit on
> how many downloads run simultaneously.
> 
> That way, it can still be handled in the node, it will help site
> propagation, and it will speed things up. And best of all, it will not
> require the horrible, horrible kludge of using archives to transfer
> entire sites.

It would be a lot more code than the containers code. If you want to
implement it, go right ahead; we will pick holes in your code, but
eventually it would probably be accepted.
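
For what it's worth, here is a minimal sketch of the shape such a
crawler might take, in Java. Everything here is hypothetical - the
FetchClient interface is a stand-in for whatever fproxy actually
exposes - the only point is the bounded depth and the bounded number of
simultaneous downloads:

    import java.util.List;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    /** Hypothetical depth- and concurrency-limited prefetcher. */
    class PrefetchCrawler {
        /** Stand-in for whatever fetch interface fproxy really exposes. */
        interface FetchClient {
            byte[] fetch(String key);               // fetch one key (fills the cache)
            List<String> extractLinks(byte[] page); // keys linked from a page
        }

        private final FetchClient client;
        private final int maxDepth;                 // e.g. 1 or 2 hops
        private final ExecutorService pool;         // caps simultaneous downloads
        private final Set<String> seen = ConcurrentHashMap.newKeySet();

        PrefetchCrawler(FetchClient client, int maxDepth, int maxConcurrent) {
            this.client = client;
            this.maxDepth = maxDepth;
            this.pool = Executors.newFixedThreadPool(maxConcurrent);
        }

        /** Prefetch everything reachable within maxDepth hops of key. */
        void prefetch(String key, int depth) {
            if (depth > maxDepth || !seen.add(key)) return;
            pool.submit(() -> {
                byte[] page = client.fetch(key);
                for (String link : client.extractLinks(page))
                    prefetch(link, depth + 1);
            });
        }
    }

The fixed pool size is what keeps the node from spawning a thread per
link, and the seen set stops it re-fetching pages that several pages
link to.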
> 
> Purely client-side solutions already exist. I am sure I saw a piece of
> software years ago that interfaced with IE and tried to pre-cache
> pages for you, so that when you clicked a link, the chances were that
> the next page was already cached.

Precaching is certainly possible; it is made easier by the fact that we
can determine the size of a file from its key, before downloading it.
However, it would use a lot of download threads, and we certainly do
not want every node using all its spare capacity to prefetch data while
idle, because that would produce a vast network load. So, firstly, we
need to pool requests, so that prefetch backs off when, for example, a
splitfile download is in progress; secondly, it would need to run
nonblocking, but that is relatively easy; thirdly, we would need to set
parameters limiting the maximum load it can cause even when the node is
idle. I am sure you can think of other issues to solve. And I don't
regard it as a priority; I am more concerned with fixing routing :)
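
To make the pooling and load-limit points concrete, here is a sketch of
one possible shape, again in Java with invented names - nothing here is
the node's actual scheduler. User and splitfile requests share a
semaphore with prefetch, but prefetch can only take a slot when plenty
are free, and only for keys whose size (known up front from the key) is
small enough for idle fetching:

    import java.util.concurrent.Semaphore;

    /** Hypothetical throttle shared by real requests and prefetch. */
    class PrefetchThrottle {
        private final Semaphore slots;        // all request slots on the node
        private final int reservedForUsers;   // slots prefetch may never take
        private final long maxPrefetchBytes;  // size cap for idle fetches

        PrefetchThrottle(int totalSlots, int reservedForUsers,
                         long maxPrefetchBytes) {
            this.slots = new Semaphore(totalSlots);
            this.reservedForUsers = reservedForUsers;
            this.maxPrefetchBytes = maxPrefetchBytes;
        }

        /** User and splitfile requests block until a slot is free. */
        void acquireUserSlot() throws InterruptedException {
            slots.acquire();
        }

        /** Prefetch runs only when there is spare capacity, and only for
         *  keys whose advertised size fits the idle-load budget. */
        boolean tryAcquirePrefetchSlot(long sizeFromKey) {
            if (sizeFromKey > maxPrefetchBytes) return false;
            if (slots.availablePermits() <= reservedForUsers) return false;
            return slots.tryAcquire();
        }

        void release() {
            slots.release();
        }
    }

A busy splitfile download then starves prefetch automatically, because
it holds slots and drives availablePermits below the reserve, while
totalSlots bounds the worst-case load even on an idle node.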
> 
> Gordan

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
