On Fri, 16 Dec 2005 22:00:43 +0100, Dennis Olsson wrote:

> Hi all you guys (and gals?)!
> 
> 
> First:
> 
> It has come to my ...frustration... that a webpage consisting of images
> which can't be found is annoying to fetch from Freenet. The problem is
> (as most of you probably know) that the fproxy returns an HTML page with a
> metadata updater that refreshes the data after a certain interval. This is
> not parsed when the URL is loaded through an img tag in HTML, and therefore
> it looks as if the data is corrupt.
> 
> This approach of using a webpage for retrying is _very_ informative
> (well... it could be better than it is, but still better than just a "404").
> 
> 
> Second:
> 
> Fetching large amounts of images (galleries) or other data (TFE is my
> first thought) is slow for two reasons.
> 
> 1) When the number of connections the browser makes to the fproxy at once
> is low, only a few keys will be retrieved and decoded at a time. This is
> inefficient.
>
> 2) Using a large number of connections instead makes the transfer slow,
> since there are more sockets to keep track of and more threads to feed with
> CPU cycles, both for the fproxy/node and for the browser.

A simple solution is to have FProxy parse the HTML and identify links
(which it must do anyway to filter out images loaded from the Web and
the like) and add images and other page requisites to the download queue
without the browser needing to request each of them separately. It still
won't help with getting multiple pages at once, but at least fetching an
image-heavy page becomes faster. It would also combat the effect whereby
all but the topmost images drop off the network because no one has the
patience to wait for them.
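
A rough sketch of what such a prefetch pass could look like, in Java. The
RequestQueue interface, the key-prefix check and the regex are placeholders
of mine; the real FProxy filter already walks the document tree, so it would
hook in there rather than rescanning the page with a regex:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    /**
     * Sketch: scan a page FProxy has just served and queue its image
     * requisites for background fetching, so the browser finds them
     * already decoded when it asks for them.
     */
    public class RequisitePrefetcher {

        /** Hypothetical background download queue (not a real Freenet class). */
        public interface RequestQueue {
            void enqueue(String freenetUri);
        }

        // Very naive <img src="..."> extraction, good enough for a sketch.
        private static final Pattern IMG_SRC =
                Pattern.compile("<img[^>]+src\\s*=\\s*\"([^\"]+)\"",
                                Pattern.CASE_INSENSITIVE);

        private final RequestQueue queue;

        public RequisitePrefetcher(RequestQueue queue) {
            this.queue = queue;
        }

        /** Queue every in-Freenet image referenced by the page. */
        public List<String> prefetch(String html) {
            List<String> queued = new ArrayList<String>();
            Matcher m = IMG_SRC.matcher(html);
            while (m.find()) {
                String src = m.group(1);
                // Only prefetch keys served through FProxy itself;
                // external (web) images get filtered out anyway.
                if (src.startsWith("/CHK@") || src.startsWith("/SSK@")
                        || src.startsWith("/USK@") || src.startsWith("/KSK@")) {
                    queue.enqueue(src);
                    queued.add(src);
                }
            }
            return queued;
        }
    }

Anything queued this way is already decoded, or at least in flight, by the
time the browser gets around to requesting it.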

> Now... here is a suggestion of how to change both of these.
> 
> Use an HTTP redirect (a 3xx response with a Location header). A simple
> algorithm:
> 
> 1) For a connection C, check if the data is already being downloaded.
>       If not, download the data / add the download to the queue.
> 2) If the data is not retrieved within 30 seconds, redirect to the same URL
>       and close the socket. Continue fetching the data in the background.
> 
> Now, let's assume that the web browser uses a maximum of one connection
> and round robin to alternate between requests. Then only one connection
> to the fproxy is used at a time and all images get an equal amount of
> time.
> 
> If, however, the web browser just queues the requests and hammers away at
> the first ones until they finish or it gives up, the only difference is
> that new connections for the retries will be made every 30 seconds. So: no
> big loss.
> 
> 
> The main downside is that the browser may never finish loading the data
> if the key is lost forever. How should this be solved? Perhaps a counter
> appended to the URL the redirection goes to, so that after a set number of
> retries the status page is shown instead?
> 
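
Sketching that loop (with the count-up counter from the previous paragraph)
in Java, a handler for a single requisite could look roughly like the
following; FetchTracker, Response and the ?retry= query parameter are my own
stand-ins, not anything FProxy actually exposes:

    /**
     * Sketch of the redirect-based retry loop suggested above.
     */
    public class RedirectRetryHandler {

        /** Hypothetical view of the node's background fetch state. */
        public interface FetchTracker {
            boolean isFetching(String uri);            // already in the queue?
            void startFetch(String uri);               // add to the download queue
            byte[] poll(String uri, long waitMillis);  // data, or null on timeout
        }

        /** Minimal stand-in for an HTTP response. */
        public static final class Response {
            public final int status;       // 200 or 302
            public final String location;  // Location header when status == 302
            public final byte[] body;
            Response(int status, String location, byte[] body) {
                this.status = status;
                this.location = location;
                this.body = body;
            }
        }

        private static final long WAIT_MILLIS = 30000L; // step 2: 30 seconds
        private static final int MAX_RETRIES = 20;      // then fall back to the status page

        private final FetchTracker tracker;

        public RedirectRetryHandler(FetchTracker tracker) {
            this.tracker = tracker;
        }

        /**
         * Handle one request for uri; retry is the count-up carried in the
         * query string (e.g. ?retry=3), 0 on the first request.
         */
        public Response handle(String uri, int retry) {
            if (!tracker.isFetching(uri)) {
                tracker.startFetch(uri);                // step 1
            }
            byte[] data = tracker.poll(uri, WAIT_MILLIS);
            if (data != null) {
                return new Response(200, null, data);   // got it: serve the data
            }
            if (retry >= MAX_RETRIES) {
                // Key may be gone for good: stop redirecting and serve the
                // existing (more informative) status page instead.
                byte[] statusPage =
                        ("<!-- FProxy status page for " + uri + " -->").getBytes();
                return new Response(200, null, statusPage);
            }
            // Step 2: redirect back to the same URL with the counter bumped,
            // close this socket, and keep fetching in the background.
            return new Response(302, uri + "?retry=" + (retry + 1), null);
        }
    }

A browser following the 302 simply re-requests the same key, so each socket
is held for at most 30 seconds, and once the counter runs out the existing
informative status page takes over.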
> Also, for HTML pages it's better to use the status page (but with more
> information). Some kind of auto-sensing, maybe? But this is a pain for
> download managers.
> 
> Maybe some status code in the HTTP headers here as well, so the download
> manager tries one more time.

Why would anyone want to use an HTTP download manager on Freenet? It does
you absolutely no good. Get FUQID and use that.

> Suggestions?
> 
> It was just a spur-of-the-moment idea followed by some brainstorming.
> 
> // Dennis


