On Wed, 04 Jan 2006 18:26:27 +0000, Matthew Toseland wrote:

> On Sat, Dec 24, 2005 at 10:40:20AM +0200, Jusa Saari wrote:
>> On Sat, 24 Dec 2005 00:17:30 +0100,
>> freenetwork at web.de wrote:
>> 
>> >> Please explain why this theory is wrong?
>> > 
>> > So essentially you're asking if FProxy could spider whole sites
>> > recursively (or only for one or X levels of depth) in the background
>> > every time the user hits a site... ?
>> 
>> No, I want FProxy to retrieve all the images pointed to by the "img"
>> tags in the page. There is no recursion there, since no HTML files are
>> loaded in this manner.
>> 
>> I just want image galleries to be useful without having to resort to
>> FUQID. As is, they aren't.
> 
> Even with your browser using 24-36 connections? In 0.5, fproxy will allow
> between 24 and 36 parallel fproxy connections, and we recommend that
> browsers are set up to use many connections.

Sorry it took so long to respond, been busy with other things.

If I set my browser to use 24 connections, what will happen when I try to
connect to a regular website? I consider myself a fairly technical
person, but I have no idea how to set Firefox to use 24 connections on
localhost and 2 elsewhere...
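
For the record, the only knobs I can find in Firefox are the global ones
in about:config (or user.js); as far as I can tell none of them are
per-host, which is exactly the problem:

    // user.js -- the HTTP connection limits Firefox exposes, as far as I
    // can tell; they are all global, so raising them for localhost raises
    // them for every other site as well.
    user_pref("network.http.max-connections", 36);
    user_pref("network.http.max-connections-per-server", 24);
    user_pref("network.http.max-persistent-connections-per-server", 24);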

Better not to give recommendations that risk earning the Freenet project
a bad reputation.

> I can see the argument for prefetching images... It's probably not
> something I will implement before 0.7 though...
>> 
>> > If yes, the impact on the network would be interesting to see when
>> > fetching TFE or any other index site... If the network's well
>> > structured it won't break upon the millions of requests...
>> > otherwise... :)
>> 
>> Of course it will break in your scenario. It will be impossible to
>> distinguish between often-retrieved and never-retrieved content, so the
>> caching facilities won't work properly, and the network will grind to a
>> screeching halt as it gets hopelessly overloaded, making it impossible
>> to retrieve anything.
> 
> :)
> 
> Don't you think everyone will have a 50GB download queue anyway for Big
> Files? In which case prefetching fproxy content might actually be the best
> thing to do?

Hmm... If we can differentiate between "things that can be shown in the
browser" (images and HTML) and everything else, why not? Just make sure
not to start infinite recursion by accident: only add links from a page
to the queue when that page is actually shown in the browser.
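
Roughly what I have in mind, as a Java sketch; the class and the way the
HTML is scanned are my invention, not the actual FProxy code:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    /**
     * Sketch only: collect the img-src URIs of a page that has just been
     * served to the browser, so they can be handed to a prefetch queue.
     * There is no recursion, because nothing enqueued here is ever parsed;
     * the real FProxy already has a proper HTML filter, the regex just
     * keeps the sketch short.
     */
    public class ImagePrefetchSketch {

        private static final Pattern IMG_SRC = Pattern.compile(
                "<img[^>]+src\\s*=\\s*\"([^\"]+)\"", Pattern.CASE_INSENSITIVE);

        /** Call this only when the page body was actually sent to the browser. */
        public static List<String> imageUris(String pageHtml) {
            List<String> uris = new ArrayList<String>();
            Matcher m = IMG_SRC.matcher(pageHtml);
            while (m.find()) {
                uris.add(m.group(1)); // relative or absolute key of the image
            }
            return uris;
        }
    }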

And of course things added in this way should get priority over large
downloads. Reserve half the queue for them, and the other half for
downloading Big Files; that way those downloads still proceed, even when
the user is browsing a lot, and on the other hand, having downloads
running in the background doesn't block browsing.
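
Again just a sketch of the idea, with invented names: split the
parallel-fetch slots into two fixed pools.

    import java.util.concurrent.Semaphore;

    /**
     * Sketch only: split the node's parallel-fetch slots into two fixed
     * pools, one for prefetching what the user is looking at right now
     * and one for the big background download queue, so that neither
     * side can starve the other.
     */
    public class SplitFetchSlots {

        private final Semaphore prefetchSlots;
        private final Semaphore bulkSlots;

        public SplitFetchSlots(int totalSlots) {
            prefetchSlots = new Semaphore(totalSlots / 2);
            bulkSlots = new Semaphore(totalSlots - totalSlots / 2);
        }

        /** Blocks until a slot of the requested kind is free. */
        public void acquire(boolean prefetch) throws InterruptedException {
            (prefetch ? prefetchSlots : bulkSlots).acquire();
        }

        public void release(boolean prefetch) {
            (prefetch ? prefetchSlots : bulkSlots).release();
        }
    }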

Oh, and you might add a page to the Web Interface that lists links to the
sites that are currently available without delay.
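
Something as simple as this would do (again, the names are invented, just
to illustrate):

    import java.util.List;

    /**
     * Sketch only: render the "available without delay" page for the web
     * interface. The list of URIs would come from the prefetch queue's
     * record of completed fetches (that part is invented here). No HTML
     * escaping, purely to illustrate the idea.
     */
    public class ReadyPagesSketch {

        public static String render(List<String> readyUris) {
            StringBuilder sb = new StringBuilder();
            sb.append("<html><body><h1>Available without delay</h1><ul>");
            for (String uri : readyUris) {
                sb.append("<li><a href=\"/").append(uri).append("\">")
                  .append(uri).append("</a></li>");
            }
            sb.append("</ul></body></html>");
            return sb.toString();
        }
    }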

Of course all this means that anyone who can get his hands on the machine
is going to be able to figure out where you've been surfing; but then
again, that is impossible to prevent anyway.
