On Thursday 24 July 2003 06:58 pm, Gordan wrote:
> In that case let's limit it to 64 KB or 128 KB. It's still a kludge, but at
> least the damage is kept reasonably small. 1MB is just ridiculous.

My understanding is that 1MB was picked just because it was as large as you 
could go in a single key.
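
For context, here is a minimal sketch of that constraint (the class, the helper 
and the exact 1MB figure are my own illustration, not anything in Freenet's 
actual insert code): before bundling a site into one zipped key you would check 
that the whole thing fits under the single-key limit.

import java.io.File;

public class BundleCheck {
    // Assumption from this thread: roughly 1MB is the most a single key holds.
    static final long SINGLE_KEY_LIMIT = 1024 * 1024;

    // Sum up the size of every file under the site directory.
    static long totalSize(File dir) {
        long sum = 0;
        File[] children = dir.listFiles();
        if (children == null) return 0;
        for (File f : children) {
            sum += f.isDirectory() ? totalSize(f) : f.length();
        }
        return sum;
    }

    public static void main(String[] args) {
        long size = totalSize(new File(args[0]));
        System.out.println(size <= SINGLE_KEY_LIMIT
            ? "Site could be bundled into a single zipped key"
            : "Site is too large for one key; insert the files separately");
    }
}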

> > If it is done like I describe, how is there ANY more unnecessary traffic
> > than before?
>
> Active links from other sites? I don't necessarily want to go and see all
> the sites that have active links on another site, for example. And you are
> increasing the amount of traffic that active linking would generate, from
> just the active link to the active link plus the front page plus
> potentially more data. Imagine how much extra bandwidth you will waste
> every time somebody goes to visit one of the index sites.
>
> I know that you disagree, but this is not a sensible trade-off.

No, I don't disagree. I previously stated that images that are intended to be 
used as active links should under no circumstances be zipped.

> > The only thing that is new is that you would get all the HTML
> > for the site you are loading, and not just the first page when you
> > request it.
>
> Why? I don't necessarily want to see all of it. Why should I waste
> bandwidth for all of it rather than just for the page I want where I want
> it? I have to say that latency in Freenet is becoming good enough for this
> not to be an issue.

Well, if you don't load those pages, you only get the images that are shared 
between pages (which you are bound to see anyway) and the HTML, which is very 
small.
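
To put that split in concrete terms, here is a rough sketch (the class and 
method names are hypothetical, not an existing Freenet API): only the assets 
referenced by every page go into the shared zipped bundle alongside the HTML, 
and everything else stays as separate keys fetched on demand.

import java.util.*;

public class SharedAssetSplit {
    // pageRefs maps each page of the site to the set of files it references.
    static Set<String> sharedAssets(Map<String, Set<String>> pageRefs) {
        Set<String> shared = null;
        for (Set<String> refs : pageRefs.values()) {
            if (shared == null) {
                shared = new HashSet<>(refs);
            } else {
                shared.retainAll(refs); // keep only files used by every page
            }
        }
        return shared == null ? Collections.emptySet() : shared;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> pageRefs = new HashMap<>();
        pageRefs.put("index.html", new HashSet<>(List.of("logo.png", "style.css", "photo1.jpg")));
        pageRefs.put("page2.html", new HashSet<>(List.of("logo.png", "style.css", "photo2.jpg")));
        // logo.png and style.css would go into the zipped bundle with the HTML;
        // photo1.jpg and photo2.jpg would stay as separate keys.
        System.out.println(sharedAssets(pageRefs));
    }
}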

> I am not at all convinced about the user experience argument. Additionally,
> a "huge page with links" will become nigh-on impossible to use sensibly if
> for each active link you have to download the whole linked site. See point
> above about index sites.

If you don't zip active links, this is a non-issue.

> I still don't think it is a worthwhile tradeoff. There will always be cases
> such as NIM that will make sure you cannot defeat the problem completely,
> and the gain just doesn't seem that great. Where it is really deemed
> necessary, manual pre-caching using IFRAME tags and automated pre-caching
> using specifically designed software for the purpose are IMHO a better
> solution. Using archives simply isn't the correct tool for the job, as far
> as I can see. It is a very ugly exception to the way Freenet is supposed to
> work.

Well, things like NIMs wouldn't be zipped. MOST of the content on Freenet 
SHOULD NOT be zipped, and if it is done right it won't be. However, there are 
places where it could help. Could you explain exactly what you mean by 
"manual pre-caching" and "automated pre-caching"?

> That doesn't really call for a ZIP at all. You can just upload each file by
> the CHK, but in that case, you could make an edition type site that you
> could use for the "skins".

Yes, that would work, but it would be slow with high latency. Even if the 
latency were as low as the WWW's, it would not really be good enough for this 
sort of thing. Having zips would allow you to have dozens, possibly hundreds, 
of VERY SMALL images on a site as part of a theme.
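
As a rough illustration of that theme case, assuming the zipped bundle has 
already been fetched as a single key (none of this is existing Freenet code), 
unpacking dozens of tiny images is then purely local work, with no further 
requests on the network.

import java.io.*;
import java.util.*;
import java.util.zip.*;

public class ThemeUnpack {
    // Unpack every file from the fetched zip blob into memory.
    static Map<String, byte[]> unpack(InputStream bundle) throws IOException {
        Map<String, byte[]> files = new HashMap<>();
        try (ZipInputStream zip = new ZipInputStream(bundle)) {
            ZipEntry entry;
            while ((entry = zip.getNextEntry()) != null) {
                if (entry.isDirectory()) continue;
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buf = new byte[4096];
                int n;
                while ((n = zip.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
                files.put(entry.getName(), out.toByteArray());
            }
        }
        return files;
    }

    public static void main(String[] args) throws IOException {
        // args[0] stands in for the blob already retrieved from Freenet.
        Map<String, byte[]> theme = unpack(new FileInputStream(args[0]));
        System.out.println(theme.size() + " theme files unpacked locally");
    }
}

The point being that the per-image cost collapses to one request for the whole 
bundle instead of one request per tiny image.
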
_______________________________________________
devl mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org:8080/cgi-bin/mailman/listinfo/devl
