On Friday 25 July 2003 03:06, Tom Kaitchuck wrote:
> On Thursday 24 July 2003 06:58 pm, Gordan wrote:
> > In that case let's limit it to 64 KB or 128 KB. It's still a kludge, but
> > at least the damage is kept reasonably small. 1 MB is just ridiculous.
>
> My understanding is that 1 MB was picked just because it was as large as
> you could go in a single key.
>
> > > If it is done like I describe, how is there ANY more unnecessary
> > > traffic than before?
> >
> > Active links from other sites? I don't necessarily want to go and see
> > all the sites that have active links on another site, for example. And
> > you are increasing the amount of traffic that active linking would
> > generate, from just the active link to the active link plus the front
> > page plus potentially more data. Imagine how much extra bandwidth you
> > will waste every time somebody goes to visit one of the index sites.
> >
> > I know that you disagree, but this is not a sensible trade-off.
>
> No, I don't disagree. I previously stated that images that are intended
> to be used as active links should under no circumstances be zipped.
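To put rough, purely hypothetical numbers on the bandwidth point I raised
above: an index page with 50 active links costs 50 small image fetches
today, perhaps a few hundred KB in total. If each active link also dragged
in the linked site's 1 MB container, the same page view would cost on the
order of 50 MB, i.e. a couple of orders of magnitude more traffic for
exactly the same visit.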
OK, can you give an example of a Freesite (or create one that specifically
suffers from the problem that ZIP files would aim to solve, for the purpose
of the demonstration) that would benefit from this approach in a
demonstrable fashion?

> > > The only thing that is new is that you would get all the HTML
> > > for the site you are loading, and not just the first page when you
> > > request it.
> >
> > Why? I don't necessarily want to see all of it. Why should I waste
> > bandwidth for all of it rather than just for the page I want when I
> > want it? I have to say that latency in Freenet is becoming good enough
> > for this not to be an issue.
>
> Well if you don't load those pages, you only get the images that are
> shared between pages, which you are bound to see, and the HTML (very
> small).

And what happens to those of us who are interested only in textual rather
than image content?

Additionally, images are not indexable. If somebody out there is working on
an automated indexing robot, that robot would put more strain on the
network than necessary if it starts retrieving ZIP files with images in
them, because those images would not be used by it. Proper automated search
facilities will eventually be required, and making them more difficult or
more wasteful to implement now will only come back to bite the network
later.

> > I am not at all convinced about the user experience argument.
> > Additionally, a "huge page with links" will become nigh-on impossible
> > to use sensibly if for each active link you have to download the whole
> > linked site. See the point above about index sites.
>
> If you don't zip active links this is a non-issue.

I think this should be extended at least to all images, rather than just
active links, if we DO end up having archives: HTML pages only, and limit
the size to much smaller than 1 MB.

> > I still don't think it is a worthwhile trade-off. There will always be
> > cases such as NIM that will make sure you cannot defeat the problem
> > completely, and the gain just doesn't seem that great. Where it is
> > really deemed necessary, manual pre-caching using IFRAME tags and
> > automated pre-caching using software specifically designed for the
> > purpose are IMHO a better solution. Using archives simply isn't the
> > correct tool for the job, as far as I can see. It is a very ugly
> > exception to the way Freenet is supposed to work.
>
> Well, things like NIMs wouldn't be zipped. MOST of the content on Freenet
> SHOULD NOT be zipped, and if it is done right it won't be. However, there
> are places where it could help. Could you explain exactly what you mean
> when you say "manual pre-caching" and "automated pre-caching"?

Manual pre-caching would be using IFRAME tags to pre-load the pages/files
that the current page links to, so that while you are reading the current
page, the pages you are likely to visit next are already being cached.

Automated pre-caching would be using software that automatically follows
all the links from the current page and puts them in your browser cache.
The latter, although far less than ideal, would IMO still be better than
implementing archives at the node level.
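As a minimal sketch of the manual approach (the filenames here are made up
purely for illustration), the page being read would simply carry a couple
of hidden frames whose only job is to start fetching the pages it links to:

    <!-- invisible frames: their only purpose is to pull the linked pages
         into the cache while the reader is still on this page -->
    <iframe src="page2.html" width="0" height="0" frameborder="0"></iframe>
    <iframe src="page3.html" width="0" height="0" frameborder="0"></iframe>

The reader never sees the frames, but by the time they click a link the
target may well already be in the local cache, with no node-level archive
support needed.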
> > That doesn't really call for a ZIP at all. You can just upload each
> > file by the CHK, but in that case, you could make an edition type site
> > that you could use for the "skins".
>
> Yes, that would work, but it would be slow with high latency. Even if the
> latency were as low as the WWW it would not really be good enough to do
> this sort of thing. Having zips would allow you to have dozens, possibly
> hundreds of VERY SMALL images on a site as part of a theme.

OK, for the first time there is an aspect of this that I might actually be
able to swallow as vaguely sensible. But again, the same problems are
there:

1) There would be no way to automatically decide what should be in an
   archive.
2) Allowing it to be done manually would allow for "abuse" of the network
   by wasting bandwidth.
3) Doing it manually would likely be beyond most people's ability to do
   properly. It is a bit like shooting yourself in the foot with a rocket
   launcher.

While this use (skins) would probably be OK, it would still be likely to
create more problems than it solves, and good CSS is probably a better way
to go than graphic-heavy skins. And besides, latency with parallel
downloads is not really an issue, as the latency for each download is
suffered simultaneously rather than in series.

Gordan

_______________________________________________
devl mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org:8080/cgi-bin/mailman/listinfo/devl
