On Friday 25 July 2003 21:11, fish wrote:
> On Fri, Jul 25, 2003 at 08:05:39AM +0100, Gordan wrote:
> > On Friday 25 July 2003 03:06, Tom Kaitchuck wrote:
> > > On Thursday 24 July 2003 06:58 pm, Gordan wrote:
> > > > In that case let's limit it to 64 KB or 128 KB. It's still a kludge,
> > > > but at least the damage is kept reasonably small. 1MB is just
> > > > ridiculous.
> > >
> > > My understanding is that 1MB was picked just because it was as large
> > > as you could go in a single key.
>
> Someone picks an arbitrary value that someone else disagrees with. News
> at 11. It is highly likely, btw, that my modem is as slow as or slower
> than yours. (hey, I thought you were supposed to go bigger in these
> contests :-p)
This is not about competing over anything, in any way. It is about making
sure that the arbitrarily picked value is at least remotely in touch with
reality.

> > > > I know that you disagree, but this is not a sensible trade-off.
> > >
> > > No, I don't disagree. I previously stated that images that are
> > > intended to be used as active links should under no circumstances be
> > > zipped.
> >
> > OK, can you give an example of a Freesite (or create one that
> > specifically suffers the problem that ZIP files would aim to solve, for
> > the purpose of the demonstration) that would benefit from this approach
> > in a demonstrable fashion?
>
> Fishland 2.0 would have, by virtue of having a lot of small HTML keys
> (over 200 keys of 1 KB or less, and over 500 keys total). Although
> Fishland|3 doesn't as much, it also would benefit from containers in the
> same way.

Are you saying that your main page will load 200 keys on its own? Even
bundling 200 * 1 KB is still 200 KB. Quite large for a small node.

> Colours is completely pointless unless his HTML and CSS are together,
> unless the only colour in your world is white/grey.

The problems start happening when all your HTML and CSS don't fit in a
single archive. If a single CSS file is used to skin the entire site, there
is no clean way to split the files up.

> TFE's usability would significantly benefit from users not getting stuck
> on that damn warning page, if the warning page and thelist.html were in a
> single container.

OK, but that is blatantly a case for an IFRAME or IMG pre-cache trick, as
it is only one-page-ahead pre-caching.

> Keeping lots of small HTML files refreshed is a hellish experience, and
> one that I look forward to avoiding in the future. (yes, yes, I know,
> incorrect use of Freenet, should modify content to fit the network, not
> vice versa, blah blah blah. I've heard it before, and let me assure you
> that the answer is foad. :)

It is a bad idea on the web, not only in Freenet. A lot of tiny files
usually implies something is not quite right with the web design. IMHO, the
design techniques should not need to change between developing sites for
the WWW and for Freenet.

> > And what happens to those of us who are interested only in textual
> > rather than image content?
>
> You continue using lynx/links/w3m as always? The piece of perspective
> that you are missing is that on the sites that you want to visit, the
> images are a tiny part of the size of the site anyhow. Never
> underestimate the bloat of HTML :).

Indeed, I do not underestimate it. Whatever happened to people who properly
hand-code their HTML and strictly follow the rules? Am I really the only
one who checks his HTML for strict XHTML compliance? I despair. The whole
Dreamweaver-induced brain rot is so depressing... and so widespread...

> > Additionally, images are not indexable. If there is somebody out there
> > who is working on an automated indexing robot, then this robot would
> > put more strain on the network than necessary if it starts retrieving
> > ZIP files with images in them, because those images would not be used
> > by it.
>
> http://images.google.com.

You don't seem to understand how images are indexed by Google. They are
indexed according to the textual content of their alt attributes and the
text of the links that point to them. The images themselves are never even
downloaded by the indexing robots.
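To make that concrete (and this is only an illustration of the general
text-only approach, not a claim about what Google's robots actually run),
an indexing robot can build a perfectly usable image index from nothing but
the alt attributes and the anchor text, without requesting a single image.
A rough Python sketch, with made-up page content:

# Minimal illustration: index images purely from the surrounding text,
# i.e. alt attributes and the text of links pointing at them, without
# ever fetching the image data itself.
from html.parser import HTMLParser

class ImageTextIndexer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.index = {}           # image URL -> list of descriptive strings
        self._link_target = None  # set while inside an <a> pointing at an image

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "src" in attrs:
            # The alt text is all we record; the image bytes are never requested.
            self.index.setdefault(attrs["src"], []).append(attrs.get("alt", ""))
        elif tag == "a" and (attrs.get("href") or "").lower().endswith((".png", ".gif", ".jpg")):
            self._link_target = attrs["href"]

    def handle_data(self, data):
        if self._link_target and data.strip():
            # Anchor text of a link that points at an image.
            self.index.setdefault(self._link_target, []).append(data.strip())

    def handle_endtag(self, tag):
        if tag == "a":
            self._link_target = None

page = ('<p><a href="diagram.png">Network topology diagram</a>'
        '<img src="logo.gif" alt="Freenet logo"></p>')
indexer = ImageTextIndexer()
indexer.feed(page)
print(indexer.index)
# {'diagram.png': ['Network topology diagram'], 'logo.gif': ['Freenet logo']}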
> > Manual pre-caching would be using IFRAME tags to pre-load pages/files
> > that the page links to, so that while you are reading the current page,
> > the pages you are likely to go to are already pre-caching.
>
> Which is a horrible, horrible nasty hack.

No better or worse than archives, IMHO.

> > Automated pre-caching would be using pre-caching software that
> > automatically follows all links from the current page and puts them in
> > your browser cache.
>
> Which is really, on the whole, no better. Putting what is basically only
> really a good idea on Freenet into a general-purpose web browser is not a
> good idea. At some point, you have to deal with the basic fact that
> Freenet works differently. Alternately, having external software fuck
> with your browser is never a good idea. Having this implemented as an
> FCP-based client will interact badly with pcaching (do NOT turn this into
> a pcaching rant. I don't care.)

Fundamentally, I don't see a difference between pre-caching and archives.
Archives effectively pre-cache content, just in a slightly different way.
The only real difference is the atomicity of the approach.

> > > > That doesn't really call for a ZIP at all. You can just upload each
> > > > file by the CHK, but in that case, you could make an edition type
> > > > site that you could use for the "skins".
> > >
> > > Yes that would work, but it would be slow with high latency. Even if
> > > the latency were as low as the WWW it would not really be good enough
> > > to do this sort of thing. Having zips would allow you to have dozens,
> > > possibly hundreds of VERY SMALL images on a site as part of a theme.
> >
> > OK, for the first time there is an aspect of this that I might actually
> > be able to swallow as vaguely sensible. But again, the same problems
> > are there:
> >
> > 1) There would be no way to automatically decide what should be in an
> > archive.
>
> Actually, I have ideas on how to do exactly this. I just don't have time
> to implement them right this minute. But, in short, you can do a
> depth-first or breadth-first search of the HTML files, and create new
> archives when you hit siteLimit, resetting to the top of the tree if
> you're using depth-first searching.

I was thinking about this myself, but for best results it would probably
require manual fiddling. I would also be rather wary of just splitting the
whole site up into a bunch of archives. The thing that particularly
concerns me is the atomicity of files and operations: if an archive falls
out of the network, the damage is more severe than for a single file, and
because of their size, archives are also more likely to squeeze other files
out of the network. As I said before, I am concerned about network storage
space being treated as if it were plentiful when it isn't necessarily so.
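For what it's worth, the shape of the thing I was toying with looks roughly
like the sketch below. The breadth-first ordering, the site_limit default
and the container naming are just my guesses at what fish describes above;
nothing here reflects any real tooling, and the link extraction is
deliberately simplified.

# Hypothetical sketch of the bundling idea fish outlines above: walk the
# site's HTML breadth-first from the index page and start a new ZIP
# container whenever the running total would exceed the size limit.
import os
import zipfile
from collections import deque
from html.parser import HTMLParser

class LocalLinkCollector(HTMLParser):
    """Collects relative href/src targets from one HTML file."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value and "://" not in value:
                self.links.append(value)

def bundle_site(root_dir, start_page="index.html", site_limit=64 * 1024):
    """Breadth-first walk from start_page, packing files into ZIPs of at most site_limit bytes."""
    queue, seen = deque([start_page]), set()
    archives, current, current_size = [], [], 0
    while queue:
        rel = queue.popleft()
        path = os.path.join(root_dir, rel)
        if rel in seen or not os.path.isfile(path):
            continue
        seen.add(rel)
        size = os.path.getsize(path)
        if current and current_size + size > site_limit:
            archives.append(current)            # close this container, start a new one
            current, current_size = [], 0
        current.append(rel)
        current_size += size
        if rel.endswith((".html", ".htm")):     # only HTML files contribute new links
            collector = LocalLinkCollector()
            with open(path, errors="replace") as f:
                collector.feed(f.read())
            queue.extend(collector.links)
    if current:
        archives.append(current)
    for n, members in enumerate(archives):
        with zipfile.ZipFile(os.path.join(root_dir, "container%d.zip" % n), "w") as z:
            for rel in members:
                z.write(os.path.join(root_dir, rel), arcname=rel)
    return archives

Even with something like this, the grouping will often come out wrong: a
stylesheet shared by every page ends up in whichever container the walk
reaches it from first, which is exactly the sort of manual fiddling I mean.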
> Allowing inserts leads to abuse. Yes, there need to be limits. But here's
> the big secret: if you do something stupid, your page will be nigh on
> unretrievable (despite the fact that it shouldn't be, for various reasons
> that I don't really know enough about to comment on intelligibly, larger
> keys are more difficult to retrieve than smaller ones). And then no-one
> will view it. And then it will drop off the network. Darwin wins again.

I hope you are correct on that. I am more concerned by the potential
collateral damage to other data that might end up being squeezed out of the
network by the large archives.

> > 3) Doing it manually would likely be beyond most people's abilities to
> > do properly. It is a bit like shooting yourself in the foot with a
> > rocket launcher.
>
> You severely underestimate a normal person's ability to operate WinZip.

I am talking about the optimization of which files to bundle together, not
the ability of an individual to create a zip file.

> > While this use (skins) would probably be OK, it would still be likely
> > to create more problems than it solves, and good CSS is probably a
> > better way to go than graphics-heavy skins.
> >
> > And besides, latency with parallel downloads is not really an issue, as
> > the latency for each download is suffered simultaneously, not in
> > series.
>
> This is indeed true. But what *is* a problem is latency *between pages of
> a single site*, i.e. when you click on the next page of an extended
> article, for example: five minute wait. Oh joy.
>
> Containers aren't perfect. They aren't even a good idea. But so far, you
> are yet to come up with a better one.

As far as I can see, pre-caching would achieve precisely the same thing if
done properly. Manual pre-caching methods already exist, but an automated
one would probably be a better solution in the longer term, if tuned
properly. I must say that I find the IFRAME and IMG tag pre-caching kludge
repulsive from an engineering standpoint too, but at least it is not
polluting the node code.

Gordan
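P.S. To make the automated pre-caching alternative a bit more concrete,
below is a rough sketch of a one-level-deep pre-cacher. The gateway
address, the worker count and the example key are all assumptions for the
sake of illustration; this is not an existing tool, just the shape of the
idea. Note that because the links are fetched in parallel, the latency is
paid roughly once rather than once per link, which is the point I was
making about parallel downloads above.

# Hypothetical sketch of automated pre-caching: fetch the current page
# through the local node's HTTP gateway (address is an assumption, adjust
# to taste), then request every page it links to, one level deep, in
# parallel.  Pulling the data is enough to warm the caches; the bodies are
# thrown away.
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from html.parser import HTMLParser
from urllib.parse import urljoin

GATEWAY = "http://127.0.0.1:8888/"   # assumed local gateway address

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag in ("a", "img", "link"):
            for name, value in attrs:
                if name in ("href", "src") and value:
                    self.links.append(value)

def fetch(url):
    try:
        with urllib.request.urlopen(url, timeout=300) as resp:
            return resp.read()        # reading the body is what warms the cache
    except OSError:
        return b""                    # failures are fine; this is only pre-caching

def precache(page_url):
    collector = LinkCollector()
    collector.feed(fetch(page_url).decode(errors="replace"))
    targets = [urljoin(page_url, link) for link in collector.links]
    with ThreadPoolExecutor(max_workers=8) as pool:
        list(pool.map(fetch, targets))  # all links fetched in parallel

precache(GATEWAY + "ExampleSiteKey/example//")  # hypothetical key, for illustration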
