On Thursday 24 July 2003 19:48, Tom Kaitchuck wrote:
> On Thursday 24 July 2003 12:02 pm, Gordan wrote:
> > On Thursday 24 July 2003 17:06, Tom Kaitchuck wrote:
> > > This is not true if all the Freesite insertion utilities do it
> > > properly. Toad said that it only supports up to 1MB after compression.
> > > This means that the entire container will ALWAYS be a single file.
> > > (Never broken up into chunks.)
> >
> > I understand that, but even so, it means that to view even the front
> > page, you have to download a 1 MB file.
>
> I doubt that just the HTML for most sites would ever approach 1MB.
> Especially if it was zipped.

In that case let's limit it to 64 KB or 128 KB. It's still a kludge, but at 
least the damage is kept reasonably small. 1 MB is just ridiculous.

> > 1) Not everybody is on broadband
> > 2) I am not sure Freenet speed is good enough to deal with that
> > 3) Even if Freenet can deliver on this kind of bandwidth, it will create
> > huge amounts of totally unnecessary network traffic. I don't know about
> > your node, but mine has always eaten all bandwidth allocated to it very,
> > very quickly.
>
> If it is done like I describe, how is there ANY more unnecessary traffic
> than before?

Active links from other sites? I don't necessarily want to go and see every 
site that has an active link on the page I'm viewing, for example. And you 
are increasing the amount of traffic that active linking generates, from 
just the active link image to the active link plus the front page plus 
potentially more data. Imagine how much extra bandwidth gets wasted every 
time somebody visits one of the index sites.

I know that you disagree, but this is not a sensible trade-off.

> The only thing that is new is that you would get all the HTML
> for the site you are loading, and not just the first page when you request
> it.

Why? I don't necessarily want to see all of it. Why should I waste bandwidth 
on the whole site rather than just on the page I want, when I want it? I have 
to say that latency in Freenet is becoming good enough for this not to be an 
issue.

> This isn't that much data, and given that on Freenet the transfer time
> for a key is small compared to the delay before you get it, this is not a
> bad thing.

The dominance of latency is shrinking daily, and whatever you win there, you 
are looking to lose in the extra transfers and globally wasted bandwidth as 
you start unnecessarily moving larger files.

> This would greatly improve the user's experience and enable Freesites
> to consist of many small pages instead of one huge one with #links.

I am not at all convinced about the user experience argument. Additionally, a 
"huge page with links" will become nigh-on impossible to use sensibly if for 
each active link you have to download the whole linked site. See point above 
about index sites.

> > What if somebody has more than 1 MB worth of images on their front page?
> > They are not going to compress, so that benefit goes out the window, and
> > they will not fit in the archive, so that goes out too.
>
> That is true, but that is intentional. If someone has many big images, or
> one huge image, there is no point in including them in the archive.
> Because, like you said, they don't compress, and you would have to do a
> splitfile. These would be uploaded to the network just like they are now.
> The point of a zip is to allow you to have many small files without
> fetching each individually.

I still don't think it is a worthwhile tradeoff. There will always be cases 
such as NIM that will make sure you cannot defeat the problem completely, and 
the gain just doesn't seem that great. Where it is really deemed necessary, 
manual pre-caching using IFRAME tags, or automated pre-caching using software 
specifically designed for the purpose, is IMHO a better solution. An archive 
simply isn't the right tool for the job, as far as I can see. It is a very 
ugly exception to the way Freenet is supposed to work.
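
Just to illustrate what I mean by the IFRAME approach (a rough sketch only; 
the page names are made up, and I haven't checked whether the fproxy filter 
lets hidden IFRAMEs through), the front page could quietly pull in the pages 
a visitor is most likely to want next:

    <!-- tiny hidden frames: fetching these warms the local cache
         without the visitor having to click anything -->
    <iframe src="page2.html" width="1" height="1" frameborder="0"></iframe>
    <iframe src="page3.html" width="1" height="1" frameborder="0"></iframe>

That way the site author decides what is worth pre-fetching, page by page, 
instead of forcing the whole site down the pipe on every visit.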

> > > Then for dbr and edition sites, the utility should save a list of the
> > > files that were previously zipped together. This way it is sure to do
> > > it the same way every time, and it can add another zip if there are
> > > enough new images.
> >
> > Are you suggesting that a new version of the site requires the old
> > version's archives to be loaded? Surely not...
>
> I am suggesting that the Image Zips be inserted under a CHK that all
> versions of the site can reference. Or if they decide they want to change
> their layout, upload a new ZIP. The point is that all versions of a site,
> and multiple different sites, can share a single ZIP theme.

That doesn't really call for a ZIP at all. You can just insert each file 
under its own CHK, and in that case you could even make an edition-type site 
to use for the "skins".
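
For example (purely illustrative; the keys are made up and I'm hand-waving 
the exact fproxy URL syntax), every edition of the site, or several unrelated 
sites, could point straight at the same image keys:

    <img src="/CHK@madeUpRoutingKey,madeUpDecryptKey" alt="logo">
    <img src="/CHK@anotherMadeUpKey,anotherDecryptKey" alt="background">

Since a CHK never changes, the images only get inserted once and every 
edition after that reuses them for free, no archive required.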

Gordan