On Thu, Jul 24, 2003 at 06:02:14PM +0100, Gordan wrote:
> On Thursday 24 July 2003 17:06, Tom Kaitchuck wrote:
> > > Except that for a site with more than 2 pages, this becomes extremely
> > > cumbersome to separate manually. An automated approach could be used by
> > > analysing HTML and linked documents, but this has other limitations, such
> > > as how do you decide how to separate the files into archives? What about
> > > when you have to put one file into all archives? How difficult will it be
> > > to come up with "auxiliary" archives that have files accessed from
> > > multiple pages?
> > >
> > > It is logically incoherent, and cannot be dealt with in a way that is
> > > both generic and consistent. Therefore, I believe it should not be
> > > catered for, especially as it doesn't add any new functionality, and the
> > > benefits it provides are at best questionable.
> >
> > This is not true if all the Freesite insertion utilities do it properly.
> > Toad said that containers are only supported up to 1 MB after compression. This means
> > that the entire container will ALWAYS be a single file. (Never broken up
> > into chunks.)
> 
> I understand that, but even so, it means that to view even the front page, you 
> have to download a 1 MB file.
> 
> 1) Not everybody is on broadband

Everyone with a permanent node is though. But yes, there is an issue
with huge containers and dialup...
> 2) I am not sure Freenet speed is good enough to deal with that

Freenet has a big problem with latency, but its bandwidth isn't all that
great either for a single file.
> 3) Even if Freenet can deliver on this kind of bandwidth, it will create huge 
> amounts of totally unnecessary network traffic. I don't know about your node, 
> but mine has always eaten all bandwidth allocated to it very, very quickly.

No, it will REDUCE the network traffic - one large file instead of fifty
small files.
> 
> While it may improve reliability of some sites in some cases, where they may 
> not work 10% of the time for 10% of the people, it seems that it will create 
> more problems than it will solve. The concept of unique files by key (i.e. no 
> duplication) was very good, and now we are effectively destroying it.

Only if the site author chooses to do so. And the site author can do
anything up to and including writing his own client software. It is
entirely possible to use ZIP containers in an appropriate way.
> 
> > The proper way for utilities to handle this, IMHO, would be to put all the
> > HTML into one container. If it doesn't fit, or doesn't have enough
> > "headroom" for edition sites, split it based on depth, so that there are
> > two zips of about the same size. Then make a separate zip for images. The
> > only images that should be included are those that appear on multiple
> > pages, or on the main page, excluding active links. They should be sorted
> > based on size and then only the smallest 1MB worth should be zipped. There
> > is never any real reason for a single edition site to have more than one
> > image zip.
> 
> What if somebody has more than 1 MB worth of images on their front page? They 
> are not going to compress, so that benefit goes out the window, and they will 
> not fit in the archive, so that goes out too.

Then they use multiple archives, or they insert them as individual images. No
big deal.
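
For concreteness, here is a minimal sketch of the packing heuristic Tom
describes above. The 1 MB cap, the file names and the "referenced_by"
bookkeeping are assumptions made for illustration only, not the behaviour of
any existing insertion utility:

# Hypothetical sketch of the container-packing heuristic discussed above.
# The limit, output names and inputs are assumptions for illustration.
import os
import zipfile

LIMIT = 1024 * 1024  # assumed cap per container (before compression)

def pack_site(html_files, images, out_dir):
    """html_files: list of paths.
    images: list of (path, pages_that_reference_it) tuples."""
    # 1. All HTML goes into one container.  A real tool would split the set
    #    by link depth into two zips of similar size if it exceeds the cap;
    #    that step is omitted here for brevity.
    with zipfile.ZipFile(os.path.join(out_dir, "site.zip"), "w",
                         zipfile.ZIP_DEFLATED) as z:
        for path in html_files:
            z.write(path, arcname=os.path.basename(path))

    # 2. Only images used on several pages, or on the front page, are worth
    #    bundling.  Sort them by size and stop once roughly 1 MB is packed;
    #    everything left over is inserted as individual keys instead.
    shared = [p for p, pages in images
              if len(pages) > 1 or "index.html" in pages]
    shared.sort(key=os.path.getsize)
    packed = 0
    with zipfile.ZipFile(os.path.join(out_dir, "images.zip"), "w",
                         zipfile.ZIP_STORED) as z:  # images rarely compress
        for path in shared:
            size = os.path.getsize(path)
            if packed + size > LIMIT:
                break
            z.write(path, arcname=os.path.basename(path))
            packed += size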
> 
> > Then for DBR and edition sites, the utility should save a list of the files
> > that were previously zipped together. This way it is sure to do it the same
> > way every time, and it can add another zip if there are enough new images.
> 
> Are you suggesting that a new version of the site requires the old version's 
> archives to be loaded? Surely not...
> 
> Gordan
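
For what it's worth, a rough sketch of the bookkeeping Tom describes (saving
the previous grouping so each edition packs the same files together). The
manifest file name and JSON format are assumptions for illustration only:

# Sketch of the "remember the previous grouping" idea for DBR/edition sites.
# The manifest name and format are illustrative assumptions.
import json
import os

MANIFEST = "container-manifest.json"

def load_groupings(site_dir):
    """Return the container -> file-list mapping used by the previous
    edition, or an empty mapping on the first insert."""
    path = os.path.join(site_dir, MANIFEST)
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)

def save_groupings(site_dir, groupings):
    """Record which files went into which container so the next edition
    packs them identically; newly added images can then be collected
    into an additional zip if there are enough of them."""
    with open(os.path.join(site_dir, MANIFEST), "w") as f:
        json.dump(groupings, f, indent=2)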

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
