On Thu, May 10, 2001 at 10:34:05AM -0700, Mr. Bad wrote:
> >>>>> "BC" == Benjamin Coates <[EMAIL PROTECTED]> writes:
>
> BC> What's the point of attaching this highly descriptive stuff to
> BC> the file at all? Wouldn't this belong in a separate
> BC> indexing/searching layer? The sort of metadata you need in a
> BC> CHK is stuff like content-type or part-number or whatever,
> ^^^^^^^^^^^^
> BC> that lets you interpret the data you have, not author or
> BC> keywords or subject (that lets you find or categorize the
> BC> data)
>
> Content-Type is descriptive metadata. So is size, duration, etc. See
> Dublin Core for details.
>
> I see no particular reason not to attach Dublin Core metadata to CHKs,
> except some people's over-enthusiastic worriedness about having the
> SAME DATA in Freenet TWICE. DC metadata does allow some free-form
> fields, and theoretically people could diddle one of these fields and
> reinsert the CHK.
>
> But, HELL, man! I can put the same MP3 in Freenet 50 different times,
> with 50 different bit streams! I can put 30 different versions of
> the same scan of a 1992 Playboy centerfold, each one totally
> indistinguishable from the other! Diddle a bit, the CHK is different!
Stuff from analog sources, sure. But there is only one (non-corrupt) WAV of a
given track of a given CD, there are only so many encoders, and most people
encode at 128kbps. So there is real potential for identical-bitstream
redundancy, but see below.
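For anyone who wants to see why "diddle a bit, the CHK is different": the key
is derived from a hash of the content itself, so a single flipped bit produces
a completely different key. A toy sketch of the idea only (SHA-1 is used here
purely for illustration; the actual CHK computation in Freenet is more involved
than this, so don't read it as the real key format):

    import java.security.MessageDigest;

    // Toy illustration: derive a CHK-like key from a hash of the content,
    // so flipping one bit of the data yields a completely different key.
    public class ChkSketch {
        static String chkOf(byte[] content) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-1").digest(content);
            StringBuilder sb = new StringBuilder("CHK@");
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        }

        public static void main(String[] args) throws Exception {
            byte[] a = "the same MP3 bitstream".getBytes("UTF-8");
            byte[] b = a.clone();
            b[0] ^= 1;                      // "diddle a bit"
            System.out.println(chkOf(a));
            System.out.println(chkOf(b));   // entirely different key
        }
    }
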
>
> If we consider this a hostile act, we need to seriously rethink this
> entire enterprise. If Napster (e.g.) or alt.binaries.pictures.erotica
> are any indication, we will have VASTLY REDUNDANT DATA in Freenet. The
> case where two people try to insert the same song with the same
> bitstream is going to be much less common than two people trying to
> insert the same song with DIFFERENT bitstreams. Why even worry about
> the first case, if we can't handle the second?
Because it is vastly more efficient if we do get collisions for identical
bitstreams. Napster works because everyone caches one copy of their own data,
and it has a perfect (but horribly centralised) routing algorithm. Having said
that, if all large files are inserted as splitfiles, both for increased
throughput and to reduce the space cost of 0.4's mandatory power-of-2 file
sizes (most MP3s would qualify as "large files"? What was the threshold going
to be?), then it is probably a near-total non-issue. See the rough numbers
below.
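To put a rough number on that space cost, here is a back-of-the-envelope
sketch. The 256KiB chunk size is purely an assumption for illustration; I
don't remember what the actual splitfile chunk size or threshold was going
to be:

    // Rough sketch of the overhead in question: 0.4 pads every insert up to
    // the next power-of-2 size, so splitting a large file into chunks wastes
    // far less space than inserting it whole. The 256KiB chunk size is an
    // assumption for illustration, not the real splitfile threshold.
    public class PaddingCost {
        static long nextPowerOfTwo(long n) {
            long p = 1;
            while (p < n) p <<= 1;
            return p;
        }

        public static void main(String[] args) {
            long mp3 = 4500000L;          // a typical ~4.5MB MP3
            long chunk = 256 * 1024L;     // assumed chunk size (a power of 2)

            long whole = nextPowerOfTwo(mp3);
            long fullChunks = mp3 / chunk;
            long rem = mp3 % chunk;
            long split = fullChunks * chunk + (rem == 0 ? 0 : nextPowerOfTwo(rem));

            System.out.printf("whole insert: padded to %d bytes (%.0f%% overhead)%n",
                    whole, 100.0 * (whole - mp3) / mp3);
            System.out.printf("splitfile:    %d bytes total (%.1f%% overhead)%n",
                    split, 100.0 * (split - mp3) / mp3);
        }
    }

Inserting a file that size whole nearly doubles the space it takes, while
splitting it costs well under one percent, which is why I think this ends up
a near-total non-issue if MP3s clear the splitfile threshold.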
>
> ~Mr. Bad
>
> --
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Mr. Bad <[EMAIL PROTECTED]> | Pigdog Journal | http://pigdog.org/
> freenet:MSK@SSK@u1AntQcZ81Y4c2tJKd1M87cZvPoQAge/pigdog+journal//
> "Statements like this give the impression that this article was
> written by a madman in a drug induced rage" -- Ben Franklin
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--
Always hardwire the explosives
-- Fiona Dexter quoting Monkey, J. Gregory Keyes, Dark Genesis
_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://lists.freenetproject.org/mailman/listinfo/devl