There are a number of reasons why Timm Murray is wrong about
permanence.  His reasoning is about as well thought out as crackpot
hubris, which upsets me when I look at new protocols.

* USENET:  it is a problem that USENET loses its messages.
  Timm says that "nobody complains".  This is patently false.
  Everybody complains about it.  It is a large problem.  The main
  problem is that discussions never get more than about two weeks old,
  which caps the intelligence and usefulness of USENET severely, at
  about the potty-training years of a young child.  A related problem
  is that people who stick around get tired of the repeats.  USENET is
  severely limited by its lack of permanence.  When people select
  USENET servers, the top four qualities they look for are
  reliability, speed, which groups it carries, and how long the
  messages stay; usually the last is the most important.

  Permanence would bring USENET back to usefulness.  Of course,
  methods for dealing with the SPAM will finally have to be devised,
  but those are available through reviewing systems.
  
* There needs to be a seamless method to access all data, regardless
  of purpose or how public it is.  If Freenet is just for psychotic
  geeks doing strange paranoid things, then it won't go far.  If, on
  the other hand, it attempts to be a data store for every conceivable
  type and use of data, then it can really flourish.  It is especially
  important today for new data systems to accommodate everybody, or as
  large a subset of everybody as possible.  This goes for any data
  communications and storage system, not just Freenet.

Permanence on Freenet could be obtained in a number of ways:

* Marking an object with a permanence value, which would be weighed
  against the values of other objects.
* Marking that usage counting should be done on an object, and then
  counting it.  (The usage data could be spread a la USENET -- to
  neighbors who also have the object.  I have not read the standard,
  so I don't know exactly whether this would spread in all
  directions.)
* People could agree to store objects in shared lists of reviewed
  objects, keeping each object according to whether its review says to
  save it, and how important it is.  So, for instance, if I make a
  review list with a few friends, anything we review will be deleted
  in reverse order of its review value.  Using the reviewing scheme I
  share with my friends, SPAM is reviewed at about level .1 and
  reasonable messages at about level .7.  If the computer needs space,
  it will start by deleting the SPAM.

* Storing the file locally forever.  If you want something to last
  forever, you save it.
* People who agree to save the same object forever, or for a similar
  amount of time, can group up.  Multiple copies must be saved for
  redundancy.  When people want to leave the group, their objects are
  redistributed to the remaining members.
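The review-list deletion scheme above can be sketched in a few lines.
This is only a rough illustration, not any real Freenet interface; I'm
assuming each stored object carries the review level assigned by the
shared list, plus its size in bytes, and all names here are
hypothetical:

```python
# Sketch of review-driven deletion: objects rated by a shared review
# list are deleted lowest-rated first when the node needs space.
# Hypothetical names throughout; not part of any Freenet API.

def reclaim_space(store, needed_bytes):
    """Delete lowest-reviewed objects until needed_bytes are freed.

    store maps object keys to {"review": float, "size": int}.
    Returns the number of bytes actually freed.
    """
    freed = 0
    # Sort keys ascending by review level, so SPAM (~.1) is
    # considered for deletion before reasonable messages (~.7).
    for key in sorted(store, key=lambda k: store[k]["review"]):
        if freed >= needed_bytes:
            break
        freed += store[key]["size"]
        del store[key]
    return freed

store = {
    "spam-msg":  {"review": 0.1, "size": 500},
    "good-msg":  {"review": 0.7, "size": 800},
    "great-doc": {"review": 0.9, "size": 1200},
}
reclaim_space(store, 400)   # frees 500 bytes by deleting only "spam-msg"
```

The point of the sketch is simply that deletion order follows the
shared review values, so a node under space pressure sheds SPAM first
and well-reviewed messages last.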


The concept of unreliable communications as a stable medium of
discourse is untenable to me.  In a solid communications medium,
discussions have to get past a certain foundational state in order to
have value.  Often, in a system with rapid deletion, the foundations
of knowledge aren't even laid down before the destruction has ruined
any chance of passing that knowledge forward.  This is unacceptable.

Brad Allen <[EMAIL PROTECTED]>

P.S., calling me a newbie will not change the above facts.

P.P.S., for the luddites, I must insist again that anonymity is a
variable, not an absolute.  A system can still be quite anonymous
while achieving great stability.  Giving up stability on the false
pretense of absolute anonymity is just plain insane once you mature to
the point of realizing that absolute anonymity is impossible; at some
level, you can be tracked, whether by watching you view the final
rendering of an object or by watching you create it.  However, if
those endpoints are made difficult to track and the encryption is
excellent, then the anonymity variable goes up.  It in no way becomes
infinite.  Understanding this is absolutely necessary to understand
that Freenet and networks like it must not limit their usefulness to
just about nothing in the context of a global communication of peace.

While permanence is not necessary for all data, it is necessary for a
lot of data, and techniques must be created to specify which data.
More to the point, the *amount* of permanence needs specification
methods.  Generally, the more permanence required, the less anonymity
available; however, a great deal of anonymity is still available for a
rather large amount of permanence.  Once the threshold of storing a
well-accepted conversation is reached, that conversation can build up
to a point of monumental usefulness, and people can achieve various
kinds of nirvana far above what a "newsified" medium like USENET can
reach.  Knowledge is built up, not flatly splattered.

While the amount of buildup is often greatly overdone, knowledge still
needs buildup.  Reducing the overbuildup is impossible without an
efficient means of disseminating the buildup in the first place, and
that means computer networks.  The WWW will not work: if a company
writes a good document, the company can later retract or change it,
and any building that requires that document (or others like it, which
may also be changed or done away with) is rendered useless, for it can
no longer be verified, nor often even understood.  Using the USENET
method, the document will be deleted before many people even have a
chance to read it.  A middle ground must be found where a message gets
widely enough distributed that a deletion will not hurt, but not so
widely distributed that it overburdens the system (as in USENET's
traditional method of distribution).
