[If freenet-tech were "post only by members", I would gladly move this discussion there. But I argue that it is relevant here, because if my theory is correct, the routing layer of Freenet only needs to concern itself with efficiently moving fixed-length random files around. The current implementation does that adequately, but if some constraints were lifted, it could do better. Everything I propose happens at the application layer. It could even be done by a completely separate network of nodes which communicate only about formulas, and which occasionally insert files into Freenet.]
On Sat, 2003-05-03 at 17:49, Mark J Roberts wrote:
> I don't understand how this relates to compression. It doubles the
> size of all downloads, and the storage required for a given file
> does not change.

This is a _legal_ theory, which, as we all know, has nothing to do with the real world. _Legally_, the information becomes compressed, not _actually_. The goal of this legal compression is to make obvious the absurdity of the concept of IP. As Professor Eben Moglen remarks, "[the] intellectual property system [is] a tripartite oxymoron like Voltaire's Holy Roman Empire."
http://emoglen.law.columbia.edu/my_pubs/nospeech.html

In another paper, "Anarchism Triumphant: Free Software and the Death of Copyright," http://emoglen.law.columbia.edu/my_pubs/anarchism.html, Moglen points out exactly what my "legal" compression is intended to make obvious: the law must treat some numbers differently than others, and this is absurd. By linking many different documents to the same random numbers, I expose the contradiction in the law, because the law now needs to treat the _same_ number in several different ways. It can't do that, and so the fixed-length random files in Freenet cannot be interdicted. Only the tiny formulas for reconstructing huge documents from those fixed-length files can be interdicted, and we need to design a system so that the law ends up tripping all over itself in attempting to enforce this interdiction.

> > Thus, all freenet nodes are continually requesting
> > randomly selected CHK's and are continually inserting
> > new ones. No one can tell which traffic is random
> > and which is directed at inserting or obtaining
> > some document.
>
> This is nice, but I suspect that it will not make real-world
> traffic analysis much more difficult.

You might be right. More detailed analysis would be necessary to see. But I don't care. My goal is not to make the world safe for child porn. It is to make the absurdity of copyright obvious, so that enforcement of copyright becomes ridiculous, and so that civil disobedience on a wide scale becomes feasible.

> > [...]
>
> How do I trust the legitimacy of a given formula? What prevents an
> attacker from advertising tons of false formulas for a file?

Because he gets a bad reputation, and everyone starts ignoring him. You can't lie without getting found out. A false formula involves stating that CHK b was created from CHK a by encryption with a specific key. The lie is easily found out. The occasional key collision would look the same as a lie, but the statistics would be different: genuine collisions happen to everyone at about the same rate. There is no difficulty in revealing which node did the encryption, because at the time the node made the calculation, it had not been notified of any legal duty to refuse to deal with CHK a.

> > They have to take the whole network down, and erase
> > all of the disks to be sure of getting rid of one
> > document.
>
> One can never be sure, I'll agree, but they can destroy the CHKs
> indicated by the redundant formulas as quickly as they get them.

You don't reveal the new formulas until after a delay, so that it is likely that the new CHKs have already been used in several new formulas. In fact, you don't reveal the _first_ formula for a given document until after there are many different formulas available.

--
Ed Huff
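P.S. To make the false-formula check concrete, here is a rough Python sketch. It is an illustration of the idea only, not Freenet code: I am pretending that a CHK is simply the SHA-1 hex digest of a fixed-length block, and that "encryption with a specific key" means XOR against a keystream derived from the key; the real CHK format and cipher are different. The point is that anyone holding block a can redo the stated encryption and compare hashes, so an attacker who advertises a false formula is caught immediately.

import hashlib
import os

BLOCK_SIZE = 32 * 1024   # fixed block length; the exact size is arbitrary here

def chk(block):
    # Assumption: a "CHK" is just the SHA-1 hex digest of the block contents.
    return hashlib.sha1(block).hexdigest()

def keystream(key, length):
    # Toy keystream: SHA-256(key || counter), concatenated until long enough.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(block, key):
    # "Encryption" here is XOR with the keystream, so it is its own inverse.
    ks = keystream(key, len(block))
    return bytes(x ^ y for x, y in zip(block, ks))

def verify_formula(block_a, key, chk_a, chk_b):
    # Check the claim "CHK b was created from CHK a by encryption with key".
    if chk(block_a) != chk_a:
        return False                      # we were handed the wrong block a
    return chk(encrypt(block_a, key)) == chk_b

if __name__ == "__main__":
    block_a = os.urandom(BLOCK_SIZE)      # stands in for a block fetched by its CHK
    key = os.urandom(16)
    block_b = encrypt(block_a, key)       # the block the formula points to
    print(verify_formula(block_a, key, chk(block_a), chk(block_b)))             # True
    print(verify_formula(block_a, b"a false key", chk(block_a), chk(block_b)))  # False: the liar is caught

The reputation mechanism needs nothing fancier than this check: a single published failure is reason enough for everyone to start ignoring the node that advertised the formula.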
