I have been thinking for a while now about a method of creating a deniable p2p network that is still (on the order of) the same speed as conventional open p2p networks. What follows is the result of this thought process.
Apologies if this is obviously trivial and pointless ;)

This network would allow a user to upload a file. The file is converted to pieces, and these pieces are what reside on peers' machines. Any given piece on a user's machine can be a component in ANY file in the network; that is, it can be a component in an arbitrary number of files. This is where the deniability comes in: if a data chunk can be a component in hundreds of entirely different files, then how can the uploader be liable? Not only that, but the data is random. It has no meaning by itself. Only when combined as part of a chain can it be rendered into the original file.

The only drawback of the whole system is that the user is required to download twice the total data. This is possibly offset by the potential gains in total file availability, since users host pieces that can be part of many files.

The operation of this network is described as follows:

1) Alice wishes to place a file on the network (file A). This is the first file to be added to the network:

   a) First she splits the file into many equal-sized pieces.

   b) She then generates random blocks of data, the same size as the pieces of the original file; these are called r1, r2, ..., rn.

   c) She then logically interleaves the random pieces and the file pieces, alternating, as follows:

      r1|A1|r2|A2|...|rn|An

   d) Starting from a random start piece (C), a chain is built up by one-time-padding each data chunk with the result of the previous one-time pad, as follows (there appears to be no ASCII XOR symbol, so I used a + instead):

      C -> + -> S1 -> + -> Q1 -> + -> S2 -> + -> Q2 -> + ... + -> Qn
           ^          ^          ^          ^          ^     ^
           r1         A1         r2         A2         r3    An

   e) The r and Q chunks are now the data chunks that are shared on the network.

   f) Each data chunk points to the next-but-one data chunk in the chain, so the holder of any arbitrary chunk is unable to reconstruct the whole chain.
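To make step 1 concrete, here is a minimal Python sketch of the chain construction, plus the reverse walk that recovers the original pieces (the "trivial to work backwards" part). XOR stands in for the one-time pad; the names build_chain, rebuild_file and CHUNK_SIZE are mine, not part of the proposal, and the chunk-pointer/tracking machinery is omitted entirely.

```python
import os

CHUNK_SIZE = 4  # tiny for illustration; a real network would use KiB-sized chunks


def xor(a: bytes, b: bytes) -> bytes:
    """One-time-pad two equal-sized chunks together."""
    return bytes(x ^ y for x, y in zip(a, b))


def build_chain(file_pieces, rand_pieces, start):
    """Build the chain C -> S1 -> Q1 -> S2 -> Q2 -> ... from step 1d.

    S_i = (previous Q, or C) XOR r_i   (intermediate, kept private)
    Q_i = S_i XOR A_i                  (shared on the network, with the r_i)
    Returns the list of Q chunks.
    """
    q_chunks = []
    prev = start  # C
    for r, a in zip(rand_pieces, file_pieces):
        s = xor(prev, r)
        q = xor(s, a)
        q_chunks.append(q)
        prev = q
    return q_chunks


def rebuild_file(q_chunks, rand_pieces, start):
    """Charlie's reconstruction: work backwards from C, r and Q to A_i."""
    pieces = []
    prev = start
    for r, q in zip(rand_pieces, q_chunks):
        s = xor(prev, r)          # S_i = previous XOR r_i
        pieces.append(xor(s, q))  # A_i = S_i XOR Q_i
        prev = q
    return pieces


# Round trip with random data: 3 file pieces, 3 random pieces, 1 start piece.
file_pieces = [os.urandom(CHUNK_SIZE) for _ in range(3)]  # A1..A3
rand_pieces = [os.urandom(CHUNK_SIZE) for _ in range(3)]  # r1..r3
C = os.urandom(CHUNK_SIZE)

Q = build_chain(file_pieces, rand_pieces, C)
assert rebuild_file(Q, rand_pieces, C) == file_pieces
```

Note that a downloader fetches C plus n r-chunks plus n Q-chunks to recover n file pieces, which is the "twice the total data" cost mentioned above; and since each Q_i is a file piece padded with effectively random material, the shared chunks are individually indistinguishable from random data.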
C points to the first two chunks, so that both offset chains are available (perhaps two random start pieces are required - C1 and C2 - so that they look identical to all other chunks in the network - I haven't really thought deeply about this).

2) Bob wishes to place a file on the network (file B). He does this using the same procedure as Alice, except that instead of using random data for r1, r2, ..., rn, he uses r and Q chunks that ALREADY exist on the network. That is, the chunks on the network become integrated into the chain that makes up file B.

3) We now have two files on the network that are inextricably linked: they *are* the same pieces of data. This extends to any arbitrary number of files. The more files, the more crossover occurs and the better the deniability.

4) Now Charlie wishes to download a file from the network. Charlie acquires the C chunk with a unique key, perhaps from Freenet, a friend, or a website. He can now construct the chain initially by traversing an example of every data chunk and finding where that key takes him. He can then identify every chunk that is required to rebuild the chain and hence get back to the original file (it is trivial to work backwards from all the r, Q and C chunks to the original file).

All this would require some kind of distributed tracking system. This certainly is not my strength, but I have some ideas.

The net effect, as mentioned earlier, is that no chunk downloaded can be said to be definitively a component of any particular file.

I'd appreciate some feedback on this. I feel very sure that there is at least some potential in some kind of chained set of one-time pads (like the above).

Thanks,

hen

_______________________________________________
chat mailing list
chat@freenetproject.org
Archived: http://news.gmane.org/gmane.network.freenet.general
Unsubscribe at http://emu.freenetproject.org/cgi-bin/mailman/listinfo/chat
Or mailto:[EMAIL PROTECTED]