Algorithm-agnostic anonymization network.
Let's say we are agreed that a new anonymization network should be implemented.
One problem is that advances in such networks generally require implementing
entirely new networks just to try out new algorithms and new features, so such
improvements are strongly deterred.  After all, that's one reason that TOR
doesn't get as many improvements as we might like.  (Another reason is that it
is financed, at least in part, by people who are hostile to a "too-good"
anonymization system.)
Sure, we could implement a new set of nodes, hopefully at least 1000 in number.
I think that ordinary, residential users should be able to run nodes:
residential Internet service now comes with as much as 1 terabyte/month of
data, and in some cases unlimited data (CenturyLink's 1 Gbps service, for
example).  We could implement a new onion-routing system, akin to TOR but with
some improvements, most prominently adding chaff.  So far, so good.  But there
may be other ideas, other improvements that people might want to try out.
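For concreteness, here is a rough Python sketch of what layered cells plus
chaff might look like.  It is purely illustrative: the three-hop circuit, the
Fernet keys, and the cell handling are stand-ins, not a proposed wire format
or key-exchange design.

# Illustrative sketch of onion-layered cells plus chaff, assuming a
# hypothetical 3-hop circuit where the sender shares a symmetric key with
# each relay.  The 'cryptography' package's Fernet is used only to keep the
# example short; it is not a proposal for the real cell format.
import os
import random
from cryptography.fernet import Fernet

HOPS = [Fernet(Fernet.generate_key()) for _ in range(3)]  # per-hop keys, all known to the sender

def wrap(payload: bytes) -> bytes:
    """Encrypt the payload in layers; the last hop's layer goes on first."""
    cell = payload
    for hop in reversed(HOPS):
        cell = hop.encrypt(cell)
    return cell

def peel(cell: bytes) -> bytes:
    """Each hop strips one layer; here all hops are simulated in sequence."""
    for hop in HOPS:
        cell = hop.decrypt(cell)
    return cell

def make_chaff(size: int = 512) -> bytes:
    """A dummy cell of random bytes.  A real design would pad every cell to
    a fixed size so real and chaff cells look alike on the wire."""
    return os.urandom(size)

# Interleave real cells with chaff so traffic volume alone reveals little.
stream = [wrap(b"GET https://example.org/")] + [make_chaff() for _ in range(4)]
random.shuffle(stream)

for cell in stream:
    try:
        print("real cell:", peel(cell))
    except Exception:  # chaff won't decrypt; a real node would forward or drop it
        print("chaff cell (discarded)")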
I've already proposed that it should be possible for just about every node to
be an output node.  Possibly every node should be an input node, as well.  The
big impediment to this is that people naturally want to avoid the potential
legal harassment they might get if their node sent out gigabytes of 'in the
clear' forbidden data from their IP address.  My ideas for a solution?  Output
data could be encrypted, enough to make it unreadable except by the end
recipient.  The operator of an output node that emits only seemingly-random
data would be hard to hold legally responsible for that forbidden content,
since nobody expects him to know how to convert it into plaintext.  And/or,
the data could be output as two streams, which would be XOR'd with each other
only by the intended recipient to recover the data.
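The two-stream idea is easy to sketch; it amounts to a one-time-pad style
split.  The code below is just an illustration of that split, with invented
function names:

# Minimal sketch of the two-stream idea: the output node emits two byte
# streams that each look random on their own; only the recipient, holding
# both, XORs them together to recover the plaintext.
import os

def split_into_two_streams(plaintext: bytes) -> tuple[bytes, bytes]:
    pad = os.urandom(len(plaintext))                        # stream A: pure randomness
    masked = bytes(p ^ r for p, r in zip(plaintext, pad))   # stream B: plaintext XOR A
    return pad, masked

def recombine(stream_a: bytes, stream_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(stream_a, stream_b))

a, b = split_into_two_streams(b"forbidden-looking content")
assert recombine(a, b) == b"forbidden-looking content"
# Each of a and b individually is uniformly random, so a node emitting only
# one of them is emitting seemingly-random data.

The obvious cost is that the output bandwidth doubles, since each stream is as
long as the original data.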
And this network could also run different anonymization algorithms
simultaneously.  Onion-routing may have its own limitations, and somebody
might have a good idea for an alternative system.  Why shouldn't it be
possible to serve two algorithms?  Or dozens?  How about BitTorrent as well?
Imagine 1000 nodes, each equipped with a 10-terabyte hard drive.
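A node that serves several algorithms at once is mostly a dispatching problem.
Here is a hypothetical sketch; the algorithm IDs and handler names are
invented, just to show the shape of it:

# Hypothetical sketch of one node process serving several anonymization
# algorithms at once: the first byte of each incoming cell names the
# algorithm, and the node dispatches to the matching handler.
from typing import Callable, Dict

HANDLERS: Dict[int, Callable[[bytes], None]] = {}

def register(algo_id: int):
    """Decorator: plug a new algorithm into the running node."""
    def deco(fn: Callable[[bytes], None]):
        HANDLERS[algo_id] = fn
        return fn
    return deco

@register(0x01)
def handle_onion_cell(payload: bytes) -> None:
    print("onion-routing cell,", len(payload), "bytes")

@register(0x02)
def handle_experimental_cell(payload: bytes) -> None:
    print("experimental algorithm cell,", len(payload), "bytes")

def dispatch(cell: bytes) -> None:
    handler = HANDLERS.get(cell[0])
    if handler is None:
        return  # unknown algorithm: drop it (or treat it as chaff)
    handler(cell[1:])

dispatch(b"\x01" + b"layered data...")
dispatch(b"\x02" + b"something new...")

Adding a new algorithm then means registering one more handler, not building
an entirely new network.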
                 Jim Bell
