On Tuesday 15 July 2008 09:08, Daniel Cheng wrote:
> 2008/7/15 Matthew Toseland <toad at amphibian.dyndns.org>:
> > 1. Many people have proposed over the years that we have a "bulk" flag
> > which can be set when the timing of a request is less important (e.g. for
> > splitfile fetches), or a priority class for a request which is visible at
> > the network layer. I have always opposed this mostly because it makes
> > traffic profiling slightly easier and any sort of priority scheme would
> > need careful regulation to prevent race-to-the-top.
> >
> > 2. Long-term, and in particularly nasty places, Freenet will have to be
> > mostly darknet, because it is much easier to attack opennet nodes, or to
> > block them in bulk. One of the biggest practical problems with a pure
> > darknet is the 24/7 issue: more people have laptops than have real PCs
> > nowadays, and this trend is likely to continue and accelerate, but even if
> > people have a desktop PC, many users won't run it 24x7 for various reasons:
> > power consumption, noise, security (with an encrypted disk, do you want to
> > leave it unattended?), and so on. Fanless home server appliances might be
> > able to run 24x7, but that means additional expenditure to buy them.
> >
> > 3. FMS, even more than Frost, makes heavy use of SSK polling, and this is
> > likely to expand as the network grows and FMS becomes more newbie-friendly.
> > Also, various innovative applications require fast propagation of data once
> > inserted (although there are frequently security issues with this). And
> > widely-wanted data which is hard to find can be effectively polled by much
> > of the network, causing excessive load.
> >
> > 4. The solution to SSK polling etc. is some form of passive requests. In
> > 0.7, we have ultra-lightweight passive requests (ULPRs), which are a very
> > limited and unreliable mechanism but should nonetheless help significantly.
> > The basic principle of ULPRs is that once a request completes, each node on
> > the network remembers who wants the data and whom it has asked for it, for
> > a short time, without making any effort to reroute if connections are lost;
> > if the data is found, it is propagated quickly to everyone who wants it.
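
As a rough illustration of the bookkeeping this implies (not actual Freenet
code; every class, field and constant below is invented), the per-key state a
ULPR might keep could look something like this:

// Illustrative only: the per-key state a ULPR might keep. All names are
// invented; this is not Freenet's actual implementation.
import java.util.*;

class UlprEntry {
    final String key;                                  // the requested key
    final Set<String> subscribers = new HashSet<>();   // peers that want the data
    final Set<String> askedPeers  = new HashSet<>();   // peers we forwarded the request to
    final long expiresAt;                              // short lifetime, no rerouting

    UlprEntry(String key, long lifetimeMillis) {
        this.key = key;
        this.expiresAt = System.currentTimeMillis() + lifetimeMillis;
    }

    boolean expired() { return System.currentTimeMillis() > expiresAt; }
}

class UlprTable {
    private static final long LIFETIME = 60 * 60 * 1000L;   // assumed: one hour
    private final Map<String, UlprEntry> entries = new HashMap<>();

    // Remember that 'peer' wants 'key' (called when a request fails).
    synchronized void subscribe(String key, String peer) {
        entries.computeIfAbsent(key, k -> new UlprEntry(k, LIFETIME))
               .subscribers.add(peer);
    }

    // Remember that we forwarded the request for 'key' to 'peer'.
    synchronized void forwardedTo(String key, String peer) {
        entries.computeIfAbsent(key, k -> new UlprEntry(k, LIFETIME))
               .askedPeers.add(peer);
    }

    // When the data turns up, return everyone who asked for it, then forget.
    synchronized Set<String> dataFound(String key) {
        UlprEntry e = entries.remove(key);
        if (e == null || e.expired()) return Collections.<String>emptySet();
        return new HashSet<>(e.subscribers);
    }
}

When the data is found, the node sends it to every peer returned by
dataFound(); when the timer expires or a connection drops, the entry is simply
forgotten rather than rerouted, which is what makes ULPRs cheap but unreliable.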
> >
> > 5. True passive requests (0.9) would be a mechanism whereby a node could
> > send out a request which, once it failed, would be remembered permanently,
> > subject to a (long) timeout and/or periodic renewal from the originator. It
> > would be automatically rerouted if the network topology changes. Passive
> > requests would introduce a number of new technical challenges, such as load
> > management for persistent requests, evaluating a peer's competence in
> > performing them, and so on, but they could greatly reduce the cost of SSK
> > polling and of rerequesting common but absent data, and enable such things
> > as medium-bandwidth, high-latency publish/subscribe for, for example, audio
> > streams. Passive requests would probably have to have a priority level
> > setting. It's a big job, but a big prize...
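
To make the contrast with ULPRs concrete, here is a minimal sketch, under the
assumptions above (long timeout, renewal by the originator, rerouting on
topology change, a priority level), of what a persistent passive request might
have to track. None of these names exist in Freenet, and greedy routing by
circle distance is used purely as a stand-in for the real routing:

// Illustrative sketch only: state a "true" passive request might carry.
import java.util.Comparator;
import java.util.List;

class PeerStub {
    final String name;
    final double location;                 // position on the [0,1) keyspace circle
    PeerStub(String name, double location) { this.name = name; this.location = location; }
}

class PersistentPassiveRequest {
    final double keyLocation;              // location of the requested key
    final String origin;                   // where found data is sent back to
    final int priority;                    // passive requests would probably need one
    String routedTo;                       // peer the request is currently parked at
    long renewBy;                          // expires unless the originator renews it

    PersistentPassiveRequest(double keyLocation, String origin, int priority, long renewBy) {
        this.keyLocation = keyLocation;
        this.origin = origin;
        this.priority = priority;
        this.renewBy = renewBy;
    }

    boolean expired(long now) { return now > renewBy; }

    void renew(long newDeadline) { renewBy = newDeadline; }

    // On any topology change, re-park the request at whichever connected peer
    // is closest to the key (a stand-in for whatever routing is really used).
    void reroute(List<PeerStub> connectedPeers) {
        connectedPeers.stream()
            .min(Comparator.comparingDouble(
                    (PeerStub p) -> circularDistance(p.location, keyLocation)))
            .ifPresent(best -> routedTo = best.name);
    }

    static double circularDistance(double a, double b) {
        double d = Math.abs(a - b);
        return Math.min(d, 1.0 - d);
    }
}

The hard design questions are in what this sketch leaves out: how many of
these any node may hold, and how to evaluate whether a peer actually honours
them.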
> >
> > 6. Passive requests would go a long way towards solving the uptime problem.
> > Say you have a small darknet of 5 nodes. Its nodes are only online during
> > evenings local time. Its only connection to the outside world is through
> > one node, connected to two of the small darknet's nodes, which is only
> > online on Thursdays. Right now, except on Thursdays, the network would be
> > essentially a leaf network: our real-time routing assumes that the network
> > is fully connected. Most data will be very difficult to obtain. Real-time
> > routing requires real-time load balancing, which means that all the nodes
> > would request whatever it is they want constantly, generating load to no
> > good purpose, except on Thursdays when the requests would get through, but
> > severely limited by load management, and by the fact that more than one
> > node of the small darknet may be asking for the same file. So on Thursdays,
> > some progress would be made, but often not very much.
> >
> > Now, with true passive requests, things can be very different. From the
> > user's point of view the semantics are essentially the same: they click a
> > link, it gets a DNF (fairly quickly), and they click the button to queue it
> > to the global queue; some time later, they get a notification that the
> > content is available. But performance could be much higher.
> >
> > If a node requests a block while the network is "offline", the request will
> > propagate to all 5 nodes, and then sit there waiting for something to
> > happen. When we connect to the wider network, the request is immediately
> > rerouted to the node that just connected (either because it's a better
> > route, or because there are spare hops). It propagates, fails, and is
> > stored as a passive request on the wider network, hopefully reaching
> > somewhere near the optimal node for the key. When the link is lost, both
> > sides remember the other, so when/if the data is found on the wider
> > network, it is propagated back to the originator.
> >
> > Furthermore, the load management would be optimised for passive requests:
> > when the small network connects, it can immediately send a large number of
> > passive requests for different blocks of the same file or for different
> > files. These are not real-time requests, because they have already failed
> > and turned into passive requests, so they can be trickled out at whatever
> > rate the recipient sees fit. Also, they are not subject to the anti-polling
> > measures we have introduced: polling a key in 0.7 means requesting it 3
> > times, sleeping for half an hour, and repeating ad infinitum. Further
> > similar measures may need to be introduced at the node level to try to deal
> > with the increasing load caused by FMS, but because we reroute on getting a
> > connection, we can immediately route the requests.
> >
> > When we reconnect, hopefully our peer will have found most of the data we
> > requested and can transfer it at link speed (or whatever limit may be
> > imposed for security reasons). The transfer might take longer than the
> > intersection, but I expect the whole system will be significantly faster
> > than it would be now. It's even better if you have more than two network
> > fragments: on a large darknet you might have subnetworks coming online and
> > going offline constantly, so that you never actually have a fully online
> > network. Passive requests would happily search out every relevant nook and
> > cranny of the network.
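
One way to picture the load-management side of this (purely a sketch; the
interface, rate limit and method names are assumptions, not Freenet's actual
behaviour): when a peer comes online, the already-failed requests are drained
to it at whatever rate it advertises, rather than raced out in real time or
re-polled on the 0.7 schedule.

// Illustrative sketch: trickle queued (already-failed) passive requests to a
// peer that has just connected, at whatever rate that peer will accept.
import java.util.ArrayDeque;
import java.util.Queue;

interface PeerLink {
    boolean isConnected();
    void sendPassiveRequest(String key);
    int acceptedRequestsPerMinute();       // assumed: the recipient sets the pace
}

class PassiveRequestTrickler {
    private final Queue<String> pendingKeys = new ArrayDeque<>();

    void queue(String key) { pendingKeys.add(key); }

    // Called when a peer connects: no real-time urgency, so no bursting.
    void onPeerConnected(PeerLink peer) throws InterruptedException {
        long delayMillis = 60_000L / Math.max(1, peer.acceptedRequestsPerMinute());
        while (!pendingKeys.isEmpty() && peer.isConnected()) {
            peer.sendPassiveRequest(pendingKeys.poll());
            Thread.sleep(delayMillis);     // trickle, don't flood
        }
    }
}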
> >
> > Note that much of this is only feasible on darknet, because of the trust
> > connection: on opennet, passive requests probably will have to last only as
> > long as the connection is open, and bulk transfer of passive requests is
> > certainly not feasible on opennet.
> >
> > With regards to security, it may be possible to determine whether an FMS
> > poster (for example) is on the local network, if you know when his posts
> > come in. This is of course feasible now on such a topology, but on the
> > other hand, if nobody uses it because it's unusable, there's no threat.
> > Passive requests would probably make it a little easier. Some form of
> > tunneling, preferably with long client-controlled delays for inserts, might
> > help to solve it, but we would have to have a way of determining that the
> > network is too small to provide useful anonymity.
> 
> Is DoS possible under this scheme?
> 
> Let's say...
>  (1) an attacker introduces lots of "imaginary" nodes
>          (just some imaginary locations and announcements...);
>  (2) puts lots of requests from these imaginary locations and
>       waits for the passive requests to propagate to the whole network;
>  (3) uploads the actual content...;
>  (4) all nodes will then try to send the content to those "imaginary" nodes;
>       as these locations never existed, all nodes will be kept busy routing
>       to non-existent locations.
> 
> Passive requests make this attack much less resource-intensive.
> Or did I miss anything here?
> 
> I think DoS is a much easier attack than determining the poster using this.
> 
Passive request data goes back along the current path; it doesn't get routed
to a location.

However, you do have a point: if we use a different load management scheme for
passive requests, and cache the data on each node, often receiving it more
quickly than we can relay it onward, it might be exploitable. We will need
some sort of limit on the amount of damage that any given node can cause, such
that the number of passive requests you can have pending is limited by the
number of edges you have connecting to the wider network.
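
As a very rough sketch of the kind of limit meant here (the class name and the
per-edge constant are invented, not a worked-out design): each node would
refuse to hold more pending passive requests for a peer than some multiple of
the edges that peer contributes towards the wider network.

// Illustrative only: cap pending passive requests per peer in proportion to
// the number of edges it provides into the wider network.
import java.util.HashMap;
import java.util.Map;

class PassiveRequestQuota {
    private static final int REQUESTS_PER_EDGE = 50;   // assumed tuning constant
    private final Map<String, Integer> pendingByPeer = new HashMap<>();

    // Accept a new passive request from 'peer' only if it is under its quota.
    synchronized boolean tryAccept(String peer, int edgesToWiderNetwork) {
        int pending = pendingByPeer.getOrDefault(peer, 0);
        if (pending >= REQUESTS_PER_EDGE * Math.max(1, edgesToWiderNetwork))
            return false;                               // over quota: reject
        pendingByPeer.put(peer, pending + 1);
        return true;
    }

    synchronized void completed(String peer) {
        pendingByPeer.merge(peer, -1, Integer::sum);
    }
}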