On Wed, Jul 05, 2006 at 12:59:10PM -0400, Evan Daniel wrote:

> I think I like the concept. I assume the basic goal is to create a
> more stable version of KSKs? So that I can create the SKK "evand" and
> have some hope that it will continue to point to my freesite /
> freemail address etc?
KSKs are reasonably stable. The problem is that they are squattable.

> It seems to me that creating 30 months' worth of tokens on node
> creation is a bit much. I would suggest either creating fewer tokens
> initially or creating them more often. I think the number created
> initially should approximately match the number that will be used by
> a single insert, so that on average a new user can insert one SKK
> when they create their node.

Sure, the numbers are subject to negotiation. I was thinking that we'd
have one token created per link per month, but changed my mind at the
last minute to one per node per month; obviously 30 on creation is too
high.

> Having created my SKK, how do I keep it intact? Do I have to
> periodically reinsert it?

You insert it as a redirect to a USK. Then you can use the USK
updating mechanisms.

> It seems to me that if the nodes on the network keep swapping
> locations, but the SKK can't follow its ideal location while keeping
> propLevel = 2, then SKKs will eventually end up in the wrong place
> even if they are frequently requested.

No: the new store mechanisms require that frequently fetched keys be
randomly reinserted, so an SKK should propagate to its correct storage
location.

> Also, it seems to me that I should have the option of setting my node
> to always return my SKK keys from its local store even if it is in
> the wrong location (obviously a security / performance tradeoff).
> This lets me maintain the health of my SKK a little better than just
> by reinserting it.

If everyone does this, then the limits on SKKs will not work - at
least not for popular ones.

> Another idea to prevent attacks on SKKs: when a node requests an SKK
> and gets it (either a local or non-local request), before it caches
> the SKK it should try to retrieve it from all its other peers, as
> opposed to just the optimal routing peer it asked first. If the
> different peers return different answers, it should refuse to cache
> any of them.
> I believe this would force an attacker to spend valuable tokens to
> mount a viable attack, as opposed to just setting up several nodes to
> return bogus SKK results and hoping they will be cached.

I have been thinking mostly in terms of preventing spamming rather
than preserving integrity. It is unlikely that your peers will have
the SKK; but integrity should be reasonable, while not 100% guaranteed
(it cannot be guaranteed for any human-readable key type), just as it
is for KSKs now.

> And by intentionally misrouting, it may make it not good enough for
> an attacker to be in the right portion of the key space -- he would
> need several nodes in the right places in topologically distant
> parts of the network.

Eh?

> There should probably be a fairly low limit on the bucket size for
> peers' input queues; this should help solve the problem of how I
> allocate tokens to a new connection, by just keeping some spares
> around.

Indeed.

> Also, it might be wise to reserve a few of our output tokens for
> locally generated requests, so as to prioritize them. Say I'm
> connected to 6 nodes; maybe I reserve my last 3 output tokens for
> local inserts. I'm down to 3 output tokens, and an insert comes
> along. I then queue it until I have 4 output tokens available, and I
> have an output token available for the node I want to send it on to.

I think this would be counterproductive. It might have security
issues, but even if not, output tokens are specific to a particular
node, which may not be the node we want to route to.

> My biggest concern about this is that a determined but
> short-on-resources attacker might be able to create a few nodes
> (< 10) in different places on the network, and that that might be
> enough to mount an attack on a small number of SKKs.

He'd be able to insert a few SKKs, yes, but it quickly damps out; and
each time he'd have to get a different group of nodes to connect to.
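As an aside, the cross-checking idea quoted above (fetch the SKK from
all peers before caching, and refuse to cache anything if they
disagree) could be sketched roughly as follows. All names here are
hypothetical illustrations, not Freenet's actual API:

```python
def should_cache_skk(key, candidate, peers, fetch):
    """Sketch of the quoted cross-check: before caching an SKK value,
    ask every peer for it; if any peer returns a conflicting value,
    refuse to cache anything.

    `fetch(peer, key)` is assumed to return that peer's stored value
    for `key`, or None if the peer does not have it.
    """
    for peer in peers:
        value = fetch(peer, key)
        if value is not None and value != candidate:
            # Conflicting answers: possibly a bogus-result attack,
            # so cache nothing at all.
            return False
    return True
```

Note that, as discussed above, in the common case your peers are
unlikely to have the SKK at all, so every fetch returns None and the
check passes trivially.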
And he wouldn't be able to overwrite SKKs in any case, at least not if
he wanted his requests to propagate to the best node...

> Evan Daniel
>
> On 7/5/06, Matthew Toseland <toad at amphibian.dyndns.org> wrote:
> > Can we exploit the darknet, and token passing, to provide for
> > artificially scarce KSKs? These would not be vulnerable to flooding
> > or squatting.
> >
> > Scarce Keyword Key inserts require SKK tokens.
> >
> > For each peer, we have:
> > - Output balance: We have N tokens from that peer, allowing us to
> >   send N SKK inserts to it when we want to.
> > - Input balance: We will accept M SKK inserts from that peer
> >   immediately.
> > - Maximum queue length: We will allow up to P SKK inserts to be
> >   queued from that peer at any given time.
> >
> > We also have a global bucket of tokens to be allocated to deserving
> > peers. This is filled to, say, 30 on creation of the node, and one
> > token is added every month. Tokens are allocated fairly to nodes'
> > input balances, when there are nodes to allocate them to.
> >
> > When we receive an SKK insert, we use one token from the input
> > balance (if there aren't any, we reject it), and we queue it. If we
> > can immediately allocate a token from the output balance of the
> > node we want to route the SKK insert to, we immediately forward it.
> > Otherwise we wait until we can. SKK inserts can remain queued for a
> > long time because of the extreme scarcity of tokens; we provide for
> > cancellation of a queued SKK insert, and confirmation that it is
> > still active.
> >
> > SKK inserts *do not* create tokens when they complete (this is the
> > main difference, other than that of scale, from the load balancing
> > scheme). As stated above, tokens are created on the initial
> > creation of the node, and periodically.
> >
> > SKK requests are exactly the same as any other request, except that
> > SKKs do not have unlimited cache propagation.
> > Specifically, if I fetch an SKK from the store of a node, it will
> > tell me by setting propLevel=2. If somebody then fetches it from my
> > cache, I will tell them propLevel=1. If somebody fetches it from
> > their cache, then they will set propLevel=0, meaning that the SKK
> > cannot be further propagated. The effect of all this is that while
> > the origin servers should not be overloaded, the data cannot be
> > propagated across the entire network without being inserted; an
> > attacker could propagate an SKK to nodes which send him requests
> > for it, but these would be local unless he happens to have the
> > right location. Obviously this would create a further incentive to
> > attack the location-swapping system, but that needs to be secured
> > anyway (and can't be on opennet, AFAICS).
> > --
> > Matthew J Toseland - toad at amphibian.dyndns.org
> > Freenet Project Official Codemonkey - http://freenetproject.org/
> > ICTHUS - Nothing is impossible. Our Boss says so.
> >
> > _______________________________________________
> > Tech mailing list
> > Tech at freenetproject.org
> > http://emu.freenetproject.org/cgi-bin/mailman/listinfo/tech

-- 
Matthew J Toseland - toad at amphibian.dyndns.org
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
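For concreteness, the two mechanisms in the quoted proposal - per-peer
token accounting for SKK inserts, and the propLevel cap on cache
propagation - might look roughly like this. This is a sketch under my
reading of the proposal; every name and data structure is hypothetical,
not Freenet code:

```python
from collections import deque

class PeerTokens:
    """Per-peer SKK token state: input/output balances and a bounded
    queue of pending inserts (hypothetical structure)."""
    def __init__(self, input_balance, output_balance, max_queue):
        self.input_balance = input_balance    # inserts we will accept from this peer
        self.output_balance = output_balance  # inserts this peer will accept from us
        self.max_queue = max_queue            # cap on inserts queued from this peer
        self.queue = deque()

def accept_insert(src, insert):
    """Accept an SKK insert from peer `src`: consume one input token
    and queue it; reject if no tokens remain or the queue is full."""
    if src.input_balance == 0 or len(src.queue) >= src.max_queue:
        return False
    src.input_balance -= 1
    src.queue.append(insert)
    return True

def forward_ready(src, dest):
    """Forward queued inserts to the next hop `dest` while it has
    output tokens available.  Completed inserts do NOT mint new
    tokens; the only supply is the global bucket (30 at node
    creation, plus one per month)."""
    forwarded = []
    while src.queue and dest.output_balance > 0:
        dest.output_balance -= 1
        forwarded.append(src.queue.popleft())
    return forwarded

def on_skk_response(prop_level):
    """Apply the propLevel rule as I read it: a store hit arrives
    with propLevel=2, each cache hop serves it one level lower, and
    at propLevel=0 the key may be returned but not cached further.
    Returns (may_cache, level_to_serve_from_our_cache)."""
    return prop_level > 0, max(prop_level - 1, 0)
```

On this sketch, a node would call `accept_insert` when an SKK insert
arrives and `forward_ready` whenever output tokens for the chosen next
hop are replenished from the monthly global bucket.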
