On Tuesday 28 October 2003 01:38 am, Martin Stone Davis wrote:
> > This has been dealt with as far as I am concerned. All we have to do is
> > cache normally - in the short term, cache if and only if pcaching says
> > so, taking into account client access, and have a higher level cache of
> > only what is requested by the user, which is not accessible to requests
> > from off the node, which always caches in a strict LRU, is wiped on
> > startup, and uses one-time in-RAM encryption keys.
>
> ...which protects us from the "You have too many naughty keys" attack.
> To protect us from the "You tried to HIDE too many naughty keys" attack,
> we also need premix routing. Right?
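The two tiers described above could be sketched roughly as follows. This is an illustrative mock-up, not Fred's actual code: the class and function names, the XOR stand-in for real encryption, and the 10% cache fraction are all assumptions made up for the example.

```python
import os
from collections import OrderedDict

class LocalRequestCache:
    """Higher-level cache of only what the local user requested.

    Strict LRU, never answers off-node requests, wiped on startup
    (it lives only in process memory), and obscured under a one-time
    in-RAM key that is lost on restart.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.key = os.urandom(32)  # one-time in-RAM key; gone on restart
        self.entries = OrderedDict()  # insertion order tracks recency

    def _obscure(self, data):
        # Stand-in for real encryption: XOR with the session key.
        # XOR is its own inverse, so calling it twice round-trips.
        return bytes(b ^ self.key[i % len(self.key)]
                     for i, b in enumerate(data))

    def put(self, chk, data):
        self.entries[chk] = self._obscure(data)
        self.entries.move_to_end(chk)
        while len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # strict LRU eviction

    def get(self, chk, remote=False):
        if remote or chk not in self.entries:
            return None  # this tier is invisible to off-node requests
        self.entries.move_to_end(chk)
        return self._obscure(self.entries[chk])

def should_cache(hops_to_live, initial_htl, cache_fraction=0.1):
    """Probabilistic caching for the normal store: cache only when this
    node sits in roughly the last X% of the request chain, so the keys
    it holds match requests it would plausibly have routed anyway."""
    return hops_to_live <= initial_htl * cache_fraction
```

The point of the split is that a remote observer querying the node only ever sees the probabilistically-populated store, whose contents are explainable by routing position, while the user's own browsing stays in the unreadable local tier.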
Well, it protects you from both, just not from "You requested an unusually
large number of naughty keys." If you are caching probabilistically, you
won't ever have too few, because they ARE in your store; and as long as
caching is probabilistic and not random, it would be hard to look at your
data store and say you have too many, because you would only cache them if
you were in the last X% of the request chain, so it is likely that you
would have seen those requests anyway.

What you don't want to do is say "Well, that should defend us, so to be
safe we shouldn't have a 100% cache on our node." The reason you can't do
this is that if you then refresh a browser a few times, you have requested
all that data several times. Not only would this make the contents of your
store harder to explain, it would be obvious on the network, with just a
few nodes monitoring you, that a LOT of those data requests are coming
from your node.

So you can make this sort of attack fairly difficult by implementing a
second, encrypted store. However, to truly solve it you should have premix
routing. This also has the advantage that if the encrypted data store is
broken, its contents are still defensible.

_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
