On Sat, May 23, 2009 at 10:10 AM, Matthew Toseland <toad at amphibian.dyndns.org> wrote:
>> > We want to make it easy, or nobody will do it. Poring over your trust list day after day is not most people's idea of fun.
>> >
>> > There are three approaches, given positive trust only. Depending on the level of effort exerted by the spammer, we move from one tradeoff between spam resistance and censorship resistance to the next. IMHO the last stage involves significant risk of censorship or at least collateral damage, while obviously having the strongest spam resistance.
>> >
>> > The first approach is to mark spammers as spammers, and limit the capacity of trusted identities to create new spammers by, for example, limits on the number of identities that can change in a trust list in one day. This means that everyone will have to mark all the spam identities as spam, much as in Frost with the Alice bot. It will deter newbies, but it should be usable for the determined. Note that it is *essential* on a positive-trust-only network that our spam markings override others' positive trust levels.
>> >
>> > The second approach is that, when we mark an identity as spam, WoT realises that an identity trusting that spammer also trusts a lot of other spammers, and proposes that we mark the parent identity as a spammer, at least for purposes of trust list trust. Hopefully this will be enough. The cost for every user will be to mark a few spammer posts as spam, and then accept WoT's recommendation to mark the parent as a spammer. "A few" will be an arbitrary parameter that will have to be argued about; higher means less chance of marking non-spammers as spammers, but at the cost of seeing more spam.
>> >
>> > The third approach is that when we mark the parent identity as spam, WoT suggests marking those who trust the parent identity also as spammers for purposes of trust list trust (if we trust them; if we don't, it's not our problem; we are trying to optimise the network *for other people*, particularly for newbies, here). We can try to be polite about this using ultimatums, since it's likely that they didn't deliberately choose to trust the spam-parent knowing he is a spam-parent - but if they don't respond in some period by removing him from their trust list, we will have to reduce our trust in them. This will cause collateral damage and may be abused for censorship, which might be even more dangerous than the current problems on FMS. However, if there is a LOT of spam, or if we want the network to be fairly spam-free for newbies, the first two options are insufficient. :|
>>
>> I'm not certain you're correct about this. The first two methods are, imho, sufficient to limit spam to levels that are annoying, but where the network is still usable. Even if they download a bunch of messages, a new user only has to click the "spam" button once per spamming identity, and those are limited in a well-defined manner (linear, with a modest coefficient, in the number of dummy identities the spammer is willing to maintain).
>>
>> My suspicion is that if all they can aspire to be is a nuisance, the spammers won't be nearly as interested. There is much more appeal to being able to DoS a board or the whole network than being able to mildly annoy the users.
>> So if we limit the amount of damage they can do to a sane level, the actual amount of damage done will be noticeably less than that limit.

> Can we agree that we should implement the second option in WoT then?
Yes, though I think it should default to only alerting the user in the case of relatively obvious abuse. As part of that I think it should give the identity in question a modest grace period to correct its trust list (if trust lists are published daily, 2 days seems appropriate). Given the churn rate limits, I don't think that's an issue.

>> There is another possible optimization we could do (I've just thought of it, and I'm not entirely certain that it works or that I like it). Suppose that Alice trusts Bob, who trusts Carol (legitimate but confused), who trusts Sam (a spammer), and Alice is busy computing her trust list. Bob has (correctly) marked Sam as a spammer. In the basic implementation, Alice will accept Sam. Bob may think that Carol is normally correct (and not malicious), and be unwilling to zero out his trust list trust for her. However, since this is a flow computation, we can place an added restriction: when Alice calculates trust, flow passing through Bob may not arrive at Sam, even if there are intermediate nodes. If Alice can find an alternate route (one not passing through Bob) for flow to reach Carol or Sam, she will accept Sam.
>>
>> This modification is in some ways a negative trust feature, since Bob's marking of Sam as a spammer is different from silence. However, it doesn't let Bob censor anyone he couldn't censor by removing Carol from his trust list. Under no circumstances will Alice using Bob's trust list result in fewer people being accepted than not using Bob's trust list. It does mean that Bob, as a member of the evil cabal of default trust list members for newbies, can (with the unanimous help of the cabal) censor identities in a more subtle fashion than simply not trusting anyone.
>>
>> The caveats: this is a big enough change that it needs a close re-examination of the security proof (I'm pretty sure it's still valid, but I'm not certain). If it sounds like an interesting idea, I can do that. Also, I don't think it's compatible with Ford-Fulkerson or the other simple flow-capacity algorithms. The changes required might be non-trivial, possibly to the point of changing the running time. Again, I could look at this in detail if it's interesting enough to warrant it.

> Worth investigating IMHO.

OK, I'll examine it further.

Evan Daniel
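
For concreteness, a rough sketch of the "second approach" suggestion logic agreed to above, together with the grace period Evan mentions. This is only an illustration: the class, method names, the threshold, and the grace-period constant are made up for the example, not the actual WoT plugin API.

// Illustrative sketch only -- identifiers and constants are invented,
// not real WoT plugin code.
import java.util.*;

class SpamParentDetector {
    // Plays the role of Matthew's "a few" parameter: how many identities
    // the local user has marked as spammers an identity may trust before
    // we suggest demoting its trust-list trust. 3 is arbitrary.
    static final int SPAMMER_TRUST_THRESHOLD = 3;
    // Grace period before the suggestion becomes an actual trust reduction:
    // if trust lists are published roughly daily, 2 days as Evan suggests.
    static final long GRACE_PERIOD_MS = 2L * 24 * 60 * 60 * 1000;

    /** Identities the local user has explicitly marked as spammers. */
    private final Set<String> markedSpammers;
    /** identity -> the identities it lists in its published trust list. */
    private final Map<String, Set<String>> trustLists;
    /** identity -> time we first suggested demoting it (for the grace period). */
    private final Map<String, Long> pendingSince = new HashMap<>();

    SpamParentDetector(Set<String> markedSpammers, Map<String, Set<String>> trustLists) {
        this.markedSpammers = markedSpammers;
        this.trustLists = trustLists;
    }

    /** Identities that trust "a lot" of marked spammers: candidates to demote. */
    List<String> suggestions() {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : trustLists.entrySet()) {
            long trustedSpammers = e.getValue().stream()
                    .filter(markedSpammers::contains).count();
            if (trustedSpammers >= SPAMMER_TRUST_THRESHOLD)
                out.add(e.getKey());
        }
        return out;
    }

    /** True once a suggested identity has had its grace period to clean up
     *  its trust list and has not done so. */
    boolean shouldDemote(String identity, long now) {
        if (!suggestions().contains(identity)) {
            pendingSince.remove(identity);   // it cleaned up; forget it
            return false;
        }
        Long since = pendingSince.putIfAbsent(identity, now);
        return since != null && now - since >= GRACE_PERIOD_MS;
    }
}

The intent of the threshold is the tradeoff described above: a higher value means less chance of demoting a non-spammer, at the cost of seeing more spam before the suggestion fires.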
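A minimal sketch of the flow restriction Evan proposes, with plain breadth-first reachability standing in for the capacitated flow computation and identity names as plain strings; again, the identifiers are illustrative, not real WoT code.

// Illustrative sketch only: reachability is used in place of the real
// capacity-limited flow computation, just to show the shape of the
// restriction (flow through a node that marked the target as a spammer
// may never arrive at the target).
import java.util.*;

class RestrictedTrustFlow {
    /** identity -> identities it gives positive trust-list trust to. */
    private final Map<String, Set<String>> trustEdges;
    /** identity -> identities it has explicitly marked as spammers. */
    private final Map<String, Set<String>> spamMarks;

    RestrictedTrustFlow(Map<String, Set<String>> trustEdges,
                        Map<String, Set<String>> spamMarks) {
        this.trustEdges = trustEdges;
        this.spamMarks = spamMarks;
    }

    /** Would `owner` (Alice) accept `target` (Sam)? */
    boolean accepts(String owner, String target) {
        Deque<String> queue = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        queue.add(owner);
        visited.add(owner);
        while (!queue.isEmpty()) {
            String node = queue.poll();
            if (node.equals(target)) return true;
            // An identity that has marked `target` as a spammer (Bob) blocks
            // all flow toward `target` that would otherwise pass through it,
            // even via intermediate nodes such as Carol. If Alice herself
            // marked Sam, the search stops immediately: her marking overrides.
            if (spamMarks.getOrDefault(node, Collections.emptySet()).contains(target))
                continue;
            for (String next : trustEdges.getOrDefault(node, Collections.emptySet())) {
                if (visited.add(next)) queue.add(next);
            }
        }
        return false;
    }
}

One plausible reading of the Ford-Fulkerson caveat above: the set of blocked nodes depends on which target identity is being evaluated, so the effective graph changes per target, and a single max-flow pass over one fixed graph no longer applies directly.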