On Thursday 07 May 2009 11:27:44 Matthew Toseland wrote:
> On Wednesday 06 May 2009 23:02:11 Evan Daniel wrote:
> ...
> > I'll leave the precise descriptions of the two algorithms to those who
> > are actually writing the code for now.  (Though I have read the
> > Advogato paper and feel I understand it fairly well -- it's rather
> > dense, though, and I'd be happy to try to offer a clearer or more
> > detailed explanation of the paper if that would be helpful.)  However,
> > one of the properties of the Advogato metric (which the WoT algorithm,
> > AIUI, does not have) is worth discussing, as I think it is
> > particularly relevant to issues around censorship that are frequently
> > discussed wrt WoT and Freenet.  Specifically, Advogato does not use
> > negative trust ratings, whereas both WoT and FMS do.
> > 
> > The concept of negative trust ratings has absolutely nothing to do
> > with the arbitrary numbers one person assigns to another in their
> > published trust list.  Those can be on any scale you like, whether
> > it's 0-100, 1-5, or -100 to +100.  A system can have or not have
> > negative trust properties on any of those scales.  Instead, negative
> > trust is a property based on how the trust rating computed for an
> > identity behaves as other identities *change* their trust ratings.
> > Let's suppose that Alice trusts Bob, and is trying to compute a trust
> > rating for Carol (whom she does not have a direct rating for).  Alice
> > has trust ratings for people not named, some of whom have ratings for
> > Carol published.  If the trust computation is such that there exists a
> > rating Bob can assign to Carol such that Alice's rating of Carol is
> > worse than if Bob had not rated her at all, then the system exhibits
> > negative trust behaviors.
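
To make this property concrete, here is a toy sketch in Python. The two 
scoring rules are invented for illustration; they are not the real WoT or 
Advogato computations:

    def averaged(ratings):
        # WoT/FMS-flavoured rule: average every published rating.
        return sum(ratings) / len(ratings)

    def positive_only(ratings):
        # Advogato-flavoured rule: a rating can add trust but never
        # subtract it, so a negative rating counts the same as silence.
        return max((max(r, 0) for r in ratings), default=0)

    others = [60, 70]   # ratings for Carol from Alice's other trustees
    bob = -100          # the worst rating Bob can publish

    # Averaging: Bob drags Carol below the no-rating baseline (65 -> 10),
    # so the metric exhibits negative trust behaviour.
    print(averaged(others), averaged(others + [bob]))

    # Positive-only: nothing Bob publishes makes Carol's score worse than
    # if he had said nothing at all (70 either way).
    print(positive_only(others), positive_only(others + [bob]))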
> > 
> > This is, broadly, equivalent to the ability to censor a poster of FMS
> > or WoT by marking them untrusted.  There has been much debate over the
> > question of censoring posters never, only for spamming, for spamming
> > plus certain objectionable speech, what should be objectionable,
> > whether you should censor someone who publishes a trust list that
> > censors non-spammers, etc.  In my opinion, all of that discussion is
> > very silly to be having in the first place, since the answer is so
> > well documented: simply don't use a trust metric with negative trust
> > behaviors!
> > 
> > The problem of introductions, etc is not magically solved by the
> > Advogato algorithm.  However, I don't think it is made any harder by
> > it.  The dual benefits of provable spam resistance and lack of
> > censorship are, in my opinion, rather compelling.
> 
> The current bootstrapping mechanism relies on negative trust. A newly solved 
> captcha has to yield some amount of trust, or nobody will see the new 
> poster's messages and he won't gain any permanent trust. Which means if he 
> starts spamming, he should be blocked when somebody, or some number of 
> somebodies, say he is spamming, and *NOT* only when the person he 
> bootstrapped through says he is spamming. If we do the latter, then a spammer 
> would simply announce through idle Freetalk instances: if the user does not 
> visit the node, he won't mark the spammer as a spammer, and therefore nobody 
> will be able to mark him down except by marking down the node he bootstrapped 
> through.
> 
> What is the alternative? AFAICS making the node through which the introduction 
> takes place personally responsible is not going to work. That leaves:
> - The trust gained from solving a captcha could be limited in duration and 
> number of messages. After a certain number of messages or a certain time 
> (whichever comes first), it could disappear.

Obviously we would need to automatically give out trust on replying to a 
message, or something, to make this work.
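
Something like the following could tie the two together: a captcha grant 
that lapses after a message count or a timeout, plus a small permanent 
score granted on reply. All names and limits here are invented for the 
sketch, not Freetalk's actual data model:

    import time

    CAPTCHA_MSG_LIMIT = 20        # invented: messages on captcha trust alone
    CAPTCHA_TTL = 7 * 24 * 3600   # invented: seconds before the grant lapses

    class CaptchaGrant:
        def __init__(self, now=None):
            self.granted_at = time.time() if now is None else now
            self.messages_seen = 0

        def still_valid(self, now=None):
            now = time.time() if now is None else now
            return (self.messages_seen < CAPTCHA_MSG_LIMIT
                    and now - self.granted_at < CAPTCHA_TTL)

    def message_visible(identity, grants, permanent_trust):
        # Count the message against the grant; the identity stays visible
        # on either permanent trust or a still-live captcha grant.
        grant = grants.get(identity)
        if grant is not None:
            grant.messages_seen += 1
        return (permanent_trust.get(identity, 0) > 0
                or (grant is not None and grant.still_valid()))

    def on_reply(target, permanent_trust):
        # Replying converts captcha trust into a small permanent score,
        # so legitimate newcomers outlive the grant automatically.
        permanent_trust[target] = max(permanent_trust.get(target, 0), 10)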

BUT here's the big problem with this:

Once a user has given an identity some trust, they are eternally responsible 
for keeping that trust up to date.

This sucks with different contexts, or even different boards. If an identity 
posts interesting traffic on one board, but posts endless spam on another 
board, many people on the first board who do not subscribe to the second 
board will have him rated highly. Those who subscribe to the second board 
will not rate him. Everyone on the second board will have to mark down the 
spammer manually, and they will probably mark down the people on the first 
board too, resulting in a schism!

Of course, it also depends on what you mean by spam. Because there is a single 
Freetalk message output queue, which even those reading his posts on the 
first board have to subscribe to, there can be limits on how many messages he 
can post in a given period without getting automatically marked as a spammer. 
This limits the impact of this attack.
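
For instance, a sliding-window counter over that single output queue would 
do; the window size and limit below are invented numbers:

    from collections import deque
    import time

    class MessageQuota:
        # Invented policy: 30 messages per rolling hour before the
        # identity is automatically marked as a spammer.
        def __init__(self, window=3600, limit=30):
            self.window, self.limit = window, limit
            self.stamps = deque()

        def over_quota(self, now=None):
            # Record one posted message; True means the quota is blown.
            now = time.time() if now is None else now
            self.stamps.append(now)
            while self.stamps and self.stamps[0] <= now - self.window:
                self.stamps.popleft()
            return len(self.stamps) > self.limit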

On a larger scale, an identity might get trust points from chatting, while 
inserting bogus files into a filesharing system; those giving him trust for 
the former don't know about the latter, so it would be unfair to punish them.

Fundamentally, once you have given somebody some trust, you are responsible 
for their good behaviour *in all contexts* in which that trust is valid, 
including those that you are not a part of.

Now let's suppose that the spammer creates an identity to engage in meaningful 
conversation with, and then creates a bunch of other identities which he 
marks as trustworthy, which he uses to spam - preferably to spam boards other 
than the one that he is chatting on. In a positive trust only system, he can 
again gain trust from the one board, and then spend it by spamming the other 
boards, and without running into any message posting quotas. Separate trust 
list trust might help, but only if it can be negative: how would you get 
trust list trust anyway? The only practical means of evaluating whether to 
give an identity positive trust is by the messages it posts!
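
To see why message quotas alone don't save us here, consider a toy one-hop, 
positive-only propagation rule (invented for the example):

    def propagated_trust(direct, trust_lists, identity):
        # Computed trust: your own direct score if you have one, else
        # half the best score of anyone whose trust list includes you.
        if identity in direct:
            return direct[identity]
        best = max((direct.get(lister, 0)
                    for lister, listed in trust_lists.items()
                    if identity in listed), default=0)
        return best // 2

    direct = {"chatter": 80}    # trust earned by talking on board one
    trust_lists = {"chatter": {"puppet%d" % i for i in range(100)}}

    # Every puppet inherits a usable score (40), posts under its own
    # per-identity quota, and spams the other boards in parallel.
    print(propagated_trust(direct, trust_lists, "puppet7"))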

On the other hand, when an identity is marked as a spammer, we can prompt the 
user on whether he wants to reduce the trust level on the sponsoring identity 
as well. We can display any other spam identities that he trusts, and we can 
even automatically reduce the trust on the identities that trust a spammer. 
This will again tend to cause a schism, unless trust list trust can be 
tentative. Tentative trust would be obtained automatically along with message 
trust, but could be ignored if there are non-tentative scores.
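
A sketch of how that resolution rule might look (field names hypothetical):

    def trust_list_score(explicit, tentative):
        # explicit: trust-list scores the user set by hand.
        # tentative: scores granted automatically alongside message trust.
        # Hand-set scores always win; tentative ones only fill gaps, so
        # they can be withdrawn wholesale when a sponsored identity turns
        # out to spam, without touching anyone's deliberate ratings.
        if explicit:
            return sum(explicit) / len(explicit)
        if tentative:
            return sum(tentative) / len(tentative)
        return None   # no opinion either way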