I don't have any specific ideas for how to choose whether to ignore
identities, but I think you're making the problem much harder than it
needs to be.  The problem is that you need to prevent spam, but at the
same time prevent malicious non-spammers from censoring identities who
aren't spammers.  Fortunately, there is a well documented algorithm
for doing this: the Advogato trust metric.

The WoT documentation claims it is based upon the Advogato trust
metric.  (Brief discussion: http://www.advogato.org/trust-metric.html
Full paper: http://www.levien.com/thesis/compact.pdf )  I think this
is wonderful, as I think there is much to recommend the Advogato
metric (and I pushed for it early on in the WoT discussions).
However, my understanding of the paper, and of what is actually
implemented, is that the WoT code does not in fact implement it.
Before I go into detail, I should point out that I haven't read the
WoT code and am not fully up to date on the documentation and
discussions; if I'm way off base here, I apologize.

The Advogato metric is designed from the ground up to have strong
spam-resistance properties.  In fact, it has a mathematical proof of
how strong they are: the amount of spam that gets through is limited
by the number of "confused" nodes, that is, nodes who are not spammers
(or simple shills of spammers), but who have mistakenly marked
spammers as trustworthy.  The existence of this proof is, to me, so
compelling an argument in favor of using the metric that I believe any
changes to the algorithm that do not come with an updated version of
the proof should be looked upon with extreme suspicion.
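To give a flavor of how Advogato achieves this, here is a toy sketch of the metric as the paper describes it: trust is computed as a maximum network flow over the certification graph, with per-node capacities that shrink with distance from a seed identity, so a confused node's capacity bounds how many spam identities it can push through.  The graph, names, and capacity schedule below are made up for illustration; this is not the WoT code or the actual Advogato implementation.

```python
from collections import deque

def advogato_accept(edges, seed, capacities):
    """Advogato-style acceptance via maximum flow (illustrative sketch).

    edges:      dict node -> set of nodes it certifies (trust list)
    seed:       the identity the computation is rooted at
    capacities: per-distance node capacities, decreasing with distance
                from the seed (e.g. [4, 2, 1]); 0 beyond the list
    """
    # BFS distances from the seed over certification edges
    dist = {seed: 0}
    q = deque([seed])
    while q:
        u = q.popleft()
        for v in edges.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)

    INF = 10 ** 9
    SINK = ('sink',)
    cap = {}  # residual capacities of the flow network

    def add(u, v, c):
        cap[(u, v)] = cap.get((u, v), 0) + c
        cap.setdefault((v, u), 0)

    # Split each node x into x_in -> x_out; one unit to SINK marks acceptance,
    # the rest of x's capacity is available for onward (downstream) flow.
    for x, d in dist.items():
        c = capacities[d] if d < len(capacities) else 0
        add((x, 'in'), (x, 'out'), max(c - 1, 0))
        add((x, 'in'), SINK, 1)
        for y in edges.get(x, ()):
            add((x, 'out'), (y, 'in'), INF)

    src = (seed, 'in')

    def augment():
        # BFS for a shortest augmenting path, then push one unit along it
        parent = {src: None}
        q = deque([src])
        while q and SINK not in parent:
            u = q.popleft()
            for (a, b), c in cap.items():
                if a == u and c > 0 and b not in parent:
                    parent[b] = u
                    q.append(b)
        if SINK not in parent:
            return False
        v = SINK
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        return True

    while augment():
        pass

    # accepted = nodes whose unit edge to the sink is saturated
    return {x for x in dist if cap[((x, 'in'), SINK)] == 0}
```

Even if "bob" is confused and certifies ten spam identities, his capacity limits how many of them can be accepted; the spam that gets through is bounded by the capacity of the confused nodes, which is the property the proof formalizes.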

I'll leave the precise descriptions of the two algorithms to those who
are actually writing the code for now.  (Though I have read the
Advogato paper and feel I understand it fairly well -- it's rather
dense, though, and I'd be happy to try to offer a clearer or more
detailed explanation of the paper if that would be helpful.)  However,
one of the properties of the Advogato metric (which the WoT algorithm,
AIUI, does not have) is worth discussing, as I think it is
particularly relevant to issues around censorship that are frequently
discussed wrt WoT and Freenet.  Specifically, Advogato does not use
negative trust ratings, whereas both WoT and FMS do.

The concept of negative trust ratings has absolutely nothing to do
with the arbitrary numbers one person assigns to another in their
published trust list.  Those can be on any scale you like, whether
it's 0-100, 1-5, or -100 to +100.  A system can have or not have
negative trust properties on any of those scales.  Instead, negative
trust is a property based on how the trust rating computed for an
identity behaves as other identities *change* their trust ratings.
Let's suppose that Alice trusts Bob, and is trying to compute a trust
rating for Carol (for whom she does not have a direct rating).  Alice
also has trust ratings for other people, some of whom have published
ratings for Carol.  If the trust computation is such that there exists a
rating Bob can assign to Carol such that Alice's computed rating of Carol is
worse than if Bob had not rated her at all, then the system exhibits
negative trust behaviors.
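A toy illustration of that definition, using two hypothetical scoring formulas (neither is the actual WoT, FMS, or Advogato computation, and the names and numbers are made up):

```python
# Ratings for Carol published by identities Alice trusts, Bob not yet included.
others = [40, 60]

def averaged_score(ratings):
    """Hypothetical metric WITH negative trust behavior: Alice's rating of
    Carol is the plain average of the ratings published by people she trusts."""
    return sum(ratings) / len(ratings)

# Bob rates Carol -100.  Alice's computed rating of Carol drops below what it
# would have been had Bob said nothing at all: negative trust behavior.
assert averaged_score(others + [-100]) < averaged_score(others)

def endorsement_score(ratings):
    """Hypothetical metric WITHOUT negative trust behavior: a rating can only
    add endorsement; negative ratings contribute nothing, so no rating Bob
    publishes can push Carol below the no-rating baseline."""
    return sum(max(r, 0) for r in ratings)

# The worst Bob can do is contribute zero -- exactly as if he hadn't rated her.
assert endorsement_score(others + [-100]) == endorsement_score(others)
```

In the first metric Bob can actively harm Carol's standing with Alice; in the second his worst option is silence, which is the shape of guarantee Advogato's flow-based computation provides.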

This is, broadly, equivalent to the ability to censor a poster on FMS
or WoT by marking them untrusted.  There has been much debate over the
question of censoring posters never, only for spamming, for spamming
plus certain objectionable speech, what should be objectionable,
whether you should censor someone who publishes a trust list that
censors non-spammers, etc.  In my opinion, all of that discussion is
very silly to be having in the first place, since the answer is so
well documented: simply don't use a trust metric with negative trust
behaviors!

The problem of introductions, etc is not magically solved by the
Advogato algorithm.  However, I don't think it is made any harder by
it.  The dual benefits of provable spam resistance and lack of
censorship are, in my opinion, rather compelling.

Evan Daniel

On Wed, May 6, 2009 at 5:00 PM, xor <x...@gmx.li> wrote:
> Hello,
>
> I am currently refactoring the WoT plugin to allow per-context trust values.
>
> Let me first explain how WoT currently works, so you can understand what I mean:
> - There is a set of Identities. An identity has an SSK URI, a nickname, a set
> of contexts (and a set of properties). An "own identity" is an identity of the
> user of the plugin; he owns the SSK insert URI, so he can insert the identity.
>
> - Each identity can offer a Set<String> of contexts. A context is a client
> application, currently there are: "Introduction" (the
> given identity publishes captchas to allow other identities to get known by
> the web of trust by solving a captcha - if you solve one, you get on the
> publisher's trust list) and "Freetalk", which is the messaging system based on
> WoT (comparable to FMS) which I am implementing.
>
> - Identities currently can give each other a trust value from -100 to +100.
> Each trust relationship is stored as an object of class Trust in the database.
>
> - From all the Trust values an identity has received, from the trust values its
> trusters have received, etc., the WoT calculates and stores a "Score" object
> for each identity in the database. The score is the calculated rating which
> the identity receives from the whole of the other identities.
>
> Not just one Score object is stored - for each pair of [OWN identity
> "treeOwner", identity "target"] a Score object is calculated. The reason is
> that the score of an identity depends on what trust the treeOwner gives to the
> identity itself, what trust its trusters have received, their trusters, and so
> on.
>
> .... Considering the fact that there will be many more client applications
> (for example: search indexing, file sharing, web of trust based code review
> (imagine Freenet compiling its updates itself as soon as all code has received
> enough "review trust")), we came to the conclusion that it must be possible to
> assign a trust value which is only valid in the context of a single client
> application and does not affect any others. If someone publishes crap in the
> file sharing that does not prove that he is not able to produce nice messages
> in Freetalk, and vice versa.
>
> Therefore, I will implement per-context trust and score now. I will not wait
> until Freetalk is done, even though Freetalk is important, because the
> per-context trust values change the database model very much, and writing
> code to convert legacy databases is possible but not guaranteed to be free
> of bad side effects. Having a "finished" database model before deploying
> stuff is a good idea.
>
> Now the question is: Which logic should be used to decide when to ignore an
> identity, i.e. when to NOT download it anymore and not add the identities
> which it trusts to the database?
> Several ideas:
> - Have a global "trust list trust" as in FMS which is a rating of whether the
> identity as a whole is trustworthy and should be downloaded, only decide upon
> the score value there whether to download it or not, ignoring all scores from
> client apps.
> - Download it as long as ONE client app has positive score for that identity:
> Not possible, because then someone could create zillions of spam identities to
> spam the WoT and keep every WoT plugin downloading them: write good posts in
> Freetalk, but do not add the Freetalk context to the spam identities, so that
> Freetalk users might not notice them. In general, this attack could be used
> with any other client app where it is easy to gather positive trust.
> - Download as long as ALL client apps have positive trust: Not possible; the
> whole goal of per-context trust was to prevent identities from being judged as
> a whole just because of their misbehavior in ONE client app.
>
> - Any other ideas by anyone?
>
> - Because I do not have any other ideas, I came to the conclusion that it is
> necessary to have a separate "global" trust context (could be called
> "TrustList" or "WoT" or whatever) which is used to judge whether an identity
> should be downloaded or not. This means that each client app will need UI to
> manage 2 trust values per identity. This sucks in terms of usability and I'd
> like to avoid it, yet IF we want to avoid it, someone must come up with a
> smart idea for the "download decision".
>
> So does anyone have an idea which avoids the need for a "global" trust
> context? If not, I will implement it with the global context and call it
> "TrustList".
>
> Greetings, xor
>
>
> _______________________________________________
> Devl mailing list
> Devl@freenetproject.org
> http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl
>