Trust networks are difficult to get working right, especially in a
decentralized fashion. Figuring out what exactly I mean by "trust
network" is hard in and of itself, even for me! But here's my best
shot at what I think we would want in a trust network, and what would
make it most likely to work.

What I want to do is make a trust network that gets separated into
semantic categories or "actions". For instance, a man who brought me
flowers a thousand times might not be a man I trust with my life. But a
man who saved my life I would be inclined to trust to do so a second
time. Conversely, a man who saved my life I would still have no reason
to trust as a reliable source of flowers. So trust itself is a
categorical sort of thing. It depends not just on how much they did for
you in the past, but on how much of *what* they did for you in the
past. If a man murdered your friend with an axe, you would have very
good reason to trust that this man will now murder the rest of you, and
would hopefully take measures to take his axe away. You certainly would
want to preserve that trust relationship (to avoid being axe murdered),
and yet you don't want it to spill over into your other trust
relationships, as an axe murderer may be a very untrustworthy person to
loan money to.

I think there would therefore have to be some kind of semantic language
behind any trust network. Each person would have a list, not just of who
they trust, but who they trust to perform what action. If the action is
beneficial, such as someone repaying your loan, or if the action is
malignant, such as someone skipping out on payments, it's still a matter
of trust. You're trying to predict how they will act in the future, and
the more reliable your predictions can be, the less you get fooled by
scams and con artists.
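As a rough sketch of trust-as-prediction (the +1/-1 outcome scoring and the blending weight here are purely illustrative assumptions, not part of any real protocol), each observation of a specific action could nudge a running estimate of how that person will perform that action next time:

```python
# A minimal sketch: trust in a specific (person, action) pair as an
# exponential moving average over observed outcomes, where each
# observation is scored +1 (kept their word) or -1 (betrayed it).
# The 0.3 weight is an arbitrary illustrative choice.

def update(trust, outcome, weight=0.3):
    """Blend a newly observed outcome into the current trust estimate."""
    return (1 - weight) * trust + weight * outcome

trust_repay = 0.0                # no history yet for this person, this action
for outcome in (+1, +1, +1):     # repaid three loans in a row
    trust_repay = update(trust_repay, outcome)

print(round(trust_repay, 3))     # 0.657 -- fairly reliable so far
```

The point of keeping the estimate per action is exactly the categorical one above: three repaid loans move `trust_repay` and nothing else.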

Newsgroups have one such language. Each group is named by hierarchical
topic. A poster might act trollish and brusque in one group, yet proper
and modest while posting in another, and they might use the same PGP
key for both. That happens a lot actually: the environment of the group
determines what attitude a poster will bring to it.

But the PGP Web of Trust has no such categories. In fact it doesn't
refer to trust at all, but is merely a way to extend already-centralized
identity tracking systems. What I'm talking about is a different kind of
trust network, where in the example of newsgroups you would have three
values for each trust entry: the signing key you're trusting, which
newsgroup you're talking about, and how much you trust them to post that
which you prefer to read. The action in effect is always "posting what
you want to read" but it adds the clause "in group _____" since many
people post different kinds of messages in different groups. A sage at
theoretical physics could be a blind wisecracking idiot at fly fishing.
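The three-value entry described above could be sketched like this (the key fingerprint, group names, and scores are all made up for illustration):

```python
# A sketch of a three-value trust entry: (signing key, newsgroup) ->
# how much you trust that key to post what you want to read in that
# group. All values here are illustrative.
ratings = {}  # (key_fingerprint, newsgroup) -> trust in [0.0, 1.0]

def rate(key, group, value):
    ratings[(key, group)] = value

def trust(key, group):
    # Unrated (key, group) pairs default to zero trust.
    return ratings.get((key, group), 0.0)

# The same PGP key can carry very different trust in different groups:
rate("0xDEADBEEF", "sci.physics", 0.95)   # a sage at theoretical physics
rate("0xDEADBEEF", "rec.fishing", 0.05)   # an idiot at fly fishing

print(trust("0xDEADBEEF", "sci.physics"))  # 0.95
print(trust("0xDEADBEEF", "rec.fishing"))  # 0.05
```

Keying the table on the pair, rather than on the key alone, is what keeps the physics reputation from leaking into the fishing group.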

Another possible way to conceptualize these actions is through those
online text dungeons accessible through telnet. Not the Nethack-style
ones, but the Interactive Fiction-style ones. Generally they work by
having a number of actions you can perform in any given situation:
"sit", "stand", "attack", "go", "jump", and so forth. Each of these
"action objects", regardless of the actual words or syntax used for
them, is a separate semantic concept, which you could use as the basis
for trusting people. If someone benefited you in the past with the
"give" action, you would not then be obliged to trust them not to abuse
the "steal" action.
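Treating each verb as its own trust category might look like this (the verb vocabulary and player scores are illustrative, not from any actual MUD):

```python
# A sketch where each MUD-style action verb is its own trust category.
ACTIONS = {"sit", "stand", "attack", "go", "jump", "give", "steal"}

ledger = {}  # (player, action) -> trust score

def observe(player, action, score):
    if action not in ACTIONS:
        raise ValueError(f"unknown action: {action}")
    ledger[(player, action)] = score

def trust(player, action):
    return ledger.get((player, action), 0.0)

observe("grue_slayer", "give", 0.8)   # generous with us in the past

# Trust earned under "give" implies nothing about "steal":
print(trust("grue_slayer", "give"))   # 0.8
print(trust("grue_slayer", "steal"))  # 0.0
```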

So that's pretty much what I would like to do: implement a trust
network that determines real value, or at least value predicted as
closely as humanly possible. In doing so there would be different
"kinds" of trust, perhaps including meta-categories that affect the
trust network itself (such as the often sought-after action of
"recommending trustworthy people to me"). Each category of trust is a
separate semantic concept, though not necessarily an independent one.
For those concepts which depend on each other, the trust can bleed over
to some degree, but some concepts may be so diametrically opposed (axe
murdering and money lending) that trusting in one will actually reduce
the trust in the other.
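One simple way to model that bleed-over (assuming, purely for illustration, a hand-written correlation table between categories) is to let related categories share evidence and opposed categories subtract it:

```python
# A sketch of trust "bleeding over" between related categories.
# The correlation values are illustrative assumptions: +1 means the
# categories move together, -1 means trust in one is evidence
# against the other.
CORRELATION = {
    ("repay_loan", "keep_secret"): +0.5,  # related kinds of reliability
    ("axe_murder", "repay_loan"):  -0.9,  # diametrically opposed
}

def correlation(a, b):
    if a == b:
        return 1.0
    return CORRELATION.get((a, b), CORRELATION.get((b, a), 0.0))

def inferred_trust(direct, target):
    """Combine direct scores, weighted by how related each category is."""
    total = sum(correlation(cat, target) * score
                for cat, score in direct.items())
    return max(-1.0, min(1.0, total))  # clamp to [-1, 1]

direct = {"axe_murder": 1.0}  # we fully trust him to murder, reliably
print(inferred_trust(direct, "axe_murder"))  # 1.0
print(inferred_trust(direct, "repay_loan"))  # -0.9 -- trust actively reduced
```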

In the end I think we can fashion a sort of trust network where we have
a good idea of what we can expect from each other in the future, and a
good idea of who we can depend on to do what. A person's government
identity isn't as important as whether or not they will behave
benevolently toward you, and even someone with no identity at all would
be able to create an identity with zero reputation, then spend time
committing selfless acts to build up enough of a reputation that others
will trust them in return.

There are a few problems with such a network: people building up trust
and then betraying it all at once, for instance. I think those problems
are present in any of our current systems, however, and though this
system might not catch every problem that society faces, it would still
be a great improvement over the haphazard helplessness we all now
suffer from, never knowing exactly who to trust.
_______________________________________________
p2p-hackers mailing list
[email protected]
http://lists.zooko.com/mailman/listinfo/p2p-hackers