On Wed, Jul 18, 2012 at 3:32 PM, James A. Donald <[email protected]> wrote:

> One universal metric of trust for all is necessarily authoritarian and
> inaccurate.  N metrics of trust for N peers are apt to run into scaling law
> problems on very large networks.
>

On the contrary, I think each peer doing their own data crunching and
determining who they trust is a much more scalable solution than having a
central trust tracker that has to scale to all peers.


> Further, if one applies an algorithm that requires data for the entire
> network, there probably will not be data for the entire network.  For large
> networks, need an algorithm where some small and manageable number of nodes
> are more important in evaluating trust than most other nodes, more central.
>

The algorithm doesn't need the totality of the data on the network;
rather, each peer has a "local" view of the peers they particularly
interact with, and in general you will want to optimize that set of peers
to be the ones "closest" to you in the network topology as measured by
bandwidth, latency, etc.
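As a sketch of that selection step (the peer names and the RTT probe are
illustrative, not part of any particular implementation):

```python
import heapq

def closest_peers(candidates, rtt_of, k=8):
    """Keep the k candidate peers with the lowest measured round-trip
    time. `rtt_of` stands in for whatever latency probe the transport
    layer provides (hypothetical here); ties break arbitrarily. A real
    selection would likely blend RTT with observed bandwidth."""
    return heapq.nsmallest(k, candidates, key=rtt_of)

# e.g. with previously measured RTTs in seconds:
rtts = {"peer_a": 0.120, "peer_b": 0.030, "peer_c": 0.200, "peer_d": 0.045}
nearest = closest_peers(rtts, rtts.get, k=2)
```

heapq.nsmallest keeps this O(n log k) even when the candidate set is
large, which matters if you're scoring every peer you've ever seen.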

Aside from the problem of avoiding malicious peers, figuring out the
network topology is a hard problem in and of itself; P4P, for example, has
been proposed for this. Using collaborative filtering to predict which
peers will offer the most bandwidth/lowest latency when you connect to
them is one way of attempting to solve this problem (although, unlike P4P,
the solutions it finds may not be the most cost-effective for ISPs).


> Slope One is not strictly applicable to a large peer to peer network. It
> is applied, as at Amazon.com, to a star network.  What one needs is an
> algorithm that becomes slope one in the extreme case that a single central
> node is the one node that matters, but which is tolerably efficient in the
> case that there are a moderate number of central nodes that are more
> important than the others.


Slope One is just an example of the type of algorithm I would like to use.
That said, I probably wouldn't use Slope One.

The important point is that each peer applies the collaborative filtering
algorithm independently, based on their limited knowledge of the network.
The inputs are their empirical knowledge of other peers, plus secondhand
data collected from peers they have decided to trust (i.e. peers they
haven't themselves caught behaving maliciously), or perhaps from a ranked
subset of their best "trading partners".
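
For concreteness, here is a minimal Weighted Slope One sketch in that
setting, assuming each peer keeps a table of scores of the form
{rater: {rated peer: score}} built from its own observations plus
secondhand reports from trusted peers. The table layout, names, and
scoring scale are all illustrative assumptions, not a spec:

```python
from collections import defaultdict

def slope_one_predict(ratings, me, target):
    """Weighted Slope One prediction of `me`'s score for peer `target`.
    `ratings` maps rater -> {rated peer -> score}: our own firsthand
    scores plus secondhand scores from trusted peers."""
    # dev[j] = average of (score(target) - score(j)) over raters who
    # scored both target and j
    dev_sum = defaultdict(float)
    dev_cnt = defaultdict(int)
    for scores in ratings.values():
        if target not in scores:
            continue
        for j, r_j in scores.items():
            if j != target:
                dev_sum[j] += scores[target] - r_j
                dev_cnt[j] += 1

    # combine deviations with our own scores, weighted by support
    num = den = 0.0
    for j, r_j in ratings[me].items():
        if dev_cnt.get(j):
            num += (dev_sum[j] / dev_cnt[j] + r_j) * dev_cnt[j]
            den += dev_cnt[j]
    return num / den if den else None
```

Since every input is local (firsthand scores and reports from already
trusted peers), each peer can run this without any global view of the
network; a peer nobody in your trusted set has scored simply gets no
prediction.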

-- 
Tony Arcieri
_______________________________________________
p2p-hackers mailing list
[email protected]
http://lists.zooko.com/mailman/listinfo/p2p-hackers
