On Tue, Oct 22, 2013 at 5:53 PM, DataPacRat <[email protected]> wrote:

> On Tue, Oct 22, 2013 at 5:02 PM, Phillip Hallam-Baker <[email protected]>
> wrote:
> > On Tue, Oct 22, 2013 at 4:14 PM, DataPacRat <[email protected]>
> wrote:
>
> >> I could suggest that the values be interpreted in terms of Laplace's
> >> sunrise formula - e.g., "there have been 10 reports of the key being
> >> used falsely and 500,000 reports that it has been used successfully:
> >> Do you wish to continue?".
> >
> > This is why I would not attempt to use Bayesian logic.
> >
> > You have no way to measure probability reliably. An attacker can simulate
> > any behavior before they defect. The only measure that is useful is the
> cost
> > of simulating that behavior. If it is prohibitively high then we can
> decide
> > to trust them.
> >
> > Remember that Bernie Madoff paid out 100% of every redemption request
> right
> > up to the point where the money ran out.
>
> One thing using Bayesian/Laplacian numbers /can/ do is indicate how
> much effort would need to have been exerted in order to simulate the
> behaviour. If implemented correctly, then, put simply, you can't get to
> 40 decibans of confidence without having had 10,000 successful tests
> for every failed test.
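The deciban arithmetic in the quoted claim can be sketched as follows. This is a minimal illustration, not code from the thread: the function names are mine, and it applies Laplace's rule of succession, P(success) = (s + 1) / (s + f + 2), then expresses the implied odds in decibans (10 * log10 of the odds). Note that 40 decibans corresponds to odds of 10,000:1, which matches the quoted figure.

```python
import math

def laplace_estimate(successes, failures):
    """Laplace's rule of succession: P(next success) = (s + 1) / (s + f + 2)."""
    return (successes + 1) / (successes + failures + 2)

def decibans(successes, failures):
    """Confidence in decibans: 10 * log10 of the odds implied by the estimate."""
    p = laplace_estimate(successes, failures)
    return 10 * math.log10(p / (1 - p))

# 9,999 successes and no failures give smoothed odds of 10,000:1,
# i.e. approximately 40 decibans.
print(decibans(9999, 0))

# The scenario from the quoted mail: 500,000 successful uses, 10 failures.
print(decibans(500_000, 10))
```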


Like many powerful theoretical tools, Bayesian inference is perfect in
theory but useless for many of the purposes people try to put it to.

The problem with any analysis based on probability is that confidence can
only diminish as the distance between nodes increases. The problem with the
Web of Trust is that the generation loss between key signings limits the
diameter of the trust graph, and the amount of effort users will bear
limits the degree.

Taken together, these give a maximum size for the trust graph: the Moore
bound on the number of nodes in a graph of that degree and diameter.
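For concreteness, the Moore bound can be computed as below. This is my own illustrative sketch, with the degree and diameter values in the example chosen purely for illustration (they are not figures from the thread): if each user will sign at most d keys and trust is only meaningful within k hops, the graph can contain at most 1 + d * sum_{i=0}^{k-1} (d-1)^i nodes.

```python
def moore_bound(degree, diameter):
    """Moore bound: maximum number of nodes in a graph with the given
    maximum degree d and diameter k."""
    if degree <= 2:
        return 2 * diameter + 1  # degenerate case: a cycle
    # 1 + d * sum_{i=0}^{k-1} (d-1)^i
    return 1 + degree * sum((degree - 1) ** i for i in range(diameter))

print(moore_bound(3, 2))  # → 10 (the Petersen graph meets this bound)

# Hypothetical Web-of-Trust numbers: each user signs 5 keys, trust
# considered meaningful within 4 hops.
print(moore_bound(5, 4))
```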


-- 
Website: http://hallambaker.com/
_______________________________________________
perpass mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/perpass