Hello Peter --

I hate to be the one to tell you that the following is provably false:

"The unlying (sic) assumption here is that trust is a transitive relationship," 

Which leaves a bit of a gaping hole in your entire logical build...

"A trusts B" and "B trusts C" does not imply anything about "A trusting C".

You might still decide to act on this assumption, believing it to be true, but that does not make it so.

If trust is as transitive as you suggest, then it must also be true that if:
A trusts B, and B trusts C, and C trusts D, and ...; then A must also trust Z.

When you prove the latter true, then you have a chance at correctness.
But even that is only true for a chain of 25 relationships.
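For the record, the counterexample is easy to make concrete. A minimal Python sketch: a trust relation containing "A trusts B" and "B trusts C" but not "A trusts C" is perfectly consistent, so transitivity is an extra assumption, not a consequence.

```python
# Trust modeled as a set of directed pairs (truster, trusted).
# This relation contains (A, B) and (B, C) but not (A, C), and it is
# a perfectly valid trust relation -- it just isn't transitive.
trusts = {("A", "B"), ("B", "C")}

def is_transitive(relation):
    """True only if (x, y) and (y, z) always imply (x, z)."""
    return all(
        (x, z) in relation
        for (x, y) in relation
        for (y2, z) in relation
        if y == y2
    )

print(is_transitive(trusts))  # False: A trusts B, B trusts C, yet A does not trust C
```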

If this assumption is true, then you must be a very gullible person, 
which I somehow seriously doubt;-)...

Cheers...\Stef

At 11:56 -0700 5/28/03, Peter Deutsch wrote:
>g'day,
>
>Christian Huitema wrote:
>...
> > There is an obvious issue with the protocol route: from a protocol point
> > of view, it is quite hard to distinguish unsolicited commercial e-mail,
> > which we would label spam, and unsolicited acceptable e-mail, which
> > could be more than welcome.
>
>I've sat on this posting, given both the previous volume of the thread,
>and the danger of getting too far into implementation discussions on a
>general list, but I've seen several postings to the effect that "it's
>hard to see what we could do at the protocol level". Personally, I think
>the problem would benefit from a little lateral thinking, so here's my
>cut at it. Those who tire of my ramblings, and don't want a quote from
>Tom Lehrer and allusions to Queen Elizabeth and Francis Drake, should
>hit "n" now...
>
>
>Concepts such as Hashcash or other payment-oriented systems, in which
>you try to impose a cost on the sender to screen out bulk mailers, are
>interesting enough, but I think they're addressing the wrong problem.
>I've personally come to the conclusion that to address this problem
>(that is, the decision as to whether I want to accept a message from
>you), I don't actually need to know who you are, or even what you're
>trying to send me, and I certainly don't need to impose artificial costs
>on you (since this looks too much like punishing the innocent for the
>crimes of the guilty).
>
>The heart of the issue seems to be:
>
>    What I really need to decide is whether I
>    should trust you enough to start accepting
>    unsolicited email from you (or more generally,
>    to start interacting with you for purpose x).
>
>
>I don't know what Paul Vixie meant by his allusion to a
>trusted-introducer model but I'd love to hear more about that, since
>I've been working on something that sounds a lot like that and I'd like
>to compare notes and see how much reinventing of the wheel I'm
>currently doing (and to thrash the deceased equine a bit, mail I sent to
>him died in his maze of twisty little email filters, all the same, so we
>seem to have yet another existence proof that this problem is imposing
>costs on us and hurting our ability to use the Internet for useful work.
>To quote Tom Lehrer, "but I digress").
>
>
>One way I can learn whether I want to interact with you is to invest
>some time building up a trust relationship with you (verifying you are
>who you say you are, etc). Another way would be for me to ask people I
>already trust who might know you to issue you a reference.
>
>My conclusion is that we need something along the lines of a
>peer-to-peer trust authentication token system which would allow the
>first time caller to authenticate him/herself to new MTAs by allowing
>him/her to demonstrate his/her prior participation in existing trust
>relationships. The idea is to build something more oriented towards
>peer-to-peer relationship building, rather than the traditional PKI
>central authority model we've seen so far.
>
>In practice, I imagine it would be most practical, if we want to
>interact with the traditional store-and-forward email system, for my
>friends to issue signed, non-forgeable "trust tokens" (I think I called
>them "letters of marque" in a recent posting, since they convert
>"pirates" to "honest businessmen and women", just as Queen Elizabeth did
>with Francis Drake... ;-) The redeemer would hand a set of these trust
>tokens to me on first contact (you could embed them in the mail header,
>or we'd negotiate which trust tokens I accept when you connect, or I'd
>publish a list in "son-of-DNS" which trust groups I'm currently
>trusting, etcetcetc).
>
>When I receive such a (set of) trust token(s), I'd check them to see if
>they really do come from folks I trust and otherwise satisfy my current
>policy requirements (e.g. I may need at least three valid tokens, or I
>may need at least two, including one from somebody in Sweden, etc), and
>if so I could start granting the bearer interaction privileges. *** The
>unlying assumption here is that trust is a transitive relationship, *** 
>and thus I'd allow myself to build up webs of such trust relationships
>by accepting references and references from references, so I wouldn't
>have to rely solely on a single, centralized PKI infrastructure.
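[Editorial sketch: the redeemer-side check described above, in Python. Token validation (signature checking) is assumed to have happened already; the field names are illustrative, and the sample policy (three valid tokens, or two including a Swedish issuer) comes straight from the example in the text.]

```python
# A hedged sketch of the redeemer's policy check. Each token is a dict
# with the issuer and that issuer's country; signature validity is
# assumed to have been checked before this point.

def satisfies_policy(valid_tokens):
    # Policy from the text: at least three valid tokens...
    if len(valid_tokens) >= 3:
        return True
    # ...or at least two, including one from somebody in Sweden.
    if len(valid_tokens) >= 2 and any(t["country"] == "SE" for t in valid_tokens):
        return True
    return False

tokens = [
    {"issuer": "alice@example.org", "country": "SE"},
    {"issuer": "bob@example.net", "country": "CA"},
]
print(satisfies_policy(tokens))  # True: two tokens, one from a Swedish issuer
```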
>
>Now, in practice, I'd like to be able to adjust my policies based upon
>the quality of your references (say, requiring x references from group
>a, but only y references from group b to begin transacting). This has
>the beneficial side effect of putting the policy decision making into
>the hands of the grantor of privileges. All I require from the issuer of
>certificates is that I can prove they come from the issuer, and that
>they contain certain information (such as issuer, issue date, TTL, etc)
>- the decision about what to do when they're valid rests with the
>redeemer.
>
>Obviously I'd also need to be able to issue my own letters of marque,
>once I've developed sufficient trust in you (say, after I've experienced
>z satisfactory interactions without incident, or maybe when I click a
>button in my MUA because I like what I'm getting). Instinct tells me
>this "degree of good behaviour" could be quantified and automated, but
>"electing to start trusting" is not something you need to do on every
>message so need not be computationally easy. As a sidebar, the policies
>I use in granting my trust certificates would go far in determining to
>what degree they have later redemption value for others.
>
>And finally, I'd like these letters of marque to be revocable in
>something close to real time, since sometimes you get it wrong, and
>sometimes folks go bad. This doesn't necessarily mean I want to be
>contacted for every redemption (in fact, sometimes this will be
>impractical) but assigning such tokens a TTL and/or a refresh time and
>otherwise allowing some controls over redemption policy seems to be a
>good idea.
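[Editorial sketch: the TTL idea reduces to a purely local check, so revocation happens passively when the redeemer refuses expired tokens. Field names and values here are illustrative, not a proposed format.]

```python
import time

# A token carries its issue time and a time-to-live; the redeemer
# rejects anything past its expiry without contacting the issuer.

def is_live(token, now=None):
    now = time.time() if now is None else now
    return now < token["issued_at"] + token["ttl"]

# Illustrative token: issued at epoch second 1,000,000 with a one-day TTL.
token = {"issuer": "alice@example.org", "issued_at": 1_000_000, "ttl": 86_400}
print(is_live(token, now=1_000_000 + 3_600))   # True: only an hour old
print(is_live(token, now=1_000_000 + 90_000))  # False: past the one-day TTL
```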
>
>Signing a record consisting of an ID, date and some other useful data
>with a private key seems to be just the ticket here, since it can be
>rapidly decrypted and checked but not easily forged. There is probably a
>lot more to it than this, since I probably don't care much about the
>cost of signing (something I won't do on every interaction, only when
>agreeing to issue a new letter of marque) but I definitely want forging
>to be hard, and verifying to be cheap. This is classic public key stuff,
>although it looks like it would be useful if you could show *both* that
>you're Alice, *and* that you're a member of a specific group (i.e. you
>need to transmit two secrets, one you own and one you share with your
>"trust buddies"). This may be nothing more than using a group key to sign
>your ID within your token (making it a secret shared only by the trust
>ring), but it might also be nice if you can perhaps automatically get
>the group secret when you've been accepted by enough members of the
>group.
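[Editorial sketch: the "group key" half of that two-secret idea, using a shared-secret MAC from the Python standard library. This stands in for the group proof only; the per-sender identity proof would still need a real public-key signature, which the stdlib doesn't provide. The key value is illustrative.]

```python
import hashlib
import hmac

# Members of a trust ring share a secret key and MAC the bearer's ID
# with it; anyone holding the group key can verify membership cheaply.
GROUP_KEY = b"shared-by-the-trust-ring"  # illustrative value only

def group_sign(member_id: str) -> str:
    """Tag a member ID with the ring's shared secret."""
    return hmac.new(GROUP_KEY, member_id.encode(), hashlib.sha256).hexdigest()

def group_verify(member_id: str, tag: str) -> bool:
    """Constant-time check that the tag matches this member ID."""
    return hmac.compare_digest(group_sign(member_id), tag)

tag = group_sign("alice@example.org")
print(group_verify("alice@example.org", tag))    # True
print(group_verify("mallory@example.org", tag))  # False: tag doesn't transfer
```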
>
>I remember work from the crypto community about splitting a secret into
>n shares (that is, you need at least a threshold k of the shares to
>recreate it)
>that could be relevant here, although I haven't gone back and tracked it
>down yet. The idea would be that as you collect references, you collect
>pieces of the secret and when enough folks trust you, you have the info
>needed to start generating and issuing your own certificates as part of
>the club. In my notes on this, I make reference to being able to
>"blackball" a potential member, since issuing him an invalid token in
>effect keeps him out of the club. This area needs more thought, but it
>holds the promise of automating the implementation of membership
>management, which could be nice...
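[Editorial sketch: the crypto work recalled above is threshold secret sharing, best known as Shamir's scheme: the secret becomes the constant term of a random degree k-1 polynomial over a prime field, any k shares recover it by Lagrange interpolation, and fewer reveal nothing. The prime below is illustrative.]

```python
import random

P = 2**127 - 1  # a Mersenne prime, comfortably larger than the secret

def make_shares(secret, k, n):
    """Split `secret` into n shares, any k of which recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over any k distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=42, k=3, n=5)
print(recover(shares[:3]))  # 42: any three of the five shares suffice
```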
>
>
>I'm still chewing on what's suitable stuff to put into the token, but it
>lends itself to a fair number of things, when you think about it. You
>might add in there info about which domains to allow, which user names,
>which protocols to allow, etc. The idea is to delineate the areas and 
>scope for which trust would apply.
>
>The storage of the needed public keys probably could go into a "son of
>DNS" easily enough. Extensions to SMTP to request and redeem tokens
>obviously would need some thought, although I suspect you could embed
>the tokens in the mail header easily enough and thus perhaps not have to
>mess with the wire protocol at all, at least in initial implementations
>(you'd still need some way of requesting and/or issuing new trust
>tokens, but that could be taken out-of-band to another protocol).  It
>might even be better to make this a part of a whole new protocol, since
>as Paul points out, the current architecture appears to have some
>fundamental flaws and it may be time to stop patching it up and get on
>with the new architecture. This is where I'd like to calibrate myself
>against what Paul alluded to, to see where he was going with that.
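[Editorial sketch: the header-embedding route, using Python's stdlib email package. The X-Trust-Token header name and its field layout are hypothetical, purely for illustration, not a proposed standard.]

```python
from email.message import EmailMessage

# A first-contact message carrying a trust token as an ordinary header,
# so nothing on the wire protocol has to change.
msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "peterd@example.net"
msg["Subject"] = "hello"
msg["X-Trust-Token"] = "issuer=bob@example.net; issued=2003-05-28; ttl=86400; sig=ab12cd"
msg.set_content("First contact, bearing references.")

def extract_tokens(message):
    """Parse every X-Trust-Token header into a dict of its fields."""
    tokens = []
    for raw in message.get_all("X-Trust-Token", []):
        fields = dict(part.strip().split("=", 1) for part in raw.split(";"))
        tokens.append(fields)
    return tokens

print(extract_tokens(msg)[0]["issuer"])  # bob@example.net
```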
>
>Note that this idea would have some interesting side effects (or should
>I call them "social effects"?) You begin life in a cold, dark, lonely
>world with little trust and less functionality but over time, you build
>up useful relationships which can serve you well. Your "coin of the
>realm" here is your own credibility, so trying to pass forged trust
>tokens rapidly pollutes your swimming pool, while valuable trust
>relationships would suck in more trustworthy folks and benefit from
>positive networking effects. Today's great mailing list would be
>tomorrow's "trust ring", where any mail from anyone trusted by your
>"list buddies" is accepted, mail from random Taiwanese MTAs is rejected
>or tarpitted.
>
>The guy who issues the certificates that start the flood of spam into
>your mailbox gets deprecated as an authority with a click of a button.
>The spammer (or legitimate stranger) who can't prove his bona fides gets
>shunted to the slow lane over near the tarpits until he can prove
>himself over time. And over time, you can elect to promote well-behaved
>strangers to friends, expanding your trust ring.
>
>My instinct tells me there's something here worth pursuing, given the
>magnitude and growth rate of the current spam problem, but I doubt that
>the IETF general mailing list is a place to work on this, so I'm
>interested to see where the discussion goes once the flames die
>down....
>
>As I said, I've done some digging and found nothing exactly like this,
>but Paul's casual remark suggests I'm missing something basic in the
>literature (admittedly I haven't done an exhaustive search yet).
>Pointers to the obvious work, or pointers to the obvious holes, would be
>most welcome. And of course, pointers to the best mailing list for
>follow-ups are probably a *really* good idea....
>
>
>
>                        - peterd
>
>
>
>
>
>-- 
>---------------------------------------------------------------------
>    Peter Deutsch                       [EMAIL PROTECTED]
>    Gydig Software
>
>                        "Bungle..."
>               "That's an 'i', you idiot..."
>                  "Oh, right. 'Bingle...'"
>
>                            - Red versus Blue...
>
>---------------------------------------------------------------------

