[ On Friday, August 11, 2000 at 10:21:15 (-0500), David Thornley wrote: ]
> Subject: Re: cvs-nserver and latest CVS advisory
>
> So, in Justin's case (and obviously not in Greg's), the additional
> accountability from SSH is not that important, and the barrier it
> presents is, in Justin's opinion, a more serious problem.
Justin doesn't think it's important.  However, without sufficient
accountability there are many, many ways to compromise the reputation of
a product and/or its developers.  He's protecting the wrong things, and
even though he's given value assessments of those things, he's still not
seeing that he's left his most valuable asset totally unprotected, and
he's wasted enormous effort protecting the things he values least!
In an employment contract scenario, accountability can be attained in
many ways, such as by restricting developers to a private, secure LAN;
and of course authentication is secure because you really do know their
true identity and you have a legally binding contract with them.
However in an open source development project, which almost by
definition (and certainly in Justin's case) runs over a public Internet,
there is no way to establish any real accountability without using
strong authentication and authorisation, and without protecting the
integrity of the intermediate communications channel.
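To make that concrete: the usual way to get strong authentication and
channel integrity for remote CVS is to tunnel it over SSH with the ":ext:"
access method.  A minimal sketch (the hostname, repository path, and
module name in the comment are placeholders, not real values):

```shell
# Tell the CVS client to use ssh(1) as its remote shell instead of rsh(1).
export CVS_RSH=ssh

# A checkout then runs over an authenticated, encrypted channel, e.g.
# (hostname, repository path, and module name are hypothetical):
#   cvs -d :ext:user@cvs.example.org:/cvsroot checkout mymodule
echo "CVS_RSH=$CVS_RSH"
```

With key-based SSH authentication every commit is then tied to a key
whose holder you have (or should have) identified in the real world.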
What I think you and Justin have both missed/ignored is the fact that
strong authentication means one hell of a lot more than just e-mailing a
password blindly to some more or less random e-mail address. As others
have explained already, if you grant trust without establishing some
basis in the real world for doing so then you've just put all your eggs
in a basket with no physical integrity. Good luck, too bad, so sad.
I.e., before you give me a password you *MUST* establish not only my
e-mail address(es), but also any other attribute(s) about me that you
might be able to independently corroborate with enough real-world facts
so that you can be sure you've got the right guy.  Trust and
accountability are based on real-world *identity* -- if you can't
establish a real-world identity of someone then you can't securely
authenticate that identity and you can't make anyone accountable for the
actions performed by a virtual system identity. Security policies are
not just technical controls such as passwords, permissions, and
encryption and so on! You can't hold me accountable for my actions if
you don't know who I really am. I've been treating these concepts as
well understood, but obviously they're not.
I'd suggest anyone interested in learning the basis for this stuff read
the famous "Orange Book" and perhaps the O'Reilly book "Computer
Security Basics" and/or some other more general overview too. The
concepts of authentication and accountability, and the importance of the
integrity of the trusted computing base, are fundamental to understanding
systems security.  Through the careful application of public key
cryptography and more general encryption we can now safely and securely
connect trusted computing systems over untrusted networks.  This fact,
together with a firm understanding of authentication and accountability,
allows us to implement secure public client/server systems.
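As one concrete instance of tying a virtual identity to a real-world
one, an SSH key pair plus an out-of-band fingerprint check is a standard
technique.  A sketch, assuming OpenSSH is available (the file name is
arbitrary, and a real key should of course carry a passphrase):

```shell
# Generate a key pair for a developer.  The empty passphrase here is
# only to keep the example non-interactive.
ssh-keygen -t ed25519 -N "" -q -f ./devkey

# Print the public key's fingerprint.  The project admin compares this
# value with the developer over an independent channel (phone, in
# person, signed mail) before granting any repository access.
ssh-keygen -l -f ./devkey.pub
```

The fingerprint comparison is the step that binds the key, and thus
every action authenticated by it, to a person you can hold accountable.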
Misunderstanding any of these things, or worse, applying any technical
controls for the wrong reasons, will lead to insecure systems.
Knowingly doing so and claiming the result is more secure is unethical.
BTW, *how* you hold someone accountable is an entirely different
subject, and it's something that you need to write into your security
policy.  However, whether you just dismiss that person, sue them,
charge them under the criminal justice system, perform a reverse attack
on them, or _whatever_, you *MUST* know their real-world identity in
order to do it!
> Again, if somebody would like to change Mac and Wintel CVS clients
> to use SSH, presumably protocol 2, that would be very useful. Until
> it happens, SSH is going to be a problem for people not working off
> Unix platforms. As long as that is the case, people like Justin have
> to make some hard decisions as to what policy will work best.
If I were in Justin's boots and I didn't have the ability to choose SSH
no matter how viable it might be, and I really didn't give a hoot about
the identity of my developers, and if I were 100% sure I could verify
the integrity of the resulting product independently, and of course if I
didn't highly value the time I would risk wasting if I had to rebuild
and restore the system(s) used for this, then I'd probably just set up
an environment like that at labs.pulltheplug.com and give everyone an
effectively anonymous guest shell account with no accountability
whatsoever. There are a lot of "if's" there, but that's the way he
wrote the rules, so that's the way it is..... :-)
What Justin is also doing, and what's unethical about it, is that he's
promoting a totally false sense of security, and one that's known to be
false! Yes it's true that I did not do the ethical thing and force
cvspserver to be ripped out of CVS long ago, but I think I've owned up
to that mistake by now....
Note that with a totally open setup like that at pulltheplug.com (or
indeed what was supposedly once in vogue at MIT), the trust is implicit
in the community and so long as there are no really bad apples who spoil
all the fun for everyone, good things can happen.
--
Greg A. Woods
+1 416 218-0098 VE3TCP <[EMAIL PROTECTED]> <robohack!woods>
Planix, Inc. <[EMAIL PROTECTED]>; Secrets of the Weird <[EMAIL PROTECTED]>