On 18/11/13 10:27 AM, ianG wrote:
In the Crypto-Gram sent over the weekend, Bruce Schneier talks about how
to design protocols to stop backdoors.  Comments?


To respond...

https://www.schneier.com/blog/archives/2013/10/defending_again_1.html

Design Strategies for Defending against Backdoors

With these principles in mind, we can list design strategies. None of
them is foolproof, but they are all useful. I'm sure there's more; this
list isn't meant to be exhaustive, nor the final word on the topic. It's
simply a starting place for discussion. But it won't work unless
customers start demanding software with this sort of transparency.

     Vendors should make their encryption code public, including the
protocol specifications. This will allow others to examine the code for
vulnerabilities. It's true we won't know for sure if the code we're
seeing is the code that's actually used in the application, but
surreptitious substitution is hard to do, forces the company to outright
lie, and increases the number of people required for the conspiracy to
work.


I think this is unlikely. The reasons for keeping code proprietary are many and varied; it isn't just one factor. Also, efforts by companies to deliver "open reference source" alongside pre-built binaries have not produced clear proof of no manipulation: it is often too difficult to reproduce the build process, so the shipped binary cannot be matched to the published source.

One of the slides indicated how many Google protocols the NSA had built engines for; a big operation does have a lot of internal protocols. There are reasons for this, including security reasons.


     The community should create independent compatible versions of
encryption systems, to verify they are operating properly. I envision
companies paying for these independent versions, and universities
accepting this sort of work as good practice for their students. And
yes, I know this can be very hard in practice.


This is the model the IETF follows: it requires two independent implementations. Yet with that requirement comes a committee and long debates over minor changes, a process many have criticised as doing more harm than good.

This argument was used to wrest control of SSL away from Netscape. Did the result make us any safer? Not really. Netscape's internal SSL v1 had its bugs, but it rested heavily on opportunistic encryption, which would have given us a much bigger defence against the NSA's mass surveillance programme.

The jury is still out on whether the CA/PKI system is a net benefit or a net loss. What would happen if the next set of Snowden revelations were to show evidence that the NSA promoted the PKI as a vulnerability?

What would happen if we just handed change management of SSL over to Google (as a hypothetical pick)? Would they do a better or worse job than PKIX?


     There should be no master secrets. These are just too vulnerable.

OK.

     All random number generators should conform to published and
accepted standards. Breaking the random number generator is the easiest
difficult-to-detect method of subverting an encryption system. A
corollary: we need better published and accepted RNG standards.


But: in practice the RNG is typically supplied by the OS, not by the protocol or application designer.

What seems more germane would be to use the OS's random numbers but also augment them, in case of flaws. Local randomness exists, and it is typically available at the application level in ways it is not available at the OS level.

If we had a simple mixer-and-whitener design that degraded neither the quality of the OS source nor that of the local source, surely this would be far better than what we got from NIST et al?
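To illustrate the point, here is a minimal sketch of such a mixer, not a vetted design: each source is hashed independently and the streams are XOR-combined, so the output is at least as unpredictable as the stronger source and a flaw in one cannot drag the other down. The function name and the caller-supplied local-entropy pool are my own illustration, not any standard API.

```python
import hashlib
import os
import time

def mixed_random(n, local_entropy=b""):
    """Return n bytes mixed from the OS RNG and a local entropy source.

    Each source is expanded through SHA-256 separately, then the two
    streams are XORed.  Because XOR with an independent stream cannot
    reduce unpredictability, a weak or subverted source on one side
    does not disturb the quality of the other.
    """
    # Hypothetical local pool: caller-supplied bytes plus a timing sample.
    pool = local_entropy + time.perf_counter_ns().to_bytes(8, "big")
    os_stream = b""
    local_stream = b""
    counter = 0
    while len(os_stream) < n:
        c = counter.to_bytes(4, "big")
        # Domain-separated hashing of each source, one block at a time.
        os_stream += hashlib.sha256(b"os" + c + os.urandom(32)).digest()
        local_stream += hashlib.sha256(b"local" + c + pool).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(os_stream[:n], local_stream[:n]))
```

A real design would keep a stateful local pool and fold in more sources; the sketch only shows the non-disturbing combination step.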


     Encryption protocols should be designed so as not to leak any
random information. Nonces should be considered part of the key or
public predictable counters if possible. Again, the goal is to make it
harder to subtly leak key bits in this information.


Right, that I agree with. Packets should be deterministically created by the sender, and they should be verifiable by the recipient.
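As a sketch of that idea: instead of putting fresh random bytes on the wire, derive each nonce from the session key and a public message counter. Both ends can recompute it, so the packet carries no random field through which key bits could be subliminally leaked. The function name and the 12-byte length are illustrative assumptions, not taken from any particular protocol.

```python
import hmac
import hashlib

def deterministic_nonce(key, seq, length=12):
    """Derive a per-message nonce from the session key and a public
    sequence number, rather than sending random bytes.

    The recipient recomputes the same value from (key, seq), so any
    deviation by the sender is detectable -- there is no channel here
    for leaking key material.
    """
    msg = b"nonce" + seq.to_bytes(8, "big")
    return hmac.new(key, msg, hashlib.sha256).digest()[:length]
```

The usual caveat applies: a nonce derived this way must never repeat for the same key, which the monotonic sequence number guarantees as long as the counter is never reset.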



But overall, when it comes down to it, I think the defence against backdoors is not really going to be technical. I think it is more likely to come down to attitude.

The way I see things, the chances of a backdoor in, say, Silent Circle are way down, whereas the chances of a backdoor in Cisco are way up. Cisco could do all the things above, and more, and it would still not increase my faith. SC could do none of the things above, and I'd still have faith.

I think we are still waiting to see which companies in the USA are actually going to stand up and fight. Some signs have been seen, but in the aggregate we're still at the first stage of grief -- denial. In the aggregate, it seems that the Internet has just slipped back to the old international telco days -- every operator is a national champion, and is in bed with their national government.

Pure tech or design can't change that. Only people can change it, by making deployment decisions.



iang
_______________________________________________
cryptography mailing list
[email protected]
http://lists.randombit.net/mailman/listinfo/cryptography
