Re: [cryptography] Design Strategies for Defending against Backdoors

2013-11-21 Thread CodesInChaos
>> Right, that I agree with.  Packets should be deterministically created by
>> the sender, and they should be verifiable by the recipient.
>
> Then you lose the better theoretical foundations of probabilistic signature
> schemes ...

If you drop receiver verification as a requirement, you can derive the
salt deterministically from the private key and the message hash. Such a
salt offers most of the advantages of a random salt, without needing any
actual randomness. For DSA/Schnorr we already have schemes that work this
way. In principle we could apply the same technique to RSA-PSS as well.
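
A minimal sketch of that derivation (assuming HMAC-SHA-256 as the PRF;
this simplifies real schemes such as RFC 6979 or Ed25519's nonce
derivation, and all names are illustrative):

import hashlib
import hmac

def deterministic_salt(private_key: bytes, message: bytes,
                       salt_len: int = 32) -> bytes:
    """Derive a signature salt from the private key and the message
    hash instead of sampling it from an RNG: the same (key, message)
    pair always yields the same salt, so no RNG output reaches the
    wire, yet the salt is unpredictable to anyone without the key."""
    msg_hash = hashlib.sha256(message).digest()
    # HMAC keyed with the private key acts as a PRF over the message hash.
    return hmac.new(private_key, msg_hash, hashlib.sha256).digest()[:salt_len]

Note the trade-off stated above: the recipient cannot re-derive the salt,
since it depends on the private key, which is why receiver verification
has to be dropped.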

Personally I avoid randomness wherever possible. Not because of worries
about backdoors, but because deterministic code is easier to use and test.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Design Strategies for Defending against Backdoors

2013-11-19 Thread Thierry Moreau

ianG wrote:

On 18/11/13 20:58, Thierry Moreau wrote:

ianG wrote:

On 18/11/13 10:27 AM, ianG wrote:

In the cryptogram sent over the weekend, Bruce Schneier talks about how
to design protocols to stop backdoors.  Comments?



To respond...


https://www.schneier.com/blog/archives/2013/10/defending_again_1.html

Design Strategies for Defending against Backdoors



...


 Encryption protocols should be designed so as not to leak any
random information. Nonces should be considered part of the key or
public predictable counters if possible. Again, the goal is to make it
harder to subtly leak key bits in this information.



Right, that I agree with.  Packets should be deterministically created
by the sender, and they should be verifiable by the recipient.



Then you lose the better theoretical foundations of probabilistic
signature schemes ...



If you're talking here about an authenticated request, that should be 
layered within an encryption packet, IMHO; it should be the business 
content.




To clarify the original recommendation, is it correct to assume that the 
goal is to avoid subliminal channels through which key bits may be leaked?


If so, I don't see how a business-content subliminal channel is a 
lesser concern than a signature-salt-field subliminal channel.
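
To make the concern concrete, here is a toy illustration (hypothetical,
for exposition only) of how a backdoored implementation could smuggle
secret bits through any field that is nominally random, whether a
signature salt or business content: it simply re-samples the value until
some of its bits match the bits to be exfiltrated.

import os

def leaky_salt(secret: bytes, index: int, salt_len: int = 16) -> bytes:
    """Toy subliminal channel: grind a 'random' salt until its last
    byte equals one byte of the secret being exfiltrated.  Each use
    leaks one byte; to anyone who does not know the trick, the salt
    is indistinguishable from random."""
    target = secret[index % len(secret)]
    while True:
        salt = os.urandom(salt_len)
        if salt[-1] == target:
            return salt

# An accomplice watching the wire reads salt[-1] from each message and
# reassembles the secret after enough messages.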


Defending against backdoors without inspecting implementation details 
appears (to put it euphemistically) challenging.



iang





--
- Thierry Moreau

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Design Strategies for Defending against Backdoors

2013-11-18 Thread coderman
On Sun, Nov 17, 2013 at 11:27 PM, ianG i...@iang.org wrote:
 In the cryptogram sent over the weekend, Bruce Schneier talks about how to
 design protocols to stop backdoors.  Comments?
...
 All random number generators should conform to published and accepted
 standards. Breaking the random number generator is the easiest
 difficult-to-detect method of subverting an encryption system. A corollary:
 we need better published and accepted RNG standards.


Intel still has not provided raw access to its entropy source: RDRAND
and RDSEED output both pass through the conditioner (AES-CBC-MAC), and
RDRAND is additionally processed by an AES CTR_DRBG (per NIST SP 800-90A).

Anything less than raw access to the entropy source samples inspires
no confidence...
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Design Strategies for Defending against Backdoors

2013-11-18 Thread ianG

On 18/11/13 10:27 AM, ianG wrote:

In the cryptogram sent over the weekend, Bruce Schneier talks about how
to design protocols to stop backdoors.  Comments?



To respond...


https://www.schneier.com/blog/archives/2013/10/defending_again_1.html

Design Strategies for Defending against Backdoors

With these principles in mind, we can list design strategies. None of
them is foolproof, but they are all useful. I'm sure there's more; this
list isn't meant to be exhaustive, nor the final word on the topic. It's
simply a starting place for discussion. But it won't work unless
customers start demanding software with this sort of transparency.

 Vendors should make their encryption code public, including the
protocol specifications. This will allow others to examine the code for
vulnerabilities. It's true we won't know for sure if the code we're
seeing is the code that's actually used in the application, but
surreptitious substitution is hard to do, forces the company to outright
lie, and increases the number of people required for the conspiracy to
work.



I think this is unlikely.  The reasons for proprietary code are many and 
varied; it isn't just one factor.  Also, efforts by companies to deliver 
open reference source alongside pre-built binaries have not resulted in 
clear proof of no manipulation;  it is often too difficult to reproduce a 
build process.


One of the slides indicated how many Google protocols the NSA had 
built engines for;  a big operation does have a lot of internal 
protocols.  There are reasons for this, including security reasons.




 The community should create independent compatible versions of
encryption systems, to verify they are operating properly. I envision
companies paying for these independent versions, and universities
accepting this sort of work as good practice for their students. And
yes, I know this can be very hard in practice.



This is the model that the IETF follows.  They require two independent 
implementations.  Yet with that requirement comes a committee and long 
debates about minor changes, which many have criticised as doing more 
harm than good.


This argument was used to wrest control of SSL away from Netscape.  Did 
the result make us any safer?  Not really.  Even though there were bugs 
in the internal SSL v1, it rested heavily on opportunistic encryption, 
which would have given us a much bigger defence against the NSA's mass 
surveillance programme.


The jury is still out on the question of whether the CA/PKI system is a 
net benefit or a loss.  What would happen if the next set of Snowden 
revelations were to show evidence that the NSA promoted the PKI as a 
vulnerability?


What would happen if we just handed the change management of SSL over 
to Google (as a hypothetical pick)?  Would they do a worse job or a 
better job than PKIX?




 There should be no master secrets. These are just too vulnerable.


OK.


 All random number generators should conform to published and
accepted standards. Breaking the random number generator is the easiest
difficult-to-detect method of subverting an encryption system. A
corollary: we need better published and accepted RNG standards.



But:  the RNG is typically supplied by the OS, etc.

What seems more germane would be to use the OS's random numbers and also 
to augment them in case of flaws.  Local randomness exists, and it is 
typically available at the application level, in ways it is not 
available at the OS level.


If we had a simple mixer-and-whitener design that degraded neither the 
quality of the OS source nor that of the local source, surely this would 
be far better than what we got from NIST, et al?  (A sketch of such a 
mixer follows.)
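
A minimal sketch of that idea, assuming SHA-256 as the whitener,
os.urandom as the OS source, and a caller-supplied local source (names
illustrative; a sketch, not a vetted design):

import hashlib
import os

def mixed_random(local_entropy: bytes) -> bytes:
    """Hash OS randomness together with locally gathered entropy.

    SHA-256's output is unpredictable if *any* one input is
    unpredictable, so a weak or even adversarial local source cannot
    make the result worse than os.urandom alone, and vice versa."""
    return hashlib.sha256(b"mixer-v1"            # domain-separation label
                          + os.urandom(32)       # the OS's RNG
                          + local_entropy).digest()

Each call yields 32 fresh bytes; a real design would also handle larger
requests and reseeding, but the mixing property is the point here.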




 Encryption protocols should be designed so as not to leak any
random information. Nonces should be considered part of the key or
public predictable counters if possible. Again, the goal is to make it
harder to subtly leak key bits in this information.



Right, that I agree with.  Packets should be deterministically created 
by the sender, and they should be verifiable by the recipient.
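
One concrete (hypothetical) shape for this: make the nonce a public,
predictable message counter, so the sender has nothing random to choose
and the recipient can check that every field is exactly what it must be.
A minimal sketch, using an HMAC tag for authentication (all names
illustrative):

import hashlib
import hmac
import struct

def make_packet(key: bytes, counter: int, payload: bytes) -> bytes:
    """Every byte is a deterministic function of (key, counter,
    payload): there is no random nonce or salt for a backdoored
    implementation to hide bits in."""
    header = struct.pack(">Q", counter)   # public, predictable nonce
    tag = hmac.new(key, header + payload, hashlib.sha256).digest()
    return header + payload + tag

def verify_packet(key: bytes, expected_counter: int,
                  packet: bytes) -> bytes:
    """The recipient recomputes everything and rejects any deviation,
    including a counter that is not the expected next value."""
    if len(packet) < 8 + 32:
        raise ValueError("packet too short")
    header, payload, tag = packet[:8], packet[8:-32], packet[-32:]
    if struct.unpack(">Q", header)[0] != expected_counter:
        raise ValueError("unexpected counter")
    good = hmac.new(key, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, good):
        raise ValueError("bad tag")
    return payload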




But, overall, when it comes down to it, I think the defence against 
backdoors is not really going to be technical.  I think it is more 
likely to be attitude.


The way I see things, the chances of a backdoor in, say, Silent Circle 
are way down, whereas the chances of a backdoor in Cisco are way up.  
Cisco could do all the things above, and more, and it would still not 
increase my faith.  Silent Circle could do none of the things above, and 
I'd still have faith.


I think we are still waiting to see which companies in the USA are 
actually going to stand up and fight.  Some signs have been seen, but in 
the aggregate we're still at the first stage of grief -- denial.  It 
seems that the Internet has just slipped back to the old international 
telco days -- every operator is a national champion, in bed with its 
national government.


Pure tech or design can't change that.  Only people can change that.

Re: [cryptography] Design Strategies for Defending against Backdoors

2013-11-18 Thread Thierry Moreau

ianG wrote:

On 18/11/13 10:27 AM, ianG wrote:

In the cryptogram sent over the weekend, Bruce Schneier talks about how
to design protocols to stop backdoors.  Comments?



To respond...


https://www.schneier.com/blog/archives/2013/10/defending_again_1.html

Design Strategies for Defending against Backdoors



...


 Encryption protocols should be designed so as not to leak any
random information. Nonces should be considered part of the key or
public predictable counters if possible. Again, the goal is to make it
harder to subtly leak key bits in this information.



Right, that I agree with.  Packets should be deterministically created 
by the sender, and they should be verifiable by the recipient.




Then you lose the better theoretical foundations of probabilistic 
signature schemes ...




--
- Thierry Moreau

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Design Strategies for Defending against Backdoors

2013-11-17 Thread ianG
In the cryptogram sent over the weekend, Bruce Schneier talks about how 
to design protocols to stop backdoors.  Comments?


https://www.schneier.com/blog/archives/2013/10/defending_again_1.html

Design Strategies for Defending against Backdoors

With these principles in mind, we can list design strategies. None of 
them is foolproof, but they are all useful. I'm sure there's more; this 
list isn't meant to be exhaustive, nor the final word on the topic. It's 
simply a starting place for discussion. But it won't work unless 
customers start demanding software with this sort of transparency.


Vendors should make their encryption code public, including the 
protocol specifications. This will allow others to examine the code for 
vulnerabilities. It's true we won't know for sure if the code we're 
seeing is the code that's actually used in the application, but 
surreptitious substitution is hard to do, forces the company to outright 
lie, and increases the number of people required for the conspiracy to work.


The community should create independent compatible versions of 
encryption systems, to verify they are operating properly. I envision 
companies paying for these independent versions, and universities 
accepting this sort of work as good practice for their students. And 
yes, I know this can be very hard in practice.


There should be no master secrets. These are just too vulnerable.

All random number generators should conform to published and 
accepted standards. Breaking the random number generator is the easiest 
difficult-to-detect method of subverting an encryption system. A 
corollary: we need better published and accepted RNG standards.


Encryption protocols should be designed so as not to leak any 
random information. Nonces should be considered part of the key or 
public predictable counters if possible. Again, the goal is to make it 
harder to subtly leak key bits in this information.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography