Re: [Cryptography] The hypothetical random number generator backdoor

2013-09-25 Thread Alan Braggins
On 23 September 2013 01:09, Phillip Hallam-Baker hal...@gmail.com wrote:
 So we think there is 'some kind' of backdoor in a random number generator.
 One question is how the EC math might make that possible. Another is how
 might the door be opened.

Are you talking about http://en.wikipedia.org/wiki/Dual_EC_DRBG#Controversy
or hypothetical RNGs in general, maybe not even EC based?


 I was thinking about this and it occurred to me that it is fairly easy to
 get a public SSL server to provide a client with a session key - just ask to
 start a session.

For an RSA key exchange without ephemeral DH, the _client_ generates
the premaster secret from which the session key is derived.

However, ClientHello and ServerHello both contain random numbers sent
before key exchange. If you are intercepting traffic, you have a nonce generated
shortly before the session key generation for every key exchange, even without
starting sessions of your own.

Possibly you can use the client nonces to reduce the search space for
the session keys (and if it's an RC4 session key, maybe the biases in
RC4 help?). (Or, if using DHE, maybe it helps find DH private keys.)
(Or, if using DHE, maybe it helps find DH private keys.)

And possibly if you have server nonces based on the same PRNG seed as was
used when the RSA key was generated, you can search for the RSA key.
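As a concrete illustration of where those handshake nonces sit on the wire, here is a minimal sketch of pulling the 32-byte Random out of a captured plaintext TLS 1.x ClientHello record (layout per RFC 5246; assumes a single handshake record per TLS record and does no real validation):

```python
def client_random(record: bytes) -> bytes:
    """Extract the 32-byte Random from a plaintext TLS ClientHello record.

    Layout (RFC 5246): 5-byte record header, 4-byte handshake header,
    2-byte client_version, then 32 bytes of Random (a 4-byte
    gmt_unix_time followed by 28 bytes straight from the client's RNG).
    """
    assert record[0] == 0x16, "not a handshake record"
    assert record[5] == 0x01, "not a ClientHello"
    return record[11:43]
```

An eavesdropper applying this to every observed handshake gets a steady stream of fresh RNG output from both endpoints, which is exactly the concern above.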

-- 
alan.bragg...@gmail.com
http://www.chiark.greenend.org.uk/~armb/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] The hypothetical random number generator backdoor

2013-09-25 Thread Jerry Leichter
On Sep 22, 2013, at 8:09 PM, Phillip Hallam-Baker hal...@gmail.com wrote:
 I was thinking about this and it occurred to me that it is fairly easy to get 
 a public SSL server to provide a client with a session key - just ask to 
 start a session.
 
 Which suggests that maybe the backdoor [for an NSA-spiked random number 
 generator] is of the form that ... you get a lot of nonces [maybe just one] 
 from the random number generator ... and that allows the [next one to be 
 predicted more easily or even the] seed to be unearthed.  One simple way [to 
 stop this] would be to encrypt the nonces from the RNG under a secret key 
 generated in some other fashion. 
 
 nonce = E (R, k)
 
 Or hashing the RNG output and XORing with it 
 
 nonce = R  XOR H(R)
You shifted from "random value" to "nonce".  Given the severe effects on 
security of substituting a nonce - a value that is simply never repeated in a 
given cryptographic context; it may be predictable, even fixed - for a random 
value, one needs to be careful about the language.  (There's another layer as 
well, partly captured by "unpredictable value" but not really:  Is it a value 
that we must plan on the adversary learning at some point, even though he 
couldn't predict it up front, or must it remain secret?  The random values in 
PFS are only effective in providing forward security if they remain secret 
forever.)

Anyway, everything you are talking about here is *supposed* to be a random 
value.  Using E(R,k) is a slightly complicated way of using a standard PRNG:  
The output of a block cipher in counter mode.  Given (a) the security of the 
encryption under standard assumptions; (b) the secrecy and randomness of k; the 
result is a good PRNG.  (In fact, this is pretty much exactly one of the 
Indistinguishability assumptions.  There are subtly different forms of those 
around, but typically the randomness of input is irrelevant - these are 
semantic security assumptions so knowing something about the input can't help 
you.)  Putting R in there can't hurt, and if the way you got R really is random 
then even if k leaks or E turns out to be weak, you're still safe.  However ... 
where does k come from?  To be able to use any of the properties of E, k itself 
must be chosen at random.  If you use the same generator as you use to find R, 
it's not clear that this is much stronger than R itself.  If
you have some assured way of getting a random k - why not use it for R itself? 
 (This might be worth it if you can generate a k you believe in but only at a 
much lower rate than you can generate an R directly.  Then you can stretch k 
over a number of R values.  But I'd really think long and hard about what 
you're assuming about the various components.)

BTW, one thing you *must not* do is have k and the session key relate to each 
other in any simple way.

For hash and XOR ... no standard property of any hash function tells you 
anything about the properties of R XOR H(R).  Granted, for the hash functions 
we generally use, it probably has about the same properties; but it won't have 
any more than that.  (If you look at the structure of classic iterated hashes, 
the last thing H did was compute S = S + R(S), where S was the internal state 
and R was the round function.  Since R is usually invertible, this is the only 
step that actually makes the whole thing non-invertible.  Your more-or-less 
repetition of the same operation probably neither helps nor hinders.)

At least if we assume the standard properties, it's hard to get R from H(R) - 
but an attacker in a position to try a large but (to him) tractable number of 
guesses for R can readily check them all.  Using R XOR H(R) makes it no harder 
for him to try that brute force search.  I much prefer the encryption approach.
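Leichter's brute-force point is easy to demonstrate: an attacker who can enumerate candidate values of R checks each guess against an observed R XOR H(R) at essentially no extra cost. A toy sketch (a 2-byte R standing in for a weak generator's small output space; SHA-256 as H):

```python
import hashlib

def whiten(r: bytes) -> bytes:
    """The proposed nonce = R XOR H(R), with H = SHA-256 truncated to len(R)."""
    h = hashlib.sha256(r).digest()[:len(r)]
    return bytes(a ^ b for a, b in zip(r, h))

# A weak RNG effectively draws R from a small space - here, 16 bits.
secret_r = (12345).to_bytes(2, "big")
observed_nonce = whiten(secret_r)

# The attacker whitens each guess and compares; the XOR-with-hash step
# adds no work to the search.
recovered = next(
    g.to_bytes(2, "big")
    for g in range(2**16)
    if whiten(g.to_bytes(2, "big")) == observed_nonce
)
assert recovered == secret_r
```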

-- Jerry



[Cryptography] Hardware Trojan Protection

2013-09-25 Thread Bill Frantz
On 9/22/13 at 6:07 PM, leich...@lrw.com (Jerry Leichter) wrote 
in another thread:


Still, it raises the question:  If you can't trust your 
microprocessor chips, what do you do?  One possible answer:  
Build yourself a processor out of MSI chips.  We used to do 
that, not so long ago, and got respectable performance (if not, 
perhaps, on anything like today's scale).  An MSI chip doesn't 
have enough intrinsic computation to provide much of a hook for 
an attack.  Oh, sure, the hardware could be spiked - but to do 
*what*?  Any given type of MSI chip could go into many 
different points of many different circuit topologies, and 
won't see enough of the data to do much anyway.  There may be 
some interface issues:  This stuff might not be fast enough to 
deal with modern memory chips.  (How would you attack a memory 
chip?  Certainly possible if you're making a targeted attack - 
you can slip a small processor into the design to do all kinds 
of nasty things.  But commercial off-the-shelf memory chips are 
built right up to the edge of what we can make, so you can't 
change all that much.)

Some stuff is probably just impossible with this level of 
technology.  I doubt you can build a Gig-E Ethernet interface 
without large-scale integration.  You can certainly do the 
original 10 Mb/sec - after all, people did!  I have no idea if 
you could get to 100 Mb/sec.


Do people still make bit-slice chips?  Are they at a low-enough 
level to not be a plausible attack vector?


You could certainly build a respectable mail server this way - 
though it's probably not doing 2048-bit RSA at a usable speed.


We've been talking about crypto (math) and coding (software).  
Frankly, I, personally, have no need to worry about someone 
attacking my hardware, and that's probably true of most 
people.  But it's *not* true of everyone.  So thinking about 
how to build harder to attack hardware is probably worth the effort.


You might get a reasonable level of protection implementing the 
core of the crypto operations in a hardware security module 
(HSM) using Field Programmable Gate Arrays (FPGA) or Complex 
Programmable Logic Device (CPLD). There is an open source set of 
tools for programming these beasts based on Python called MyHDL 
www.myhdl.org. The EFF DES cracker may have some useful ideas too.


The largest of these devices are also pressing the current chip 
limits. There isn't a lot of extra space for Trojans. In 
addition, knowing what to look at is somewhat difficult if pin 
assignments etc are changed from chip to chip at random.


As with any system, there are tool chain issues. Open source 
helps, but there is always the Ken Thompson attack. The best 
solution I can think of is to audit the output. Look very 
carefully at the output of the tool chain, and at the final 
piece that loads the configuration data into the device.


Cheers - Bill

---
Bill Frantz        | Web security is like medicine - trying to do good for
408-356-8506       | an evolved body of kludges - Mark Miller
www.pwpconsult.com |



Re: [Cryptography] The hypothetical random number generator backdoor

2013-09-25 Thread Phillip Hallam-Baker
On Tue, Sep 24, 2013 at 10:59 AM, Jerry Leichter leich...@lrw.com wrote:

 On Sep 22, 2013, at 8:09 PM, Phillip Hallam-Baker hal...@gmail.com
 wrote:
  I was thinking about this and it occurred to me that it is fairly easy
 to get a public SSL server to provide a client with a session key - just
 ask to start a session.
 
  Which suggests that maybe the backdoor [for an NSA-spiked random number
 generator] is of the form that ... you get a lot of nonces [maybe just one]
 from the random number generator ... and that allows the [next one to be
 predicted more easily or even the] seed to be unearthed.  One simple way
 [to stop this] would be to encrypt the nonces from the RNG under a secret
 key generated in some other fashion.
 
  nonce = E (R, k)
 
  Or hashing the RNG output and XORing with it
 
  nonce = R  XOR H(R)
 You shifted from "random value" to "nonce".  Given the severe effects on
 security of substituting a nonce - a value that is simply never repeated in a
 given cryptographic context; it may be predictable, even fixed - for a
 random value, one needs to be careful about the language.  (There's
 another layer as well, partly captured by "unpredictable value" but not
 really:  Is it a value that we must plan on the adversary learning at some
 point, even though he couldn't predict it up front, or must it remain
 secret?  The random values in PFS are only effective in providing forward
 security if they remain secret forever.)

 Anyway, everything you are talking about here is *supposed* to be a random
 value.  Using E(R,k) is a slightly complicated way of using a standard
 PRNG:  The output of a block cipher in counter mode.  Given (a) the
 security of the encryption under standard assumptions; (b) the secrecy and
 randomness of k; the result is a good PRNG.  (In fact, this is pretty much
 exactly one of the Indistinguishability assumptions.  There are subtly
 different forms of those around, but typically the randomness of input is
 irrelevant - these are semantic security assumptions so knowing something
 about the input can't help you.)  Putting R in there can't hurt, and if the
 way you got R really is random then even if k leaks or E turns out to be
 weak, you're still safe.  However ... where does k come from?  To be able
 to use any of the properties of E, k itself must be chosen at random.  If
 you use the same generator as you use to find R, it's not clear that this
 is much stronger than R itself.  If you have some assured way of getting a
 random k - why not use it for R itself?  (This might be worth it if you can
 generate a k you believe in but only at a much lower rate than you can
 generate an R directly.  Then you can stretch k over a number of R
 values.  But I'd really think long and hard about what you're assuming
 about the various components.)

 BTW, one thing you *must not* do is have k and the session key relate to
 each other in any simple way.

 For hash and XOR ... no standard property of any hash function tells you
 anything about the properties of R XOR H(R).  Granted, for the hash
 functions we generally use, it probably has about the same properties; but
 it won't have any more than that.  (If you look at the structure of classic
 iterated hashes, the last thing H did was compute S = S + R(S), where S was
 the internal state and R was the round function.  Since R is usually
 invertible, this is the only step that actually makes the whole thing
 non-invertible.  Your more-or-less repetition of the same operation
 probably neither helps nor hinders.)

 At least if we assume the standard properties, it's hard to get R from
 H(R) - but an attacker in a position to try a large but (to him) tractable
 number of guesses for R can readily check them all.  Using R XOR H(R) makes
 it no harder for him to try that brute force search.  I much prefer the
 encryption approach.



There are three ways a RNG can fail

1) Insufficient randomness in the input
2) Losing randomness as a result of the random transformation
3) Leaking bits through an intentional or unintentional side channel

What I was concerned about in the above was (3).

I prefer the hashing approaches. While it is possible that there is a
matched set of weaknesses, I find that implausible.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] RSA equivalent key length/strength

2013-09-25 Thread Ralph Holz
Hi,

On 09/23/2013 10:47 AM, Peter Gutmann wrote:

 I'm inclined to agree with you, but you might be interested/horrified in the
 "1024 bits is enough for anyone" debate currently unfolding on the TLS list:
 
 That's rather misrepresenting the situation.  It's a debate between two
 groups: the security practitioners, "we'd like a PFS solution as soon as we
 can, and given currently-deployed infrastructure DH-1024 seems to be the best
 bet", and the theoreticians, "only a theoretically perfect solution is
 acceptable, even if it takes us forever to get it".
 
 (You can guess from that which side I'm on.)

Are you talking about the BCP? Then what you say is not true either.

1) General consensus seems to be that recommending DHE-2048 in the BCP is not
a good idea, because it will not be widely available now, nor in the short
to medium term. Voices with differing opinions are currently a
minority; the BCP authors are not among them.

2) Consequently, the BCP effort is currently deciding whether an ECC
variant of DHE or DHE-1024 should be the recommendation. The factions
seem to be split about equally:

Pro DHE-1024:
* Some say not enough systems provide ECDHE to recommend it, and thus
DHE-1024 should be the primary recommendation.
* Some say ECDHE is not trustworthy yet due to implementation
difficulties and/or NSA involvement.

Pro ECDHE:
* Others say Chrome and Firefox will soon support, or already do support,
ECDHE. That would leave only the Windows users on IE, and we know that
Windows 8.1 will support it.
* The same people acknowledge the "trustworthy" argument. The question
is whether it weighs heavily enough.

That seems to be a more accurate description as I understand it from
reading the list. Myself, I am currently still undecided on the issue
but tend slightly towards ECDHE for now -- with any luck, the BCP won't
be ready until we have some more data on the issue.

Ralph


-- 
Ralph Holz
I8 - Network Architectures and Services
Technische Universität München
http://www.net.in.tum.de/de/mitarbeiter/holz/
Phone +49.89.289.18043
PGP: A805 D19C E23E 6BBB E0C4  86DC 520E 0C83 69B0 03EF


Re: [Cryptography] Gilmore response to NSA mathematician's make rules for NSA appeal

2013-09-25 Thread Kelly John Rose
On 23/09/2013 3:45 PM, John Kelsey wrote:
 It needs to be in their business interest to convince you that they *can't* 
 betray you in most ways. 
This is the most important element, and legislation that states you
cannot share that information won't be enough, especially since the
NSLs have guaranteed that it can be circumvented without any real effort.

If Google, or other similar businesses want to convince people to store
data in the cloud, they need to set up methods where the data is
encrypted or secured before it is even provided to them using keys which
are not related or signed by a central authority key. This way, even if
Google's entire system was proven to be insecure and riddled with leaks,
the data would still be secure. You cannot share data that you can never
have access to.

Admittedly, from a political perspective this could be Kryptonite, since less
savory types will be inclined to use your services if you can show
effectively that the data stored on your services is inaccessible even
under warrant. It will be hard to handle the public relations the first
time anyone on the standard "think of the children!" list of
criminals starts to use your services.

-- 
Kelly John Rose
Mississauga, ON
Phone: +1 647 638-4104
Twitter: @kjrose

Document contents are confidential between original recipients and sender.


Re: [Cryptography] The hypothetical random number generator backdoor

2013-09-25 Thread Gerardus Hendricks
 So we think there is 'some kind' of backdoor in a random number
generator.
 One question is how the EC math might make that possible. Another is how
might the door be opened.

I'm assuming you're talking about Dual_EC_DRBG. Where the backdoor is and
how it can be exploited is pretty simple to explain – if you know your way
around the elliptic curve discrete logarithm problem, which I really
don't. See [0]. Please allow me to stumble through an explanation in an attempt
to learn how this shit works.

In any case, in the algorithm as NIST specifies it [1], two constants
are used: the generator P of the curve and a specific point Q on that
curve. It has never been explained why this particular Q was chosen. The
potential backdoor lies in the existence of a constant e such that e·Q = P.
Calculating e, knowing only P and Q, would require solving the discrete
logarithm problem. It is however trivial to generate Q and e together,
just like in any other public-key cryptosystem (with e being the private
key).

According to the researchers from Microsoft, exploiting this would require
at most 32 bytes of the PRNG output to reveal the internal state, thus
revealing all random numbers generated with the same instantiation of the
PRNG from that moment on.

If the NSA in fact specially crafted point Q, it would have been the
perfect backdoor for them. Only they had the keys to the kingdom. As long
as the private key remained secret, other attackers wouldn't have any
advantage from the existence of the backdoor.
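The trapdoor structure can be illustrated without any elliptic-curve machinery. Below is a toy analogue in the multiplicative group mod p (all parameters are tiny and invented for illustration; the real construction uses point multiplication on a NIST curve and truncates x-coordinates, which this sketch omits):

```python
# Designer's setup: pick a private e, publish P and Q with P = Q^e (mod p).
p = 2**31 - 1                 # a Mersenne prime
e = 123456789                 # the designer's secret
Q = pow(7, 987654321, p)
P = pow(Q, e, p)

def drbg_step(s: int):
    """One generator step: output Q^s, advance the state to P^s."""
    return pow(Q, s, p), pow(P, s, p)

s1 = 42424242                 # secret internal state
out1, s2 = drbg_step(s1)      # out1 is what the attacker observes
out2, _ = drbg_step(s2)

# Backdoor: out1^e = Q^(s1*e) = (Q^e)^s1 = P^s1 = s2, the next state.
recovered_state = pow(out1, e, p)
assert recovered_state == s2
# Knowing the state, every future output is predictable:
assert drbg_step(recovered_state)[0] == out2
```

Anyone who knows e recovers the internal state from a single output; anyone who knows only P and Q must solve a discrete logarithm, which is exactly the asymmetry described above.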

Would I be correct to say that backdoors with such properties cannot exist
in PRNGs based on symmetric crypto or hashing functions?

 Either way, the question is how to stop this side channel attack. One
simple way would be to encrypt the nonces from the RNG under a secret
key
 generated in some other fashion.

That seems silly. You are just shifting the responsibility from the PRNG
to the cipher and its (random) key/seed. In any case, the result is just a
new, needlessly complex PRNG, since you might just as well encrypt zeroes.
You also seem to think that requesting random numbers from a system somehow
constitutes a side channel attack. It does not. Pretending that it does
will only lead to security by obscurity (hiding the algorithm).

If you really doubt implementations, instantiate multiple algorithms with
independent seeds and XOR the output together. The combination will be at
least as strong as the strongest individual PRNG (assuming good seeds).
That seems silly as well.
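The XOR-combining construction mentioned here can be sketched in a few lines (a minimal hash-counter generator stands in for each independently seeded PRNG; this is illustrative, not an SP 800-90A DRBG):

```python
import hashlib
from itertools import islice

def hash_drbg(seed: bytes):
    """A toy generator: SHA-256 over seed || counter, 32 bytes per block."""
    ctr = 0
    while True:
        yield hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()
        ctr += 1

def xor_combine(*gens):
    """XOR blocks from independently seeded generators; the result is at
    least as unpredictable as the strongest one, given independent seeds."""
    for blocks in zip(*gens):
        out = blocks[0]
        for b in blocks[1:]:
            out = bytes(x ^ y for x, y in zip(out, b))
        yield out

combined = xor_combine(hash_drbg(b"seed-A"), hash_drbg(b"seed-B"))
stream = list(islice(combined, 4))   # four 32-byte output blocks
```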

Regards,
Gerard

[0] http://rump2007.cr.yp.to/15-shumow.pdf
[1] http://csrc.nist.gov/publications/nistpubs/800-90A/SP800-90A.pdf





Re: [Cryptography] The hypothetical random number generator backdoor

2013-09-25 Thread Jerry Leichter
On Sep 24, 2013, at 7:53 PM, Phillip Hallam-Baker wrote:
 There are three ways a RNG can fail
 
 1) Insufficient randomness in the input
 2) Losing randomness as a result of the random transformation
 3) Leaking bits through an intentional or unintentional side channel
 
 What I was concerned about in the above was (3).
 
 I prefer the hashing approaches. While it is possible that there is a matched 
 set of weaknesses, I find that implausible.
Then I'm mystified by your proposal.

If enough bits are leaked to make it possible to feed all possible values of 
the generated value R into whatever algorithm uses them (and, of course, 
recognize when you've hit the right one), then the extra cost of instead 
replacing each such value R with R XOR H(R) is trivial.  No fixed 
transformation can help here - it's no different from using an RNG with problem 
1 and whitening its output:  It now looks strong, but isn't.  (In fact, in 
terms of black box behavior to someone who doesn't know about the limited 
randomness/internal loss/side channel, these three weaknesses are functionally 
equivalent - and are subject to exactly the same attacks.)

The encryption approach - replacing R by E(k,R) - helps exactly because the key 
it uses is unknown to the attacker.  As I said before, this approach is fine, 
but:  Where does this magic random key come from; and given that you have a way 
to generate it, why not use that way to generate R directly rather than playing 
games with code you don't trust?

-- Jerry



Re: [Cryptography] RSA equivalent key length/strength

2013-09-25 Thread Phillip Hallam-Baker
On Sun, Sep 22, 2013 at 2:00 PM, Stephen Farrell
stephen.farr...@cs.tcd.ie wrote:



 On 09/22/2013 01:07 AM, Patrick Pelletier wrote:
  "1024 bits is enough for anyone"

 That's a mischaracterisation I think. Some folks (incl. me)
 have said that 1024 DHE is arguably better that no PFS and
 if current deployments mean we can't ubiquitously do better,
 then we should recommend that as an option, while at the same
 time recognising that 1024 is relatively short.


And the problem appears to be compounded by doofus legacy implementations
that don't support PFS greater than 1024 bits. This comes from a
misunderstanding that DH key sizes only need to be half the RSA length.
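For reference, the NIST SP 800-57 comparable-strength table makes the point: a finite-field DH modulus needs the same length as an RSA modulus for equal strength; it is only the DH private exponent (subgroup order) that scales as twice the symmetric strength:

```python
# NIST SP 800-57 comparable strengths (symmetric bits -> modulus bits).
# RSA and finite-field DH moduli sit in the same column; "half the RSA
# length" applies to the DH private exponent, not the modulus.
MODULUS_BITS = {80: 1024, 112: 2048, 128: 3072, 192: 7680, 256: 15360}
DH_EXPONENT_BITS = {s: 2 * s for s in MODULUS_BITS}

assert MODULUS_BITS[112] == 2048      # a 2048-bit modulus for 112-bit strength
assert DH_EXPONENT_BITS[112] == 224   # but a 224-bit exponent suffices
```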

So to go above 1024 bits PFS we have to either

1) Wait for all the servers to upgrade (i.e. never do it, because they won't
upgrade)

2) Introduce a new cipher suite ID for 'yes we really do PFS at 2048 bits
or above'.


I suggest (2)

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Gilmore response to NSA mathematician's make rules for NSA appeal

2013-09-25 Thread james hughes
I have made this one longer only because I have not had the leisure to make
it shorter.

On Sep 23, 2013, at 12:45 PM, John Kelsey crypto@gmail.com wrote:
 On Sep 18, 2013, at 3:27 PM, Kent Borg kentb...@borg.org wrote:
 
 You foreigners actually have a really big vote here.  
 
 It needs to be in their business interest to convince you that they *can't* 
 betray you in most ways.  


Many, if not all, service providers can provide the government valuable 
information regarding their customers. This is not limited to internet service 
providers. It includes banks, health care providers, insurance companies, 
airline companies, hotels, local coffee shops, book sellers, etc. where 
providing a service results in personal information being exchanged. The US has 
no corner on the ability to get information from almost any type of service 
provider. This is the system that the entire world uses, and should not be our 
focus.

This conversation should be on the ability for honest companies to communicate 
securely to their customers. Stated differently, it is valuable that these 
service providers know the information they have given to the government. 
Google is taking steps to be transparent. What Google can not say is anything 
about the traffic that was possibly decrypted without Google's knowledge.

Many years ago (1995?), I personally went to a Swiss bank very well known for 
their high levels of security and their requirement that -all- data leaving 
their datacenter, in any form (including storage), must be encrypted. I asked 
the chief information security officer of the bank if he would consider using 
Clipper enabled devices -if- the keys were escrowed by the Swiss government. 
His answer was both unexpected and still echoes with me today. He said "We have 
auditors crawling all over the place. All the government has to do is to 
[legally] ask and they will be given what they ask for. There is absolutely no 
reason for the government to access our network traffic without our knowledge." 
We ultimately declined to implement Clipper.

Service providers are, and will always be, required to respond to legal 
warrants. A company complying with a warrant knows what they provided. They can 
fight the warrants, they can lobby their government, they can participate in 
the discussion (even if that participation takes place behind closed doors). 

The real challenge facing us at the moment is to restore confidence in the 
ability of customers to privately communicate with their service providers and 
for service providers to know the full extent of the information they are 
providing governments. 




[Cryptography] forward-secrecy =2048-bit in legacy browser/servers? (Re: RSA equivalent key length/strength)

2013-09-25 Thread Adam Back

On Wed, Sep 25, 2013 at 11:59:50PM +1200, Peter Gutmann wrote:

Something that can sign a new RSA-2048 sub-certificate is called a CA.  For
a browser, it'll have to be a trusted CA.  What I was asking you to explain is
how the browsers are going to deal with over half a billion (source: Netcraft
web server survey) new CAs in the ecosystem when websites sign a new RSA-2048
sub-certificate.


This is all ugly stuff, and probably <3072-bit RSA/DH keys should be
deprecated in any new standard, but for the legacy work-around scenario to
try to improve things while that is happening:

Is there a possibility with the RSA-RSA ciphersuite to have a certified RSA
signing key, but where that key is used to sign an RSA key-negotiation key?

At least that was how the export ciphersuites worked (1024+ bit RSA auth,
512-bit export-grade key negotiation).  And that could even be weakly forward
secret in that the 512-bit RSA key could be per session.  I imagine that
ciphersuite is widely disabled at this point.

But wasn't there also a step-up certificate that allowed stronger keys if the
right certificate bits were set (for approved export uses like banking)?
Would setting that bit in all certificates allow some legacy servers/browsers
to get forward secrecy via large, temporary, key-negotiation-only RSA keys?


(You have to wonder if the 1024-bit max DH standard and code limits were a bit
of earlier sabotage in themselves.)

Adam


Re: [Cryptography] RSA recommends against use of its own products.

2013-09-25 Thread Alan Braggins
On 24 September 2013 17:01, Jerry Leichter leich...@lrw.com wrote:
 On Sep 23, 2013, at 4:20 AM, ianG i...@iang.org wrote:

 ...  But they made Dual EC DRBG the default ...

 At the time this default was chosen (2005 or thereabouts), it was *not* a 
 mistake.

https://www.schneier.com/blog/archives/2007/11/the_strange_sto.html
  "Problems with Dual_EC_DRBG were first described in early 2006"

With hindsight, it was definitely a mistake. The questions are whether
they could or should have known it was a mistake at the time, whether
the NSA played any part in the mistake, and whether they should have
warned users and changed the default well before now.

-- 
alan.bragg...@gmail.com
http://www.chiark.greenend.org.uk/~armb/


Re: [Cryptography] Gilmore response to NSA mathematician's make rules for NSA appeal

2013-09-25 Thread John Kelsey
On Sep 25, 2013, at 2:52 AM, james hughes hugh...@mac.com wrote:

 Many, if not all, service providers can provide the government valuable 
 information regarding their customers. This is not limited to internet 
 service providers. It includes banks, health care providers, insurance 
 companies, airline companies, hotels, local coffee shops, book sellers, etc. 
 where providing a service results in personal information being exchanged. 
 The US has no corner on the ability to get information from almost any type 
 of service provider. This is the system that the entire world uses, and 
 should not be our focus.

There are many places where there is no way to provide the service without 
having access to the data, and probably storing it.  For those places, we are 
stuck with legal and professional and business safeguards.  Your doctor should 
take notes when you see him, and can be compelled to give those notes up if he 
can access them to (for example) respond to a phone call asking to refill your 
medications.  There are rather complicated mechanisms you can imagine to 
protect your privacy in this situation, but it's hard to imagine them working 
well in practice.  For that situation, what we want is that the access to the 
information is transparent--the doctor can be compelled to give out information 
about his patients, but not without his knowledge, and ideally not without your 
knowledge.  

But there are a lot of services which do not require that the providers have or 
collect information about you.  Cloud storage and email services don't need to 
have access to the plaintext data you are storing or sending with them.  If 
they have that information, they are subject to being forced to share it with a 
government, or deciding to share it with someone for their own business 
reasons, or having a dishonest employee steal it.  If they don't have that 
information because their service is designed so they don't have it, then they 
can't be forced to share it--whether with the FBI or the Bahraini government or 
with their biggest advertiser.  No change of management or policy or  law can 
make them change it.  

Right now, there is a lot of interest in finding ways to avoid NSA 
surveillance.  In particular, Germans and Brazilians and Koreans would 
presumably rather not have their data made freely available to the US 
government under what appear to be no restrictions at all.  If US companies 
would like to keep the business of Germans and Brazilians and Koreans, they 
probably need to work out a way to convincingly show that they will safeguard 
that data even from the US government.   

--John


Re: [Cryptography] Gilmore response to NSA mathematician's make rules for NSA appeal

2013-09-25 Thread Eugen Leitl
On Tue, Sep 24, 2013 at 12:30:40PM -0400, Kelly John Rose wrote:

 If Google, or other similar businesses want to convince people to store
 data in the cloud, they need to set up methods where the data is
 encrypted or secured before it is even provided to them using keys which

That would completely undermine their free (selling their customers
as a service) model. For the privacy-minded, the centralist cloud model 
seems to be irreversibly dead. P2P clouds are currently too unreliable
unfortunately. What we need is end to end reachability (IPv6) and
sufficient upstream for residential connections, all running on low-power
no-movable-part systems (embedded/SoCs). Most of that is still in
our future. 

 are not related or signed by a central authority key. This way, even if
 Google's entire system was proven to be insecure and riddled with leaks,
 the data would still be secure. You cannot share data that you can never
 have access to.


Re: [Cryptography] RSA equivalent key length/strength

2013-09-25 Thread ianG

On 24/09/13 19:23 PM, Kelly John Rose wrote:


I have always taken the position that no encryption is better than bad
encryption: otherwise the end user will feel more secure than they
should, and is more likely to share information or data they should not
be sharing on that line.



The trap of a false sense of security is far outweighed by the benefit 
of good-enough security delivered to more people.


We're talking multiple orders of magnitude here.  The math that counts is:

   Security = Users * Protection.
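iang's equation can be made concrete with rough numbers. These figures are invented purely for illustration; only the orders-of-magnitude gap matters.

```python
# Hypothetical back-of-envelope for "Security = Users * Protection".
# Protection on a 0..1 scale; all figures invented for illustration.
perfect_but_niche = 10_000 * 0.99        # strong tool, tiny uptake
good_enough_mass = 100_000_000 * 0.70    # merely decent crypto, mass uptake

# The mass-market option wins by several orders of magnitude:
assert good_enough_mass / perfect_but_niche > 1_000
```

On these (made-up) numbers, good-enough protection for everyone delivers over a thousand times the aggregate security of near-perfect protection for a niche.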



iang




Re: [Cryptography] RSA recommends against use of its own products.

2013-09-25 Thread ianG

Hi Jerry,

I appreciate the devil's advocate approach here, it has helped to get my 
thoughts in order!  Thanks!


My conclusion is:  avoid all USA, Inc, providers of cryptographic 
products.  Argumentation follows...



On 24/09/13 19:01 PM, Jerry Leichter wrote:

On Sep 23, 2013, at 4:20 AM, ianG i...@iang.org wrote:

RSA today declared its own BSAFE toolkit and all versions of its
Data Protection Manager insecure...


Etc.  Yes, we expect the company to declare itself near-white, and the press to 
declare it blacker than the ace of spades.

Meanwhile, this list is about those who know how to analyse this sort of stuff, 
independently.  So...

Indeed.


...  But they made Dual EC DRBG the default ...


I don't see a lot of distance between choosing Dual_EC as the default and the 
conclusion that BSAFE user-systems are insecure.

The conclusion it leads to is that *if used in the default mode*, it's (well, 
it *may be*) unsafe.



Well, defaults being defaults, we can assume most people have left it in 
default mode.  I suppose we could ask for research on this question, but 
I'm going to guess:  most.  Therefore we could say that BSAFE is 
mostly unsafe, but as we don't know who is using it in default mode, 
I'm sure most cryptography people would agree that means unsafe, period.




We know no more today about the quality of the implementation than we did 
yesterday.  (In fact, while I consider it a weak argument ... if NSA had 
managed to sneak something into the code making it insecure, they wouldn't have 
needed to make a *visible* change - changing the default.  So perhaps we have 
better reason to believe the rest of the code is OK today than we did 
yesterday.)



Firstly, this is to suggest that quality of implementation is the issue. 
It isn't; the issue is whether the overall result is safe -- to 
end-users.  In this case, it could be fantastic code, but if the RNG is 
spiked, then the fantastic code is approximately worthless.


Reminds me of what the IRA said after nearly knocking off Maggie Thatcher:

"Today we were unlucky, but remember we only have to be lucky once.
You will have to be lucky always."

Secondly, or more widely, if the NSA has targeted RSA, then what can we 
conclude about the quality of the rest of the implementation?  We can only 
make arguments about the rest of the system if we assume this was a 
one-off.  That would be a surprising thing to assume, given what else we 
know.




The question that remains is, was it an innocent mistake, or were they 
influenced by NSA?

a)  How would knowing this change the actions you take today?



* knowing it was an innocent mistake:  well, everyone makes them, even 
Debian.  So perhaps these products aren't so bad?


* knowing it was an influenced result:   USA corporations are to be 
avoided as cryptographic suppliers.  E.g., JCE, CAPI, etc.


Supporting assumptions:

1. assume the NSA is your threat model.  Once upon a time those 
threatened were a small group of ne'er-do-wells in far-flung wild 
countries with exotic names.  Unfortunately, this now applies to most 
people -- inside the USA, anyone who's facing a potential criminal 
investigation by any of the USA agencies, due to the DEA trick.  So most 
of Wall Street, etc, and anyone who's got assets attachable for ML, in a 
post-WoD world, etc.  Outside the USA, anyone who's 2 handshakes from 
any ne'er-do-wells.


2. We don't as yet have any such evidence from non-USA corps, do we? 
(But I ain't putting my money down on that...)


3. Where RSA goes, Java's JCE (recall Android) and CAPI follow. 
How far behind are the rest?


http://www.theregister.co.uk/2013/09/19/linux_backdoor_intrigue

4. Actually, we locals on this list already knew this to a reasonable 
suspicion.  But now we have a chain of events that allows a reasonable 
person outside the paranoiac security world to conclude that the NSA has 
corrupted the cryptography delivery from a USA corp.


http://financialcryptography.com/mt/archives/001446.html



b)  You've posed two alternatives as if they were the only ones.  At the time this default was 
chosen (2005 or thereabouts), it was *not* a mistake.  Dual EC DRBG was in a 
just-published NIST standard.  ECC was hot as the best of the new stuff - with 
endorsements not just from NSA but from academic researchers.  Dual EC DRBG came with a self-test 
suite, so it could guard itself against a variety of attacks and other problems.  Really, the only 
mark against it *at the time* was that it was slower than the other methods - but we've learned 
that trading speed for security is not a good way to go, so that was not dispositive.



True, 2005 or thereabouts, such a story could be and was told, and we 
can accept for the sake of argument it might not have been a mistake 
given what they knew.


That ended in 2007.  RSA was no doubt informed of the results as they 
happened, because they are professionals, now conveniently listed out by 
Matthew Green:



Re: [Cryptography] RSA equivalent key length/strength

2013-09-25 Thread Peter Gutmann
Stephen Farrell stephen.farr...@cs.tcd.ie writes:

That's a mischaracterisation I think. Some folks (incl. me) have said that
1024 DHE is arguably better than no PFS and if current deployments mean we
can't ubiquitously do better, then we should recommend that as an option,
while at the same time recognising that 1024 is relatively short.

+1.

Peter.




Re: [Cryptography] RSA equivalent key length/strength

2013-09-25 Thread Peter Gutmann
Peter Fairbrother zenadsl6...@zen.co.uk writes:
On 24/09/13 05:27, Peter Gutmann wrote:
 Peter Fairbrother zenadsl6...@zen.co.uk writes:
 If you just want a down-and-dirty 2048-bit FS solution which will work 
 today,
 why not just have the websites sign a new RSA-2048 sub-certificate every 
 day?
 Or every few hours? And delete the secret key, of course.

 ... and I guess that puts you firmly in the theoretical/impractical camp.
 Would you care to explain how this is going to work within the TLS protocol?

I'm not sure I understand you.

Something that can sign a new RSA-2048 sub-certificate is called a CA.  For 
a browser, it'll have to be a trusted CA.  What I was asking you to explain is 
how the browsers are going to deal with over half a billion (source: Netcraft 
web server survey) new CAs in the ecosystem when websites sign a new RSA-2048 
sub-certificate.

Peter.


Re: [Cryptography] Hardware Trojan Protection

2013-09-25 Thread Lodewijk andré de la porte
2013/9/24 Bill Frantz fra...@pwpconsult.com

 Field Programmable Gate Arrays (FPGA)


Yeah, those are definitely probably reflashable more easily than you'd
like. They're a bit more tricky than they'd seem to be at first. Definitely
a better choice than Intel though. On the todo list.

Re: [Cryptography] RSA recommends against use of its own products.

2013-09-25 Thread Kristian Gjøsteen
24. sep. 2013 kl. 18:01 skrev Jerry Leichter leich...@lrw.com:

 At the time this default was chosen (2005 or thereabouts), it was *not* a 
 mistake.  Dual EC DRBG was in a just-published NIST standard.  ECC was 
 hot as the best of the new stuff - with endorsements not just from NSA but 
 from academic researchers.

Choosing Dual-EC-DRBG has been a mistake for its entire lifetime, because it is 
so slow.

While some reasonable people seem to have a preference for cryptography based 
on number theory, I've never met anyone who would actually use Dual-EC-DRBG. 
(Blum-Blum-Shub-fanatics show up all the time, but they are all nutcases.)

I claim that RSA was either malicious, easily fooled, or incompetent when it 
chose to use the generator. I will not buy anything from RSA in the future. Were I using RSA 
products or services, I would find replacements.

(For what it's worth, I discounted the press reports about a trapdoor in 
Dual-EC-DRBG because I didn't think anyone would be daft enough to use it. I 
was wrong.)

-- 
Kristian Gjøsteen





Re: [Cryptography] Gilmore response to NSA mathematician's "make rules for NSA" appeal

2013-09-25 Thread Anne Lynn Wheeler

We had been asked to come in and help wordsmith the cal. state digital signature act. Several of 
the parties were involved in privacy issues and also working on Cal. data breach notification act 
and Cal. opt-in personal information sharing act. The parties had done extensive public surveys on 
privacy, and the #1 issue was identity theft, namely account fraud as a result 
of data breaches. There was little or nothing being done about this, so there was some hope that the 
publicity from the breach notifications would motivate corrective action. The issue is that 
normally an entity takes security countermeasures in self-protection ... but the entities suffering 
the data breaches weren't the ones at risk ... the account holders were. Since then several Federal breach 
notification bills have been introduced about evenly divided between having similar notification 
requirements and Federal preemption legislation eliminating the requirement for 
notifications. The federal bills eliminating notifications cite industry specifications calling for account encryption (that were 
formulated after the cal. legislation). We've periodically commented that in the 
current paradigm, even if the planet were buried under miles of information-hiding 
encryption, it still wouldn't stop information leakage. One problem is that 
account information is basically used for authentication and as such needs to 
be kept completely confidential and never divulged. However, at the same time, 
account information is also required in dozens of business processes at 
millions of locations around the world.

The cal. personal information opt-in sharing legislation would require an institution to have a record from the 
individual authorizing sharing of information. However, before the cal. legislation passed, an opt-out 
(federal preemption) provision was added to GLBA. GLBA is now better known for the repeal of Glass-Steagall. At the 
time, the rhetoric in congress was the primary purpose of GLBA was if you already had bank charter you got to keep it, 
however, if you didn't have a charter, you wouldn't be able to get one (i.e. eliminate new parties from coming in and 
competing with banks). However, GLBA was loaded up with other features like repeal of Glass-Steagall and the 
opt-out personal information sharing (i.e. the financial institution needed record of person declining 
sharing of personal information ... rather than opt-in which required institution to have record 
authorizing sharing).

A few years ago, I was at a national annual privacy conference in Wash DC. (hotel just up the 
street from spy museum). There was a panel discussion with the FTC commissioners. Somebody in the 
audience asked the FTC commissioners if they were going to do anything about GLBA 
opt-out privacy sharing. He said he worked on call-center technology used by all the 
major financial institutions ... and that none of the 1-800 opt-out desks had 
provisions for recording information from the call (aka an institution would *NEVER* have a record 
of a person objecting to sharing their personal information). The FTC commissioners just ignored 
him.

--
virtualization experience starting Jan1968, online at home since Mar1970


Re: [Cryptography] The hypothetical random number generator backdoor

2013-09-25 Thread Jerry Leichter
On Sep 24, 2013, at 6:11 PM, Gerardus Hendricks konfku...@riseup.net wrote:
 I'm assuming you're talking about Dual_EC_DRBG. ... According to the 
 researchers from Microsoft, exploiting this would require
 at most 32 bytes of the PRNG output to reveal the internal state, thus
 revealing all random numbers generated with the same instantiation of the
 PRNG from that moment on.  Would I be correct to say that backdoors with such 
 properties cannot exist in PRNGs based on symmetric crypto or hashing 
 functions?
Well, that depends on how they are used and what you believe about the 
primitives.

If you use encryption in counter mode - E(k,counter), where k is random - then 
the assumption that the generated values are random is, as I remarked in 
another comment, pretty much equivalent to the indistinguishability assumptions 
that are commonly made about symmetric cryptographic algorithms.  If you don't 
think you have an appropriately secure symmetric cipher to work with ... it's 
not clear just what you're going to *do* with your random numbers anyway.
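Jerry's counter-mode construction can be sketched directly. The Python stdlib has no AES, so this toy uses HMAC-SHA256 as the PRF in place of E(k, counter); the structure -- secret random key, incrementing counter, concatenated output blocks -- is the point, not the primitive.

```python
import hashlib
import hmac
import os

class CtrDrbg:
    """Toy deterministic generator: block_i = PRF(k, counter_i).

    HMAC-SHA256 stands in for a block cipher E(k, counter); with a
    secret random k, the outputs are indistinguishable from random to
    anyone who cannot distinguish the PRF from a random function.
    """

    def __init__(self, key: bytes):
        self.key = key
        self.counter = 0

    def generate(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            block = hmac.new(self.key, self.counter.to_bytes(16, "big"),
                             hashlib.sha256).digest()
            out += block
            self.counter += 1      # never reuse a counter value under one key
        return out[:n]

drbg = CtrDrbg(os.urandom(32))
a, b = drbg.generate(32), drbg.generate(32)
assert a != b   # the counter advances, so successive blocks differ
```

As the text says, the security of this reduces directly to the indistinguishability assumption on the underlying primitive -- if you don't trust that, you have no cipher to use the random numbers with anyway.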

It's harder to know what to make of hashing approaches because it depends on 
the hash algorithm and what you believe about *it*.  For most uses, a 
cryptographic hash function just has to prevent first- and second-preimage 
attacks.  If that's all you are willing to assume your hash function provides, 
it's enough for the standard, intended uses of such hashes, but not enough to 
prove much more.  (For example, nothing in these two assumptions, on its own, 
says that the hash function can't always produce an output whose first bit is 
0.)  People generally do assume more, but you really have to be explicit.
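The parenthetical is easy to make concrete: take a preimage-resistant hash and force its first bit to zero. Inverting the result is essentially as hard as inverting the original hash, yet every output fails a trivial statistical test -- so preimage resistance alone guarantees nothing about output uniformity. (`h_zero` is a hypothetical construction, purely for illustration.)

```python
import hashlib

def h_zero(data: bytes) -> bytes:
    # SHA-256 with the top bit of the first byte cleared.  Finding a
    # preimage is no easier than for SHA-256 (the attacker learns only
    # one bit less), but every output begins with a 0 bit, so h_zero is
    # a terrible randomness source despite being "secure" as a hash.
    d = bytearray(hashlib.sha256(data).digest())
    d[0] &= 0x7F
    return bytes(d)

# Every output has its first bit clear -- trivially distinguishable from random:
assert all(h_zero(bytes([i]))[0] < 0x80 for i in range(256))
```

This is why "use a hash" is not, by itself, an argument that a PRNG's output is uniform: you have to state exactly which property of the hash you are relying on.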

 Either way, the question is how to stop this side channel attack. One
 simple way would be to encrypt the nonces from the RNG under a secret
 key
 generated in some other fashion.
 
 That seems silly. You are just shifting the responsibility from the PRNG
 to the cipher and its (random) key/seed. In any case, the result is just a
 new, needlessly complex PRNG, since you might just as well encrypt zeroes.
Indeed - though you could safely reuse the random key in counter mode up to 
some limit determined by the security guarantees of the cipher, and if the 
encryption function lives up to its guarantees, you'll still be secure.  
Stretching a short random key into a long effectively random sequence is 
exactly what symmetric encryption functions *do*!
-- Jerry



Re: [Cryptography] The hypothetical random number generator backdoor

2013-09-25 Thread Nico Williams
On Sep 25, 2013 8:06 AM, John Kelsey crypto@gmail.com wrote:
 On Sep 22, 2013, at 8:09 PM, Phillip Hallam-Baker hal...@gmail.com wrote:
  Either way, the question is how to stop this side channel attack.
  One simple way would be to encrypt the nonces from the RNG under a
  secret key generated in some other fashion.
 
  nonce = E(R, k)

 This would work if you had a secure key I couldn't guess for k.  If
 the entropy is really low, though, I would still see duplicate outputs
 from time to time.  If the RNG has short cycles, this would also show
 up.
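The low-entropy failure John describes is easy to demonstrate: encryption under a fixed key is injective, so it maps duplicate RNG outputs to duplicate nonces -- post-processing cannot add entropy. A toy, with HMAC-SHA256 standing in for E(R, k) since the stdlib has no block cipher:

```python
import hashlib
import hmac
import os
import random

k = os.urandom(32)                        # secret key, unknown to the attacker

def E(r: bytes) -> bytes:
    # Deterministic keyed map standing in for nonce = E(R, k).
    return hmac.new(k, r, hashlib.sha256).digest()

# A broken RNG with only 8 bits of entropy per output:
weak = [random.getrandbits(8).to_bytes(1, "big") for _ in range(1000)]
encrypted = [E(r) for r in weak]

# Distinct inputs map to distinct outputs, so duplicates pass straight through:
assert len(set(encrypted)) == len(set(weak))
assert len(set(encrypted)) <= 256   # still at most 256 distinct nonces
```

The attacker can't invert the nonces without k, but the repeats themselves leak that the underlying RNG is weak -- exactly the observable John points to.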

Note that Kerberos confounds: it encrypts its nonces for AES in CTS
mode (similar to CBC).  Confounding makes it harder to exploit a
backdoored RNG if the exploit is made easier by the ability to see RNG
outputs as nonces.  I'm not sure how much harder though: presumably in
the worst case the attacker has the victim's device's seed somehow
(e.g., from a MAC address, purchase records, ...), and can search its
output via boot and iteration counter searches (the details depend on
the PRNG construction, obviously).  Seeing an RNG output in the clear
probably helps, but the attacker could design the PRNG such that they
don't need to.

Now, there's a proposal to drop confounding for new cipher suites in
Kerberos.  Among other things doing so would improve performance.  It
would also make analysis of the new cipher suites easier, as they'd
match what other standard protocols do.

Of course, I'd rather implementations have a strong enough RNG and SRNG
-- I'd rather not have to care if some RNG outputs are trivially
available to attackers.  But if confounding is a net security
improvement for PRNG-only use cases (is it? it might depend on the PRNG
construction and boot-time seed handling), maybe we should keep it.
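A toy model of confounding (not real Kerberos -- AES-CTS and the actual key derivation are replaced here by an HMAC-based chain, and all names are invented for illustration): prepend a random block to the plaintext before chaining-mode encryption, so no RNG output rides on the wire in the clear.

```python
import hashlib
import hmac
import os

BLOCK = 16
key = os.urandom(32)

def prf(block: bytes) -> bytes:
    # HMAC-SHA256 truncated to one block; stands in for the block cipher.
    return hmac.new(key, block, hashlib.sha256).digest()[:BLOCK]

def encrypt(plaintext: bytes, confound: bool) -> bytes:
    # CBC-like chain, encrypt-only (HMAC is not invertible, so this models
    # the attacker's view of the wire, not a usable cipher).
    if confound:
        plaintext = os.urandom(BLOCK) + plaintext  # Kerberos-style confounder
    prev, out = bytes(BLOCK), b""
    for i in range(0, len(plaintext), BLOCK):
        blk = plaintext[i:i + BLOCK].ljust(BLOCK, b"\x00")
        prev = prf(bytes(x ^ y for x, y in zip(blk, prev)))
        out += prev
    return out

msg = b"KRB-AP-REQ ticket payload"
# Without a confounder, equal plaintexts give equal ciphertexts, and any
# explicit nonce would have to ride in the clear:
assert encrypt(msg, False) == encrypt(msg, False)
# With a confounder, every encryption differs, yet the random block itself
# never appears on the wire:
assert encrypt(msg, True) != encrypt(msg, True)
```

The confounder buys randomized ciphertexts without exposing an RNG output -- which is exactly the property under discussion when the RNG itself is suspect.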

Thoughts?

Nico
-- 