Re: [Cryptography] Broken RNG renders gov't-issued smartcards easily hackable.

2013-10-14 Thread Jerry Leichter
On Oct 13, 2013, at 1:04 PM, Ray Dillinger wrote:
 This is despite meeting (for some inscrutable definition of meeting)
 FIPS 140-2 Level 2 and Common Criteria standards.  These standards
 require steps that were clearly not done here.  Yet, validation
 certificates were issued.
 
 This is a misunderstanding of the CC certification and FIPS validation 
 processes:
 
 the certificates were issued *under the condition* that the software/system 
 built on it uses/implements the RNG tests mandated. The software didn't, 
 invalidating the results of the certifications.
 
 Either way, it boils down to this: tests were supposed to be done or conditions
 were supposed to be met, and producing the darn cards with those certifications
 asserted amounts to stating outright that they were - and yet they were not.
 
 All you're saying here is that the certifying agencies are not the ones
 stating outright that the tests were done.
How could they?  The certification has to stop at some point; it can't trace 
the systems all the way to end users.  What was certified was a box that would 
work a certain way given certain conditions.  The box was used in a different 
way.  Why is it surprising that the certification was useless?  Let's consider 
a simple encryption box:  Key goes in top, cleartext goes in left; ciphertext 
comes out right.  There's an implicit assumption that you don't simply discard 
the ciphertext and send the plaintext on to the next subsystem in line.  No 
certification can possibly check that; or that, say, you don't post all your 
keys on your website immediately after generating them.

  I can accept that, but it does
 not change the situation or result, except perhaps in terms of the placement
 of blame. I *still* hope they bill the people responsible for doing the tests
 on the first generation of cards for the cost of their replacement.
That depends on what they were supposed to test, and whether they did test that 
correctly.  A FIPS/Common Criteria Certification is handed a box implementing 
the protocol and a whole bunch of paperwork describing how it's designed, how 
it works internally, and how it's intended to be used.  If it passes, what 
passes is the exact design certified, used as described.  There are way too 
many possible systems built out of certified modules for it to be reasonable to 
expect the certification to encompass them all.

I will remark that, having been involved in one certification effort, I think 
they offer little, especially for software - they get at some reasonable issues 
for hardware designs.  Still, we don't currently have much of anything better.  
Hundreds of eyeballs may have been on the Linux code, but we still ended up 
fielding a system with a completely crippled RNG and not noticing for months.  
Still, if you expect the impossible from a process, you make any improvement 
impossible.  Formal verification, where possible, can be very powerful - but it 
will also have to focus on some well-defined subsystem, and all the effort will 
be wasted if the subsystem is used in a way that doesn't meet the necessary 
constraints.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-12 Thread Jerry Leichter
On Oct 11, 2013, at 11:09 PM, James A. Donald wrote:
 Right now we've got a TCP startup, and a TLS startup.  It's pretty messy.  
 Adding another startup inside isn't likely to gain popularity.
 
 The problem is that layering creates round trips, and as cpus get ever 
 faster, and pipes ever fatter, round trips become a bigger and bigger problem. 
  Legend has it that each additional round trip decreases usage of your web 
 site by twenty percent, though I am unaware of any evidence on this.
The research is on time delays, which you could easily enough convert to round 
trips.  The numbers are nowhere near 20%, but are significant if you have many 
users:  http://googleresearch.blogspot.com/2009/06/speed-matters.html

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Key stretching

2013-10-11 Thread Jerry Leichter
On Oct 11, 2013, at 11:26 AM, Phillip Hallam-Baker hal...@gmail.com wrote:
 Quick question, anyone got a good scheme for key stretching?
 
 I have this scheme for managing private keys that involves storing them as 
 encrypted PKCS#8 blobs in the cloud.
 
 AES128 seems a little on the weak side for this but there are (rare) 
 circumstances where a user is going to need to type in the key for recovery 
 purposes so I don't want more than 128 bits of key to type in (I am betting 
 that 128 bits is going to be sufficient to the end of Moore's law).
 
 
 So the answer is to use AES 256 and stretch the key, but how? I could just 
 repeat the key:
 
 K = k + k
 
 Related key attacks make me a little nervous though. Maybe:
The related key attacks out there require keys that differ in a couple of bits. 
 If k and k' aren't related, k+k and k'+k' won't be either.

 K = (k + 01234567) XOR SHA512 (k)
Let's step back a moment and think about attacks:

1.  Brute force.  No public key-stretching algorithm can help, since the 
attacker will brute-force the k's, computing the corresponding K's as he goes.
2.  Analytic attack against AES128 that doesn't extend, in general, to AES256.  
Without knowing the nature of the attack, it's impossible to estimate whether 
knowing that the key has some particular form would allow the attack to extend. 
If so ... what forms?
3.  Analytic attack against AES256.  A recognizable form for keys - e.g., k+k - 
might conceivably help, but it seems like a minor thing.

Realistically, k+k, or k padded with 0's, or SHA256(k) are probably equally 
strong, except under attacks specifically concocted to target them (e.g., 
suppose it turns out that there just happens to be an analytic attack against 
AES256 for keys with more than 3/4 of the bits equal to 0).

Since you're describing a situation in which performance is not an issue, you 
might as well use SHA256(k) - whitening the key can't hurt.
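
For concreteness, a minimal sketch in Python (hashlib only; the function name 
is mine):

    from hashlib import sha256

    def stretch_key(k: bytes) -> bytes:
        # Stretch a 128-bit typed-in key to a 256-bit AES key.
        # Hashing whitens the key: any structure in k (e.g., the k+k
        # pattern) disappears from the AES key itself.
        assert len(k) == 16
        return sha256(k).digest()  # 32 bytes, usable as an AES-256 key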

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread Jerry Leichter
On Oct 10, 2013, at 11:58 AM, R. Hirschfeld r...@unipay.nl wrote:
 Very silly but trivial to implement so I went ahead and did so:
 
 To send a prism-proof email, encrypt it for your recipient and send it
 to irrefrangi...@mail.unipay.nl
Nice!  I like it.

A couple of comments:

1.  Obviously, this has scaling problems.  The interesting question is how to 
extend it while retaining the good properties.  If participants are willing to 
be identified to within 1/k of all the users of the system (a set which will 
itself remain hidden by the system), choosing one of k servers based on a hash 
of the recipient would work; see the sketch below.  (A concerned recipient 
could, of course, check servers that he knows can't possibly have his mail.)  
Can one do better?

2.  The system provides complete security for recipients (all you can tell 
about a recipient is that he can potentially receive messages - though the 
design has to be careful so that a recipient doesn't, for example, release 
timing information depending on whether his decryption succeeded or not).  
However, the protection is more limited for senders.  A sender can hide its 
activity by simply sending random messages, which of course no one will ever 
be able to decrypt.  Of course, that adds yet more load to the entire system.

3.  Since there's no acknowledgement when a message is picked up, the number of 
messages in the system grows without bound.  As you suggest, the service will 
have to throw out messages after some time - but that's a blind process which 
may discard a message a slow receiver hasn't had a chance to pick up while 
keeping one that was picked up a long time ago.  One way around this, for 
cooperative senders:  When creating a message, the sender selects a random R 
and appends tag Hash(R).  Anyone may later send a "you may delete message R" 
message.  The server computes Hash(R), finds any message with that tag, and 
discards it.  (It will still want to delete messages that are old, but it may 
be able to define "old" as a larger value if enough of the senders are 
cooperative.)
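
To make points 1 and 3 concrete, here's a minimal Python sketch (the function 
names and the tag-indexed message store are illustrative, not part of the 
proposal):

    from hashlib import sha256

    def server_for(recipient: bytes, k: int) -> int:
        # Point 1: pick one of k servers from a hash of the recipient,
        # identifying the recipient only to within 1/k of all users.
        return int.from_bytes(sha256(recipient).digest(), "big") % k

    def make_tag(r: bytes) -> bytes:
        # Point 3: the sender picks a random R and appends tag Hash(R).
        return sha256(r).digest()

    def handle_delete(store: dict, r: bytes) -> None:
        # On a "you may delete message R" message, the service computes
        # Hash(R) and discards any stored message carrying that tag.
        store.pop(sha256(r).digest(), None)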

Since an observer can already tell who created the message with tag H(R), it 
would normally be the original sender who deletes his messages.  Perhaps he 
knows they are no longer important; or perhaps he received an application-level 
acknowledgement message from the recipient.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-09 Thread Jerry Leichter
On Oct 8, 2013, at 6:10 PM, Arnold Reinhold wrote:

 
 On Oct 7, 2013, at 12:55 PM, Jerry Leichter wrote:
 
 On Oct 7, 2013, at 11:45 AM, Arnold Reinhold a...@me.com wrote:
 If we are going to always use a construction like AES(KDF(key)), as Nico 
 suggests, why not go further and use a KDF with variable length output like 
 Keccak to replace the AES key schedule? And instead of making provisions to 
 drop in a different cipher should a weakness be discovered in AES,  make 
 the number of AES (and maybe KDF) rounds a negotiated parameter.  Given 
 that x86 and ARM now have AES round instructions, other cipher algorithms 
 are unlikely to catch up in performance in the foreseeable future, even 
 with an higher AES round count. Increasing round count is effortless 
 compared to deploying a new cipher algorithm, even if provision is made the 
 protocol. Dropping such provisions (at least in new designs) simplifies 
 everything and simplicity is good for security.
 That's a really nice idea.  It has a non-obvious advantage:  Suppose the AES 
 round instructions (or the round key computations instructions) have been 
 spiked to leak information in some non-obvious way - e.g., they cause a 
 power glitch that someone with the knowledge of what to look for can use to 
 read off some of the key bits.  The round key computation instructions 
 obviously have direct access to the actual key, while the round computation 
 instructions have access to the round keys, and with the standard round 
 function, given the round keys it's possible to determine the actual key.
 
 If, on the other hand, you use a cryptographically secure transformation 
 from key to round key, and avoid the built-in round key instructions 
 entirely; and you use CTR mode, so that the round computation instructions 
 never see the actual data; then AES round computation functions have nothing 
 useful to leak (unless they are leaking all their output, which would 
 require a huge data rate and would be easily noticed).  This also means that 
 even if the round instructions are implemented in software which allows for 
 side-channel attacks (i.e., it uses an optimized table implementation against 
 which cache attacks work), there's no useful data to *be* leaked.
 
 At least in the Intel AES instruction set, the encode and decode instructions 
 have access to each round key except the first. So they could leak that data, 
 and it's at least conceivable that one can recover the first round key from 
 later ones (perhaps this has been analyzed?).  Knowing all the round keys of 
 course enables one to decode the data.  Still, this greatly increases the 
 volume of data that must be leaked, and if any instructions are currently 
 spiked, it is most likely the round key generation assist instruction. One 
 could include an IV in the initial hash, so no information could be gained 
 about the key itself.  This would work with AES(KDF(key+IV)) as well, 
 however. 
 
 
 So this is a mode for safely using possibly rigged hardware.  (Of course 
 there are many other ways the hardware could be rigged to work against you.  
 But with their intended use, hardware encryption instructions have a huge 
 target painted on them.)
 
 Of course, Keccak itself, in this mode, would have access to the real key.  
 However, it would at least for now be implemented in software, and it's 
 designed to be implementable without exposing side-channel attacks.
 
 There are two questions that need to be looked at:
 
 1.  Is AES used with (essentially) random round keys secure?  At what level 
 of security?  One would think so, but this needs to be looked at carefully.
 
 The fact that the round keys are simply xor'd with the AES state at the start 
 of each round suggests this is likely secure. One would have to examine the KDF 
 to make sure that there is nothing comparable to the related key attacks on 
 the AES key setup. 
 
 2.  Is the performance acceptable?
 
 The comparison would be to AES(KDF(key)). And in how many applications is key 
 agility critical?
 
 
 BTW, some of the other SHA-3 proposals use the AES round transformation as a 
 primitive, so could also potentially be used in generating a secure round 
 key schedule.  That might (or might not) put security-critical information 
 back into the hardware instructions.
 
 If Keccak becomes the standard, we can expect to see a hardware Keccak-f 
 implementation (the inner transformation that is the basis of each Keccak 
 round) at some point.  Could that be used in a way that doesn't give it the 
 ability to leak critical information?
   -- Jerry
 
 
 Given multi-billion transistor CPU chips with no means to audit them, it's 
 hard to see how they can be fully trusted.
 
 Arnold Reinhold

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-08 Thread Jerry Leichter
On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote:
 If we can't select ciphersuites that we are sure we will always be 
 comfortable with (for at least some forseeable lifetime) then we urgently 
 need the ability to *stop* using them at some point.  The examples of MD5 
 and RC4 make that pretty clear.
 Ceasing to use one particular encryption algorithm in something like SSL/TLS 
 should be the easiest case--we don't have to worry about old 
 signatures/certificates using the outdated algorithm or anything.  And yet 
 we can't reliably do even that.
 
 We seriously need to consider what the design lifespan of our crypto suites 
 is in real life. That data should be communicated to hardware and software 
 designers so they know what kind of update schedule needs to be supported. 
 Users of the resulting systems need to know that the crypto standards have a 
 limited life so they can include update in their installation planning.
This would make a great April Fool's RFC, to go along with the classic "evil 
bit".  :-(

There are embedded systems that are impractical to update and have expected 
lifetimes measured in decades.  RFID chips include cryptography, are completely 
un-updatable, and have no real limit on their lifetimes - the percentage of the 
population represented by any given vintage of chips will drop continuously, 
but it will never go to zero.  We are rapidly entering a world in which devices 
with similar characteristics will, in sheer numbers, dominate the ecosystem - 
see the remote-controllable Philips Hue light bulbs 
(http://www.amazon.com/dp/B00BSN8DLG/) as an early example.  (Oh, and there's 
been an attack against them:  
http://www.engadget.com/2013/08/14/philips-hue-smart-light-security-issues/.  
The response from Philips to that article says "In developing Hue we have used 
industry standard encryption and authentication techniques ...  [O]ur main 
advice to customers is that they take steps to ensure they are secured from 
malicious attacks at a network level.")

Even in the PC world, where updates are a part of life, makers eventually stop 
producing them for older products.  Windows XP, as of about 10 months ago, was 
running on 1/4 of all PC's - many hundreds of millions of PC's.  About 9 months 
from now, Microsoft will ship its final security update for XP.  Many perfectly 
good PC's will stay on XP forever because even if there was the will and staff 
to upgrade, recent versions of Windows won't run on their hardware.

In the Mac world, hardware in general tends to live longer, and there's plenty 
of hardware still running that can't run recent OS's.  Apple pretty much only 
does patches for at most 3 versions of the OS (with a new version roughly every 
year).  The Linux world isn't really much different except that it's less 
likely to drop support for old hardware, and because it tends to be used by a 
more techie audience who are more likely to upgrade, the percentages probably 
look better, at least for PC's.  (But there are antique versions of Linux 
hidden away in all kinds of appliances that no one ever upgrades.)

I'm afraid the reality is that we have to design for a world in which some 
devices will be running very old versions of code, speaking only very old 
versions of protocols, pretty much forever.  In such a world, newer devices 
either need to shield their older brethren from the sad realities or relegate 
them to low-risk activities by refusing to engage in high-risk transactions 
with them.  It's by no means clear how one would do this, but there really 
aren't any other realistic alternatives.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Always a weakest link

2013-10-08 Thread Jerry Leichter
The article is about security in the large, not cryptography specifically, but 
http://www.eweek.com/security/enterprises-apply-wrong-policies-when-blocking-cloud-sites-says-study.html
 points out that many companies think that they are increasing their security 
by blocking access to sites they consider risky - only to have their users 
migrate to less well known sites doing the same thing - and those less well 
known sites are often considerably riskier.

My favorite quote:  "One customer found a user who sent out a million tweets in 
a day, but in reality, its compromised systems were exporting data 140 
characters at a time via the tweets."

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Sha3

2013-10-07 Thread Jerry Leichter
On Oct 5, 2013, at 6:12 PM, Ben Laurie wrote:
 I have to take issue with this:
 
 The security is not reduced by adding these suffixes, as this is only
 restricting the input space compared to the original Keccak. If there
 is no security problem on Keccak(M), there is no security problem on
 Keccak(M|suffix), as the latter is included in the former.
I also found the argument here unconvincing.  After all, Keccak restricted to 
the set of strings of the form M|suffix reveals that its input ends with 
suffix, which the original Keccak did not.  The problem is with the vague 
nature of "no security problem".

To really get at this, I suspect you have to make some statement saying that 
your expectation about the last |suffix| bits of the output is the same before and 
after you see the Keccak output, given your prior expectation about those bits. 
 But of course that's clearly the kind of statement you need *in general*:  
Keccak("Hello world") is some fixed value, and if you see it, your expectation 
that the input was "Hello world" will get close to 1 as you receive more output 
bits!

 In other words, I have to also make an argument about the nature of
 the suffix and how it can't have been chosen s.t. it influences the
 output in a useful way.
If the nature of the suffix and how it's chosen could affect Keccak's output in 
some predictable way, it would not be secure.  Keccak's security is defined in 
terms of indistinguishability from a sponge with the same internal construction 
but a random round function (chosen from some appropriate class).  A random 
function won't show any particular interactions with chosen suffixes, so Keccak 
had better not either.

 I suspect I should agree with the conclusion, but I can't agree with
 the reasoning.
Yes, it would be nice to see this argued more fully.
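
For reference, the construction being debated is just this (a sketch, with 
SHA3-256 standing in for the underlying Keccak instance):

    from hashlib import sha3_256

    def keccak_with_suffix(m: bytes, suffix: bytes) -> bytes:
        # Keccak(M|suffix): the fixed suffix restricts the input space,
        # so every input to the underlying hash ends in the suffix.
        return sha3_256(m + suffix).digest()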

-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Jerry Leichter
On Oct 5, 2013, at 9:29 PM, John Kelsey wrote:
 One thing that seems clear to me:  When you talk about algorithm flexibility 
 in a protocol or product, most people think you are talking about the ability 
 to add algorithms.  Really, you are talking more about the ability to 
 *remove* algorithms.  We still have stuff using MD5 and RC4 (and we'll 
 probably have stuff using dual ec drbg years from now) because while our 
 standards have lots of options and it's usually easy to add new ones, it's 
 very hard to take any away.  
Q.  How did God create the world in only 6 days?
A.  No installed base.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Sha3

2013-10-07 Thread Jerry Leichter
On Oct 6, 2013, at 11:41 PM, John Kelsey wrote:
 ...They're making this argument by pointing out that you could simply stick 
 the fixed extra padding bits on the end of a message you processed with the 
 original Keccak spec, and you would get the same result as what they are 
 doing.  So if there is any problem introduced by sticking those extra bits at 
 the end of the message before doing the old padding scheme, an attacker could 
 have caused that same problem on the original Keccak by just sticking those 
 extra bits on the end of messages before processing them with Keccak.  
This style of argument makes sense for encryption functions under a chosen 
plaintext attack, since there the goal is to determine the key.  But it makes 
no sense for a hash function:  If the attacker can specify something about the 
input, he ... knows something about the input!  You need to argue that he knows 
*no more than that* after looking at the output than he did before.

While both Ben and I are convinced that in fact the suffix can't affect 
security, the *specific wording* doesn't really give an argument for why.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Universal security measures for crypto primitives

2013-10-07 Thread Jerry Leichter
On Oct 7, 2013, at 1:43 AM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:
 Given the recent debate about security levels for different key sizes, the
 following paper by Lenstra, Kleinjung, and Thome may be of interest:
 
  Universal security from bits and mips to pools, lakes and beyond
  http://eprint.iacr.org/2013/635.pdf  
 
 From now on I think anyone who wants to argue about resistance to NSA attack
 should be required to rate their pet scheme in terms of
 neerslagverdampingsenergiebehoeftezekerheid (although I'm tempted to suggest
 the alternative tausendliterbierverdampfungssicherheit, it'd be too easy to
 cheat on that one).

While the paper is a nicely written joke, it does get at a fundamental point:  
We are rapidly approaching *physical* limits on cryptographically-relevant 
computations.

I've mentioned here in the past that I did a very rough, back-of-the-envelope 
estimate of the ultimate limits on computation imposed by quantum mechanics.  I 
decided to ask a friend who actually knows the physics whether a better 
estimate was possible.  I'm still working to understand what he described, but 
here's the crux:  Suppose you want an answer to your computation within 100 
years.  Then your computations must fall in a sphere of space-time that has 
spatial radius 100 light years and time radius 100 years.  (This is a gross 
overestimate, but we're looking for an ultimate bound so why not keep the 
computation simple.)  Then:  "...fundamental limits will let you make about 
3*10^94 ~ 2^315 [bit] flips and store about 2^315 bits, in your century / 
light-century sphere."  Note that this gives you both a limit on computation 
(bit flips) and a limit on memory (total bits), so time/memory tradeoffs are 
accounted for.
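
A quick sanity check of the quoted exponent, in Python:

    import math

    # 3*10^94 bit flips: what power of two is that?
    exponent = math.log2(3) + 94 * math.log2(10)
    print(f"3*10^94 ~ 2^{exponent:.1f}")   # ~ 2^313.8, i.e. about 2^315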

This is based on the best current understanding we have of QM.  Granted, things 
can always change - but any theory that works even vaguely like the way QM 
works will impose *some* such limit.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-07 Thread Jerry Leichter
On Oct 7, 2013, at 11:45 AM, Arnold Reinhold a...@me.com wrote:
 If we are going to always use a construction like AES(KDF(key)), as Nico 
 suggests, why not go further and use a KDF with variable length output like 
 Keccak to replace the AES key schedule? And instead of making provisions to 
 drop in a different cipher should a weakness be discovered in AES,  make the 
 number of AES (and maybe KDF) rounds a negotiated parameter.  Given that x86 
 and ARM now have AES round instructions, other cipher algorithms are unlikely 
 to catch up in performance in the foreseeable future, even with a higher AES 
 round count. Increasing round count is effortless compared to deploying a new 
 cipher algorithm, even if provision is made in the protocol. Dropping such 
 provisions (at least in new designs) simplifies everything and simplicity is 
 good for security.
That's a really nice idea.  It has a non-obvious advantage:  Suppose the AES 
round instructions (or the round key computations instructions) have been 
spiked to leak information in some non-obvious way - e.g., they cause a power 
glitch that someone with the knowledge of what to look for can use to read off 
some of the key bits.  The round key computation instructions obviously have 
direct access to the actual key, while the round computation instructions have 
access to the round keys, and with the standard round function, given the round 
keys it's possible to determine the actual key.

If, on the other hand, you use a cryptographically secure transformation from 
key to round key, and avoid the built-in round key instructions entirely; and 
you use CTR mode, so that the round computation instructions never see the 
actual data; then AES round computation functions have nothing useful to leak 
(unless they are leaking all their output, which would require a huge data rate 
and would be easily noticed).  This also means that even if the round 
instructions are implemented in software which allows for side-channel attacks 
(i.e., it uses an optimized table implementation against which cache attacks 
work), there's no useful data to *be* leaked.
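
A minimal sketch of the round-key derivation, using Python's SHAKE-256 as the 
Keccak-based KDF (the domain-separation prefix is illustrative, and feeding 
the keys into an AES implementation that accepts explicit round keys is left 
out):

    from hashlib import shake_256

    def derived_round_keys(key: bytes) -> list:
        # AES-256 uses 15 round keys (initial AddRoundKey + 14 rounds),
        # 16 bytes each.  Derive them from a Keccak-based XOF instead of
        # the AES key schedule, so neither the key-schedule instructions
        # nor the round instructions ever see the real key.
        material = shake_256(b"AES-256 round keys|" + key).digest(15 * 16)
        return [material[16 * i:16 * (i + 1)] for i in range(15)]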

So this is a mode for safely using possibly rigged hardware.  (Of course there 
are many other ways the hardware could be rigged to work against you.  But with 
their intended use, hardware encryption instructions have a huge target painted 
on them.)

Of course, Keccak itself, in this mode, would have access to the real key.  
However, it would at least for now be implemented in software, and it's 
designed to be implementable without exposing side-channel attacks.

There are two questions that need to be looked at:

1.  Is AES used with (essentially) random round keys secure?  At what level of 
security?  One would think so, but this needs to be looked at carefully.
2.  Is the performance acceptable?

BTW, some of the other SHA-3 proposals use the AES round transformation as a 
primitive, so could also potentially be used in generating a secure round key 
schedule.  That might (or might not) put security-critical information back 
into the hardware instructions.

If Keccak becomes the standard, we can expect to see a hardware Keccak-f 
implementation (the inner transformation that is the basis of each Keccak 
round) at some point.  Could that be used in a way that doesn't give it the 
ability to leak critical information?
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Jerry Leichter
On Oct 7, 2013, at 12:45 PM, Ray Dillinger b...@sonic.net wrote:
 Can we do anything ...[to make it possible to remove old algorithms]? If the 
 protocol allows correction (particularly remote or automated correction) of 
 an entity using a weak crypto primitive, that opens up a whole new set of 
 attacks on strong primitives.
 
 We'd like the answer to be that people will decline to communicate with you 
 if you use a weak system,  but honestly when was the last time you had that 
 degree of choice in from whom you get exactly the content and services you 
 need?
 
 Can we even make renegotiating the cipher suite inconveniently long or heavy 
 so defaulting weak becomes progressively more costly as more people default 
 strong? That opens up denial of service attacks, and besides it makes it 
 painful to be the first to default strong.
 
 Can a check for a revoked signature for the cipher's security help? That 
 makes the CA into a point of control.
 
 Anybody got a practical idea?
I don't see how there can be any solution to this.  Slow renegotiation doesn't 
affect users until it gets to the point where they feel that something is 
broken; at that point, the result to them is indistinguishable from just 
refusing connections with the old suites.  And of course what's broken is never 
*their* software, it's the other guy's - and given the alternative, they'll go 
to someone who isn't as insistent that their potential customers do it the 
right way.  So you'll just set off a race to the bottom.

Revoking signatures ... well, just how effective are bad signature warnings 
today?  People learn - in fact, are often *taught* - to click through them.  If 
software refuses to let them do that, they'll look for other software.

Ultimately, I think you have to look at this as an economic issue.  The only 
reason to change your software is if the cost of changing is lower than the 
estimated future cost of *not* changing.  Most users (rightly) estimate that 
the chance of them losing much is very low.  You can change that estimate by 
imposing a cost on them, but in a world of competitive suppliers (and consumer 
protection laws) that's usually not practical.

It's actually interesting to consider the single counter-example out there:  
the iOS world (and to a slightly lesser degree, the OSX world).  Apple doesn't 
force iOS users to upgrade their existing hardware (and sometimes it's 
obsolete and isn't software-upgradeable) but in fact iOS users upgrade very 
quickly.  (iOS 7 exceeded 50% of installations within 7 days - a faster ramp 
than iOS 6.  Based on past patterns, iOS 7 will be in the high 90's in a fairly 
short time.)  No other software comes anywhere close to that.  Moving from iOS 
6 to iOS 7 is immensely more disruptive than moving to a new browser version 
(say) that drops support for a vulnerable encryption algorithm.  And yet huge 
numbers of people do it.  Clearly it's because of the new things in iOS 7 - and 
yet Microsoft still has a huge population of users on XP.

I think the real take-away here is that getting upgrades into the field is a 
technical problem only at the margins.  It has to do with people's attitudes in 
subtle ways that Apple has captured and others have not.  (Unanswerable 
question:  If the handset makers and the Telco vendors didn't make it so hard - 
often impossible - to upgrade, what would the market penetration numbers for 
different Android versions look like?)

-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Sha3

2013-10-07 Thread Jerry Leichter
On Oct 7, 2013, at 6:04 PM, Philipp Gühring p...@futureware.at wrote:
 it makes no sense for a hash function:  If the attacker can specify
 something about the input, he ... knows something about the input!  
 Yes, but since it's standardized, it's public knowledge, and just knowing
 the padding does not give you any other knowledge about the rest.
You're assuming what the argument is claiming to prove.

 What might be though is that Keccak could have some hidden internal
 backdoor function (which I think is very unlikely given what I read about
 it until now, I am just speaking hypothetically) that reduces the
 effective output size, if and only if the input has certain bits at the end.
Well, sure, such a thing *might* exist, though there's no (publicly) known 
technique for embedding such a thing in the kind of combinatorial mixing 
permutation that's at the base of Keccak and pretty much every hash function 
and block encryption function since DES - though the basic idea goes back to 
Shannon in the 1940's.

I will say that the Keccak analysis shows both the strength and the weakness of 
the current (public) state of the art.  Before differential cryptography, 
pretty much everything in this area was guesswork.  In the last 30-40 years 
(depending on whether you want to start with IBM's unpublished knowledge of the 
technique going back, according to Coppersmith, to 1974, or from Biham and 
Shamir's rediscovery and publication in the late 1980's), the basic idea has 
been expanded to a variety of related attacks, with very sophisticated modeling 
of exactly what you can expect to get from attacks under different 
circumstances.  The Keccak analysis goes through a whole bunch of these.  They 
make a pretty convincing argument that (a) no known attack can get anything 
much out of Keccak; (b) it's unlikely that there's an attack along the same 
general lines as currently known attacks that will work against it either.

The problem - and it's an open problem for the whole field - is that none of 
this gets at the question of whether there is some completely different kind of 
attack that would slice right through Keccak or AES or any particular 
algorithm, or any particular class of algorithms.  If you compare the situation 
to that in asymmetric crypto, our asymmetric algorithms are based on clean, 
simple mathematical structures about which we can prove a great deal, but that 
have buried within them particular problems that we believe, on fairly strong 
if hardly completely dispositive evidence, are hard.  For symmetric algorithms, 
we pretty much *rely* on the lack of any simple mathematical structure - which, 
in a Kolmogorov-complexity-style argument, just means there appear to be no 
short descriptions in tractable terms of what these transformations do.  For 
example, if you write the transformations down as Boolean formulas in CNF or 
DNF, the results are extremely large, with irregular, highly inter-twined 
terms.  Without that, various Boolean solvers would quickly cut them to ribbons.

In some sense, DC and related techniques say "OK, the complexity of the 
function itself is high, but if I look at the differentials, I can find some 
patterns that are simple enough to work with."

If there's an attack, it's likely to be based on something other than Boolean 
formulas written out in any form we currently work with, or anything based on 
differentials.  It's likely to come out of a representation entirely different 
from anything anyone has thought of.  You'd need that to do key recovery; you'd 
also need it to embed a back door (like a sensitivity to certain input 
patterns).  The fact that no one has found such a thing (publicly, at least) 
doesn't mean it can't exist; we just don't know what we don't know.  Surprising 
results like this have appeared before; in a sense, all of mathematics is about 
finding simple, tractable representations that turn impossible problems into 
soluble ones.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] System level security in low end environments

2013-10-06 Thread Jerry Leichter
On Oct 5, 2013, at 2:00 PM, John Gilmore wrote:
 b.  There are low-end environments where performance really does
 matter.  Those often have rather different properties than other
 environments--for example, RAM or ROM (for program code and S-boxes)
 may be at a premium.
 
 Such environments are getting very rare these days.  For example, an
 electrical engineer friend of mine was recently working on designing a
 cheap aimable mirror, to be deployed by the thousands to aim sunlight
 at a collector.  He discovered that connectors and wires are more
 expensive than processor chips these days!...
He had a big advantage:  He had access to power, since the system has to run 
the motors (which probably require an order of magnitude or more power than all 
his electronics).  These days, the limits on many devices are expressible in 
watts/compute for some measure of compute.  But less often noticed (because 
most designers are handed a fixed device and have to run with it) dynamic RAM 
also draws power, even if you aren't doing any computation.  (Apple, at least, 
appears to have become very aware of this early on and designs their phones to 
make do with what most people would consider to be very small amounts of RAM - 
though perhaps not those of us who grew up in 16-bit-processor days. :-)  Some 
other makers didn't include this consideration in their designs, built with 
larger RAM's - sometimes even advertising that as a plus - and paid for it in 
reduced battery life.)

A couple of years back, I listened to a talk about where the next generation of 
wireless communications will be.  Historically, every so many years, we move up 
a decade (as in factor of 10) in the frequencies we use to build our 
communications devices.  We're approaching the final such move, to the edge of 
the TeraHertz range.  (Another factor of 10 gets you to stuff that's more like 
infrared than radio - useful, but in very different ways.)  What's of course 
great about moving to higher frequencies is that you get much more bandwidth - 
there's 10 times as much bandwidth from 10GHz to 100GHz as there was from DC up 
to 10GHz.  And the power required to transmit at a given bit rate goes down 
with the bandwidth, and further since near-THz radiation is highly directional 
you're not spewing it out over a sphere - it goes pretty much only where it's 
needed.  So devices operating in the near-THz range will require really tiny 
amounts of power.  Also, they will be very small, as the 
 wavelengths are comparable to the size of a chip.  In fact, the talk showed 
pictures of classic antenna geometries - dipoles, Yagi's - etched directly onto 
chips.

Near-THz frequencies are highly directional, so you need active tracking - but 
the computes to do that can go on chip along with the antennas they control.  
You'd guess (at least I did until I learned better) that such signals don't 
travel far, but in fact you have choices there:  There are bands in which air 
absorption is high, which is ideal for, say, a WiFi replacement (which would 
have some degree of inherent security as the signal would die off very 
rapidly).  There are other bands that have quite low air absorption.  None of 
these frequencies are likely to propagate far through many common building 
materials, however.  So we're looking at designs with tiny, extremely low 
powered, active repeaters all over the place.  (I visualize a little device you 
stick on a window that uses solar power to communicate with a box on a pole 
outside, and then internally to similar scattered devices to fill your house 
with an extremely high speed Internet connection.)

The talk I heard was from a university group doing engineering 
characterization - i.e., this stuff was out of the lab and at the point where 
you could construct samples easily; the job now was to come up with all the 
design rules and tradeoff tables and simulation techniques that you need before 
you can build commercial products.  They thought this might be 5G telephone 
technology.  Expect to see the first glimmers in, say, 5 years.

Anyway, this is (a) a confirmation of your point that computational elements 
are now so cheap that components like wires are worth replacing; but (b) unlike 
the case with the mirror controllers, we'll want to build these things in large 
numbers and scatter them all over the place, so they will have to make do with 
very small amounts of power.  (For the first inkling of what this is like, 
think of RFID chips - already out there in the billions.)

So, no, I don't think you can assume that efficiency considerations will go 
away.  If you want pervasive security in your pervasive compute architecture, 
you're going to have to figure out how make it work when many of the nodes in 
your architecture are tiny and can't afford to drain power running complicated 
algorithms.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Sha3

2013-10-05 Thread Jerry Leichter
On Oct 1, 2013, at 5:34 AM, Ray Dillinger b...@sonic.net wrote:
 What I don't understand here is why the process of selecting a standard 
 algorithm for cryptographic primitives is so highly focused on speed. 
If you're going to choose a single standard cryptographic algorithm, you have 
to consider all the places it will be used.  These range from very busy front 
ends - where people to this day complain (perhaps with little justification, 
but they believe that for them it's a problem) that doing an RSA operation per 
incoming connection is too expensive (and larger keys will only make it worse), 
to phones (where power requirements are more of an issue than raw speed) to 
various embedded devices (where people still use cheap devices because every 
penny counts) to all kinds of devices that have to run for a long time off a 
local battery or even off of microwatts of power transferred to an unpowered 
device by a reader.
 
 We have machines that are fast enough now that while speed isn't a non issue, 
 it is no longer nearly as important as the process is giving it precedence 
 for.
We do have such machines, but we also have - and will have, for the foreseeable 
future - machines for which this is *not* the case.

Deciding on where to draw the line and say "I don't care if you can support 
this algorithm in a sensor designed to be put in a capsule and swallowed to 
transmit pictures of the GI tract for medical analysis" is not a scientific 
question; it's a policy question.

 Our biggest problem now is security,  not speed. I believe that it's a bit 
 silly to aim for a minimum acceptable security achievable within the context 
 of speed while experience shows that each new class of attacks is usually 
 first seen against some limited form of the cipher or found to be effective 
 only if the cipher is not carried out to a longer process.  
The only problem with this argument is that "the biggest problem" is hard to 
pin down.  There's little evidence that the symmetric algorithms we have today 
are significant problems.  There is some evidence that some of the asymmetric 
algorithms may have problems due to key size, or deliberate subversion.  Fixing 
the first of these does induce significant costs; fixing the second first of 
all requires some knowledge of the nature of the subversion.  But beyond all 
this the biggest problems we've seen have to do with other components, like 
random number generators, protocols, infiltration of trusted systems, and so 
on.  None of these is amenable to defense by removing constraints on 
performance.  (The standardized random number generator that ignored 
performance to be really secure turned out to be anything but!)

We're actually moving in an interesting direction.  At one time, the cost of 
decent crypto algorithms was high enough to be an issue for most hardware.  DES 
at the speed of the original 10Mb/sec Ethernet was an significant engineering 
accomplishment!  These days, even the lowest end traditional computer has 
plenty of spare CPU to run even fairly expensive algorithms - but at the same 
time we're pushing more and more into a world of tiny, low-powered machines 
everywhere.  The ratio of speed and supportable power consumption and memory 
between the average large machine and the average tiny machine is wider 
than it's ever been.  At the low end, the exposure is different:  An attacker 
typically has to be physically close to even talk to the device, there are only 
a small number of communications partners, and any given device has relatively 
little information within it.  Perhaps a lower level of security is appropriate 
in such situations.  (Of course, as we've seen with SCADA systems, there's a 
temptation to just put these things directly on the 
Internet - in which case all these assumptions fail.  A higher-level problem:  
If you take this approach, you need to make sure that the devices are only 
accessible through a gateway with sufficient power to run stronger algorithms.  
How do you do that?)

So perhaps the assumption that needs to be reconsidered is that we can design a 
single algorithm suitable across the entire spectrum.  Currently we have 
SHA-128 and SHA-256, but exactly why one should choose one or the other has 
never been clear - SHA-256 is somewhat more expensive, but I can't think of any 
examples where SHA-128 would be practical but SHA-256 would not.  In practice, 
when CPU is thought to be an issue (rightly or wrongly), people have gone with 
RC4 - standards be damned.

It is worth noting that NSA seems to produce suites of algorithms optimized for 
particular uses and targeted for different levels of security.  Maybe it's time 
for a similar approach in public standards.

-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] encoding formats should not be committee'ised

2013-10-05 Thread Jerry Leichter
On Oct 3, 2013, at 7:33 PM, Phillip Hallam-Baker hal...@gmail.com wrote:
 XML was not intended to be easy to read, it was designed to be less painful 
 to work with than SGML, that is all
More to the point, it was designed to be a *markup* format.  The markup is 
metadata describing various semantic attributes of the data.  If you mark up a 
document, typically almost all the bytes are data, not metadata!

TeX and the x-roff's are markup formats, though at a low level.  LaTeX moves 
to a higher level.  The markup commands in TeX or sroff or LaTeX documents 
are typically a couple of percent of the entire file.  You can typically read 
the content, simply ignoring the markup, with little trouble.  In fact, there 
are programs around at least for TeX and LaTeX that strip out all the markup so 
that you can read the content as just plain text.  You can typically get the 
gist with little trouble.

If you look at what XML actually ended up being used for, in many cases nearly 
the entire damn document is ... markup!  The data being marked up becomes 
essentially vestigial.  Strip out the XML and nothing is left.  In and of 
itself, there may be nothing wrong with this.  But it's why I object to the use 
of "markup language" to describe many contemporary uses of XML.  It leads you 
to think you're getting something very different from what you actually do get. 
 (The XML world has a habit of using words in unexpected ways.  I had the 
damnedest time understanding much of the writing emanating from this world 
until I realized that when the XML world says "semantics", you should read it as 
"syntax".  That key unlocks many otherwise-mysterious statements.)

-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-05 Thread Jerry Leichter
On Oct 4, 2013, at 12:20 PM, Ray Dillinger wrote:
 So, it seems that instead of AES256(key) the cipher in practice should be
 AES256(SHA256(key)).
 
 Is it not the case that (assuming SHA256 is not broken) this defines a cipher
 effectively immune to the related-key attack?

Yes, but think about how you would fit it into the question I raised:

- If this is the primitive black box that does a single block
  encryption, you've about doubled the cost and you've got this
  messy combined thing you probably won't want to call a primitive.
- If you say "well, I'll take the overall key and replace it by
  its hash", you're defining a (probably good) protocol.  But
  once you're defining a protocol, you might as well just specify
  random keys and forget about the hash.
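
The second option, as a sketch (assuming a recent pyca/cryptography package; 
the CTR mode and nonce handling here are illustrative):

    import os
    from hashlib import sha256
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt(master_key: bytes, plaintext: bytes) -> bytes:
        # Replace the overall key by its hash, then use ordinary AES-256.
        k = sha256(master_key).digest()          # 32-byte AES-256 key
        nonce = os.urandom(16)                   # fresh per message
        enc = Cipher(algorithms.AES(k), modes.CTR(nonce)).encryptor()
        return nonce + enc.update(plaintext) + enc.finalize()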

Pinning down where the primitive ends and the protocol begins is tricky and ultimately 
of little value.  The takeaway is that crypto algorithms have to be used with 
caution.  Even a perfect block cipher, if used in the most obvious way (ECB 
mode), reveals when it has been given identical inputs.  Which is why it's been 
argued that any encryption primitive (at some level) has to be probabilistic, 
so that identical inputs don't produce identical outputs.  (Note that this 
implies that output must always be larger than the input!) Still, we have 
attainable models in which no semantic information about the input leaks (given 
random keys).  Related key attacks rely on a different model which has nothing 
much to do with practical usage but are obvious from a purely theoretical point 
of view:  "OK, we've insulated ourselves from attacks via the plaintext input; 
how about the key?"

More broadly there are plenty of attacks (probably including most of the 
related key attacks; I haven't looked closely enough to be sure) that are based 
on weaknesses in key scheduling.  If you're going to make a cryptographic hash 
function a fundamental part of your block cipher, why not use it to generate 
round keys?  The only reason I know of - and in practical terms it's not a 
trivial one - is the substantial performance hit.

-- Jerry



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Sha3

2013-10-05 Thread Jerry Leichter
On Oct 5, 2013, at 11:54 AM, radi...@gmail.com wrote:
 Jerry Leichter wrote:
 Currently we have SHA-128 and SHA-256, but exactly why one should choose 
 one or the other has never been clear - SHA-256 is somewhat more 
 expensive, but I can't think of any examples where SHA-128 would be 
 practical but SHA-256 would not.  In practice, when CPU is thought to be an 
 issue (rightly or wrongly), people have gone with RC4 - standards be 
 damned.
 
 SHA-224/256 (there is no SHA-128) use 32-bit words, SHA-384/512 uses 64-bit 
 words. That difference is indeed a very big deal in embedded device 
 applications. SHA-3 uses only 64-bit words, which will likely preclude it 
 being used in most embedded devices for the foreseeable future. 
Oops - acronym confusion between brain and keyboard.  I meant to talk about 
AES-128 and AES-256.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-03 Thread Jerry Leichter
On Oct 3, 2013, at 10:09 AM, Brian Gladman b...@gladman.plus.com wrote:
 Leaving aside the question of whether anyone weakened it, is it
 true that AES-256 provides comparable security to AES-128?
 
 I may be wrong about this, but if you are talking about the theoretical
 strength of AES-256, then I am not aware of any attacks against it that
 come even remotely close to reducing its effective key length to 128
 bits.  So my answer would be 'no'.
There are *related key* attacks against full AES-192 and AES-256 with 
complexity under 2^119.  http://eprint.iacr.org/2009/374 reports on improved 
versions of these attacks against *reduced-round* variants of AES-256; for a 
10-round variant of AES-256 (the same number of rounds as AES-128), the attacks 
have complexity 2^45 (under a strong related sub-key attack).

None of these attacks gain any advantage when applied to AES-128.

As *practical attacks today*, these are of no interest - related key attacks 
only apply in rather unrealistic scenarios, even a 2^119 strength is way beyond 
any realistic attack, and no one would use a reduced-round version of AES-256.

As a *theoretical checkpoint on the strength of AES* ... the abstract says the 
results "raise[s] serious concern about the remaining safety margin offered by 
the AES family of cryptosystems".

The contact author on this paper, BTW, is Adi Shamir.

 But, having said that, I consider the use of AES-256 in place of AES-128
 to be driven more by marketing hype than by reality.  The theoretical
 strength of modern cryptographic algorithms is the least of our worries
 in producing practical secure systems.
100% agreement.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-02 Thread Jerry Leichter
On Oct 1, 2013, at 12:27 PM, Dirk-Willem van Gulik wrote:
 It's clear what "10x stronger than needed" means for a support beam:  We're 
 pretty good at modeling the forces on a beam and we know how strong beams of 
 given sizes are.  
 Actually - do we ? I picked this example as it is one of those where this 'we 
 know' falls apart on closer examination. Wood varies a lot; and our ratings 
 are very rough. We drill holes through it; use hugely varying ways to 
 glue/weld/etc. And we liberally apply safety factors everywhere; and a lot of 
 'otherwise it does not feel right' throughout. And in all fairness - while 
 you can get a bunch of engineers to agree that 'it is strong enough' - they'd 
 argue endlessly and have 'it depends' sort of answers when you ask them how 
 strong is it 'really' ?
[Getting away from crypto, but ... ]  Having recently had significant work done 
on my house, I've seen this kind of thing close up.

There are three levels of construction.  If you're putting together a small 
garden shed, "it looks right" is generally enough - at least if it's someone 
with sufficient experience.  If you're talking non-load-bearing walls, or even 
some that bear fairly small loads, you follow standards - use 2x4's, space them 
36" apart, use doubled 2x4's over openings like windows and doors, don't cut 
holes larger than some limit - and you'll be fine (based on what I saw, you 
could cut a hole large enough for a water supply, but not for a water drain 
pipe).  Methods of attachment are also specified.  These standards - enforced 
by building codes - are deliberately chosen with large safety margins so that 
you don't need to do any detailed calculations.  They are inherently safe over 
some broad range of sizes of a constructed object.

Beyond that, you get into the realm of computation.  I needed a long open span, 
which was accomplished with an LV beam (engineered wood - LV is Laminated 
Veneer).  The beam was supporting a good piece of the house's roof, so the 
actual forces needed to be calculated.  LV beams come in multiple sizes, and 
the strengths are well characterized.  In this case, we would not have wanted 
the architect/structural engineer to just build in a larger margin of safety:  
There was limited space in the attic to get this into place, and if we chose 
too large an LV beam just for good measure, it wouldn't fit.  Alternatively, 
we could have added a vertical support beam just to be sure - but it would 
have disrupted the kitchen.  (A larger LV beam would also have cost more money, 
though with only one beam, the percentage it would have added to the total cost 
would have been small.  On a larger project - or, if we'd had to go with a 
steel beam if no LV beam of appropriate size and strength existed - the cost 
increase could have been significant.)

The larger the construction project, the tighter the limits on this stuff.  I 
used to work with a former structural engineer, and he repeated some of the 
"bad example" stories they are taught.  A famous case a number of years back 
involved a hotel in, I believe, Kansas City.  The hotel had a large, open 
atrium, with two levels of concrete skyways for walking above.  The skyways 
were hung from the roof.  As the structural engineer specified their 
attachment, a long threaded steel rod ran from the roof, through one skyway - 
with the skyway held on by a nut - and then down to the second skyway, also 
held on by a nut.  The builder, realizing that he would have to thread the nut 
for the upper skyway up many feet of rod, made a minor change:  He instead 
used two threaded rods, one from roof to upper skyway, one from upper skyway to 
lower skyway.  It's all the same, right?  Well, no:  In the original design, 
the upper nut holds the weight of just the upper skyway.  In the modified 
version, it holds the weight of *both* skyways.  The upper fastening 
failed, the structure collapsed, and as I recall several people on the skyways 
at the time were killed.  So ... not even a factor of two safety margin there.  
(The take-away from the story as delivered to future structural engineers was 
*not* that there wasn't a large enough safety margin - the calculations were 
accurate and well within the margins used in building such structures.  The 
issue was that no one checked that the structure was actually built as 
designed.)

I'll leave it to others to decide whether, and how, these lessons apply to 
crypto design.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Passwords

2013-10-02 Thread Jerry Leichter
On Oct 1, 2013, at 5:10 PM, Jeffrey Schiller wrote:
 A friend of mine who used to build submarines once told me that the first 
 time the sub is submerged, the folks who built it are on board. :-)
Indeed.  A friend served on nuclear subs; I heard about that practice from him. 
 (The same practice is followed after any significant refit.)  It inspired my 
suggestion.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-02 Thread Jerry Leichter
On Oct 1, 2013, at 5:58 PM, Peter Fairbrother wrote:
 [and why doesn't AES-256 have 256-bit blocks???]
Because there's no security advantage, but a practical disadvantage.

When blocks are small enough, the birthday paradox may imply repeated blocks 
after too short a time to be comfortable.  Whether this matters to you actually 
depends on how you use the cipher.  If you're using CBC, for example, you don't 
want to ever see a repeated block used with a single key.  With 64-bit blocks 
(as in DES), you expect to see a repetition after 2^32 blocks or 2^35 bytes, 
which in a modern network is something that might actually come up.

A 128-bit block won't see a collision for 2^64 blocks or 2^68 bytes, which is 
unlikely to be an issue any time in the foreseeable future.
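
A quick back-of-the-envelope check, as a Python sketch (the function name is 
mine, purely illustrative):

    def birthday_bytes(block_bits: int) -> int:
        # Expect a repeated ciphertext block after roughly 2**(block_bits/2)
        # blocks; each block is block_bits/8 bytes long.
        blocks = 2 ** (block_bits // 2)
        return blocks * (block_bits // 8)

    # birthday_bytes(64)  == 2**35  (about 32 GB - reachable on a busy link)
    # birthday_bytes(128) == 2**68  (far beyond anything practical)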

Note that many other modes are immune to this particular issue.  For example, 
CTR mode with a 64-bit block won't repeat until you've used it for 2^64 blocks 
(though you would probably want to rekey earlier just to be safe).

I know of no other vulnerabilities that are related to the block size, though 
they may be out there; I'd love to learn about them.

On the other hand, using different block sizes keeps you from easily 
substituting one cipher for another.  Interchanging AES-128 and AES-256 - or 
substituting in some entirely different cipher with the same block size - is 
straightforward.  (The changed key length can be painful, but since keys are 
fairly small anyway you can just reserve key space large enough for any cipher 
you might be interested in.)  Changing the block size affects much more code 
and may require changes to the protocol (e.g., you might need to reserve more 
bits to represent the length of a short final block).

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] encoding formats should not be committee'ized

2013-10-02 Thread Jerry Leichter
On Oct 2, 2013, at 10:46 AM, Viktor Dukhovni cryptogra...@dukhovni.org wrote:
 Text encodings are easy to read but very difficult to specify
 boundaries in without ambiguity.
 
 Yes, and not just boundaries.
Always keep in mind - when you argue for easy readability - that one of 
COBOL's design goals was for programs to be readable and understandable by 
non-programmers.  (There's an *immense* amount of history and sociology and 
assumptions about how businesses should be managed hidden under that goal.  One 
could write a large article, and probably a book, starting from there.)

My favorite more recent example of the pitfalls is TL1, a language and protocol 
used to manage high-end telecom equipment.  TL1 has a completely rigorous 
syntax definition, but is supposed to be readable.  This leads to such 
wonderful features as that SPACE is syntactically significant, and SPACE SPACE 
sometimes means something different from just SPACE.  I have no idea if TL1 
messages have a well-defined canonical form.  I doubt it.

Correct TL1 parsers are complicated and if you need one it's generally best to 
bite the bullet and pay to buy one from an established vendor.   Alternatively, 
you can go to 
http://telecom-info.telcordia.com/site-cgi/ido/docs.cgi?ID=SEARCHDOCUMENT=GR-831;
 and pay $728 for a document that appears to be less than 50 pages long.  Oh, 
and you may wish to refer to 6 other documents available at similarly 
reasonable prices.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] encoding formats should not be committee'ized

2013-10-01 Thread Jerry Leichter
On Sep 30, 2013, at 8:10 PM, James A. Donald wrote:
 We have a compiler to generate C code from ASN.1 code
 
 Google has a compiler to generate C code from protobufs source
 
 The ASN.1 compiler is open source.  Google's compiler is not.
http://code.google.com/p/protobuf/source/checkout.  BSD 3-clause license.

 Further, google is unhappy that too-clever-code gives too-clever programmers 
 too much power, and has prohibited its employees from ever doing something 
 like protobufs again.
I have no idea what this means.  I can tell you with certainty that all kinds 
of clever code is being developed and deployed within Google (though I can't 
give you any details, of course).  Some of it may eventually get described in 
the open literature; some of it may be open-sourced.

Personally, I have no deep objections to ASN.1 - though much of its complexity 
has been proved unnecessary by the usage of other, much simpler, approaches, 
like protobufs.  Still, once you have compiler for it, it doesn't really matter 
all that much either way.  For crypto algorithms, you're unlikely to want or 
need the more obscure corners.

Do keep in mind, however, that having just a way to generate C from ASN.1 no 
longer covers much of the necessary space, as there are now many popular 
languages that are not C-like.  Some are easy to bind to C libraries (e.g., 
Python); others are much more difficult (e.g., Java).  For ease of use, you 
really want generators that produce code well integrated into the target 
language, not some barely functional glued-together thing.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] PRISM-Proofing and PRISM-Hardening

2013-10-01 Thread Jerry Leichter
On Sep 30, 2013, at 9:01 PM, d.nix d@comcast.net wrote:
 It's also worth pointing out that common browser ad blocking / script
 blocking / and site redirection add-on's and plugins (NoScript,
 AdBlockPlus, Ghostery, etc...) can interfere with the identification
 image display. My bank uses this sort of technology and it took me a
 while to identify exactly which plug-in was blocking the security
 image and then time to sort out an exception rule to not block it.
 
 The point being - end users *will* install plug-ins and extensions
 that may interfere with your verification tools.
It goes beyond that.  A company named Iovation sells a service that's supposed 
to check a fingerprint of your machine against a database so that someone like 
a bank can determine if your login is supposed to come from this machine.  (It 
also leaves behind a cookie, and may blacklist some addresses).  Anyway, the 
result is a connection to iesnare.something when you go to your bank.  I run 
Little Snitch on my Macs; it reports and asks for approval for unknown 
connections.  So I see alerts pop up when I go to my bank and similar sites.  
Sometimes I block the connection, sometimes I let it through.  (Actually, it 
doesn't seem to affect the site's behavior either way.)

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-01 Thread Jerry Leichter
On Oct 1, 2013, at 3:29 AM, Dirk-Willem van Gulik di...@webweaving.org wrote:
 ...I do note that in crypto (possibly driven by the perceived expense of too 
 many bits) we tend to very carefully observe the various bit lengths found in 
 800-78-3, 800-131A , etc etc. And rarely go much beyond it*.
 
 While in a lot of other fields - it is very common for 'run of the mill' 
 constructions; such as when calculating a floor, wooden support beam, a 
 joist, to take the various standards and liberally apply safety factors. A 
 factor 10 or 20x too strong is quite common *especially* in 'consumer' 
 constructions  
It's clear what 10x stronger than needed means for a support beam:  We're 
pretty good at modeling the forces on a beam and we know how strong beams of 
given sizes are.  We have *no* models for the strength of a crypto system 
that would allow one to meaningfully make such comparisons in general.  It's 
like asking that houses be constructed to survive intact even when hit by the 
Enterprise's tractor beam.

Oh, if you're talking brute force, sure, 129 bits takes twice as long as 128 
bits.  But even attacking a 128-bit cipher by brute force is way beyond 
anything we can even sketch today, and 256 bits is getting into if you could 
use the whole known universe as a computer it would take you more than the life 
of the universe territory.

If, on the other hand, you're talking analytic attacks, there's no way to know 
ahead of time what matters.  The ultimate example of this occurred back when 
brute force attacks against DES, at 56 bits, were clearly on the horizon - so 
people proposed throwing away the key schedule and making the key the full 
expanded schedule of 448 bits, or whatever it came to.  Many times more secure 
- except then differential cryptanalysis was (re-)discovered and it turned out 
that 448-bit DES was no stronger than 56-bit DES.

There are three places I can think of where the notion of adding a safety 
factor makes sense today; perhaps someone can add to the list, but I doubt it 
will grow significantly longer:

1.  Adding a bit to the key size when that key size is small enough;
2.  Using multiple encryption with different mechanisms and independent keys;
3.  Adding rounds to a round-based symmetric encryptor of the design we 
currently use pretty universally (multiple S and P transforms with some keying 
information mixed in per round, repeated for multiple rounds).  In a good 
cipher designed according to our best practices today, the best attacks we know 
of extend to some number of rounds and then just die - i.e., after some number 
of rounds they do no better than brute force.  Adding a few more beyond that 
makes sense.  But ... if you think adding many more beyond that makes sense, 
you're into tin-foil hat territory.  We understand what certain attacks look 
like and we understand how they (fail to) extend beyond some number of rounds - 
but the next attack down the pike, about which we have no theory, might not be 
sensitive to the number of rounds at all.

These arguments apply to some other primitives as well, particularly hash 
functions.  They *don't* apply to asymmetric cryptography, except perhaps for 
case 2 above - though it may not be so easy to apply.  For asymmetric crypto, 
the attacks are all algorithmic and mathematical in nature, and the game is 
different.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] encoding formats should not be committee'ized

2013-10-01 Thread Jerry Leichter
On Oct 1, 2013, at 12:28 PM, James A. Donald jam...@echeque.com wrote:
 Further, google is unhappy that too-clever-code gives too-clever 
 programmers too much power, and has prohibited its employees from ever 
 doing something like protobufs again.
 Got any documentation for this assertion?
 The google style guide prohibits too-clever code.  protobufs and gmock is 
 too-clever code.
To be blunt, you have no idea what you're talking about.

I worked at Google until a short time ago; Ben Laurie still does.  Both of us 
have written, submitted, and reviewed substantial amounts of code in the Google 
code base.  Do you really want to continue to argue with us about what the 
Google Style Guide is actually understood within Google?

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Passwords

2013-10-01 Thread Jerry Leichter
On Oct 1, 2013, at 4:13 PM, Peter Fairbrother wrote:
 And as to passwords being near end-of-life? Rubbish. Keep the password 
 database secure, give the user a username and only three password attempts, 
 and all your GPUs and ASIC farms are worth nothing.
Yup.

I've (half-)jokingly suggested that any business maintaining a database of 
usernames and passwords must, by law, include within that database, under a set 
of fixed fake user names using exactly the same format and algorithms as is 
used for all other user accounts, such things as (a) the business's bank 
account data, including account numbers and full authentication information; 
(b) similar information about the top executives in the company and everyone on 
the management chain who has any responsibility for the database.  Once that 
information is in the database, the business can protect it or not, as they 
wish.  Let them sink or swim along with their users.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] check-summed keys in secret ciphers?

2013-09-30 Thread Jerry Leichter
On Sep 30, 2013, at 4:16 AM, ianG i...@iang.org wrote:
 I'm not really understanding the need for checksums on keys.  I can sort of 
 see the battlefield requirement that comms equipment that is stolen can't 
 then be utilized in either a direct sense (listening in) or re-sold to some 
 other theater.
I'm *guessing* that this is what checksums are for, but I don't actually 
*know*.  (People used to wonder why NSA asked that DES keys be checksummed - 
the original IBM Lucifer algorithm used a full 64-bit key, while DES required 
parity bits on each byte.  On the one hand, this decreased the key size from 64 
to 56 bits; on the other, it turns out that under differential crypto attack, 
DES only provides about 56 bits of security anyway.  NSA, based on what we saw 
in the Clipper chip, seems to like running crypto algorithms tight:  Just as 
much effective security as the key size implies, exactly enough rounds to 
attain it, etc.  So *maybe* that was why they asked for 56-bit keys.  Or maybe 
they wanted to make brute force attacks easier for themselves.)

 But it still doesn't quite work.  It seems antithetical to NSA's obsession 
 with security at Suite A levels, if they are worried about the gear being 
 snatched, they shouldn't have secret algorithms in them at all.
This reminds me of the signature line someone used for years:  A boat in a 
harbor is safe, but that's not what boats are for.  In some cases you need to 
communicate securely with someone who's in harm's way, so any security device 
you give him is also in harm's way.  This is hardly a new problem.  Back in 
WW I, code books on ships had lead covers and anyone who had access to them had 
an obligation to see they were tossed overboard if the ship was about to fall 
into enemy hands.  Attackers tried very hard to get to the code book before it 
could be tossed.

Embassies need to be able to communicate at very high levels of security.  They 
are normally considered quite secure, but quiet attacks against them do occur.  
(There are some interesting stories of such things in Peter Wright's 
Spycatcher, which tells the story of his career in MI5.  If you haven't read it 
- get a copy right now.)  And of course people always look at the seizure of 
the US embassy in Iran.  I don't know if any crypto equipment was compromised, 
but it has been reported that the Iranians were able, by dint of a huge amount 
of manual labor, to piece back together shredded documents.  (This lead to an 
upgrade of shredders not just by the State Department but in the market at 
large, which came to demand cross-cut shredders, which cut the paper into 
longitudinal strips, but then cut across the strips to produce pieces no more 
than an inch or so long.  Those probably could be re-assembled using 
computerized techniques - originally developed to re-assemble old parchments 
like the Dead Sea Scrolls.)

Today, there are multiple layers of protection.  The equipment is designed to 
zero out any embedded keys if tampered with.  (This is common even in the 
commercial market for things like ATM's.)  A variety of techniques are used to 
make it hard to reverse-engineer the equipment.  (In fact, even FIPS 
certification of hardware requires some of these measures.)  At the extreme, 
operators of equipment are supposed to destroy it to prevent its capture.  
(There was a case a number of years back of a military plane that was forced by 
mechanical trouble to land in China.  A big question was how much of the 
equipment had been destroyed.  There are similar cases even today with ships, 
in which people on board take axes to the equipment.)

 Using checksums also doesn't make sense, as once the checksum algorithm is 
 recovered, the protection is dead.
The hardware is considered hard to break into, and one hopes it's usually 
destroyed.  The military, and apparently the NSA, believes in defense in depth. 
 If someone manages to get the checksum algorithm out, they probably have the 
crypto algorithm, too.

 I would have thought a HMAC approach would be better, but this then brings in 
 the need for a centralised key distro approach.
Why HMAC?  If you mean a keyed MAC ... it's not better.   But a true signature 
would mean that even completely breaking a captured device doesn't help you 
generate valid keys.  (Of course, you can modify the device - or a cloned copy 
- to skip the key signature check - hence my question as to whether one could 
create a crypto system that *inherently* had the properties that signed keys 
naively provide.)

  Ok, so that is typically how battlefield codes work -- one set for everyone 
 -- but I would have thought they'd have moved on from the delivery SPOF by 
 now.
In a hierarchical organization, centralized means of control are considered 
important.  There was an analysis of the (bad) cryptography in secure radios 
for police and fire departments, and it mainly relied on distribution of keys 
from a central source.

 ...It also seems a 

Re: [Cryptography] RSA equivalent key length/strength

2013-09-29 Thread Jerry Leichter
On Sep 28, 2013, at 3:06 PM, ianG wrote:
 Problem with the NSA is that its Jekyll and Hyde. There is the good side
 trying to improve security and the dark side trying to break it. Which
 side did the push for EC come from?
 What's in Suite A?  Will probably illuminate that question...
The actual algorithms are classified, and about all that's leaked about them, 
as far as I can determine in a quick search, is the names of some of them, and 
general properties of a subset of those - e.g., according to Wikipedia, BATON 
is a block cipher with a key length of 320 bits (160 of them checksum bits - 
I'd guess that this is an overt way for NSA to control who can use stolen 
equipment, as it will presumably refuse to operate at all with an invalid key). 
 It looks as if much of this kind of information comes from public descriptions 
of equipment sold to the government that implements these algorithms, though a 
bit of the information (in particular, the name BATON and its key and block 
sizes) has made it into published standards via algorithm specifiers.  cryptome 
has a few leaked documents as well - again, one showing BATON mentioned in 
Congressional testimony about Clipper.

Cryptographic challenge:  If you have a sealed, tamper-proof box that 
implements, say, BATON, you can easily have it refuse to work if the key 
presented doesn't checksum correctly.  In fact, you'd likely have it destroy 
itself if presented with too many invalid keys.  NSA has always been really big 
about using such sealed modules for their own algorithms.  (The FIPS specs were 
clearly drafted by people who think in these terms.  If you're looking at them 
while trying to get software certified, many of the provisions look very 
peculiar.  OK, no one expects your software to be potted in epoxy (opaque in 
the ultraviolet - or was it infrared?); but they do expect various kinds of 
isolation that just affect the blocks on a picture of your software's 
implementation; they have no meaningful effect on security, which unlike 
hardware can't enforce any boundaries between the blocks.)

Anyway, this approach obviously depends on the ability of the hardware to 
resist attacks.  Can one design an algorithm which is inherently secure against 
such attacks?  For example, can one design an algorithm that's strong when used 
with valid keys but either outright fails (e.g., produces indexes into 
something like S-boxes that are out of range) or is easily invertible if used 
with invalid keys (e.g., has a key schedule that with invalid keys produces all 
0's after a certain small number of rounds)?  You'd need something akin to 
asymmetric cryptography to prevent anyone from reverse-engineering the checksum 
algorithm from the encryption algorithm, but I know of no fundamental reason 
why that couldn't be done.
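
For concreteness, here's a minimal sketch of the naive, hardware-enforced 
version of the idea.  Everything in it - the 160/160 split, the use of 
HMAC-SHA1 as the checksum, the embedded device secret - is my own illustrative 
assumption, not a description of BATON:

    import hmac, hashlib

    DEVICE_SECRET = b"potted-inside-the-sealed-module"  # hypothetical

    def load_key(key320: bytes) -> bytes:
        # Hypothetical 320-bit key: 160 key bits followed by 160 checksum
        # bits.  The sealed box refuses to operate on a bad checksum.
        key, check = key320[:20], key320[20:]
        expect = hmac.new(DEVICE_SECRET, key, hashlib.sha1).digest()
        if not hmac.compare_digest(expect, check):
            raise ValueError("invalid key - refusing to operate")
        return key

Of course, this is exactly the property that evaporates once the box is 
reverse-engineered - which is why the question above asks for an *inherent* 
construction rather than a bolted-on check.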
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA recommends against use of its own products.

2013-09-29 Thread Jerry Leichter
On Sep 26, 2013, at 7:54 PM, Phillip Hallam-Baker wrote:
 ...[W]ho on earth thought DER encoding was necessary or anything other than 
 incredible stupidity?...
It's standard.  :-)

We've been through two rounds of standard data interchange representations:

1.  Network connections are slow, memory is limited and expensive, we can't 
afford any extra overhead.  Hence DER.
2.  Network connections are fast, memory is cheap, we don't have to worry about 
them - toss in every last feature anyone could possibly want.  Hence XML.

Starting from opposite extremes, committees of standards experts managed to 
produce results that are too complex and too difficult for anyone to get right 
- and which in cryptographic contexts manage to share the same problem of 
multiple representations that make signing such a joy.
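
To make the multiple-representations problem concrete, here's a small sketch 
in Python byte literals (DER was defined precisely to pick one canonical 
encoding out of BER's many; the problem resurfaces whenever a verifier 
accepts BER-ish input):

    # The ASN.1 INTEGER 5, encoded two ways.  BER permits both; DER
    # permits only the minimal short form.  A signature computed over
    # one encoding will not verify over the other.
    der = bytes([0x02, 0x01, 0x05])        # tag INTEGER, short-form length
    ber = bytes([0x02, 0x81, 0x01, 0x05])  # same value, long-form length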

BTW, the *idea* behind DER isn't inherently bad - but the way it ended up is 
another story.  For a comparison, look at the encodings Knuth came up with in 
the TeX world.  Both dvi and pk files are extremely compact binary 
representations - but correct encoders and decoders for them are plentiful.  
(And it's not as if the Internet world hasn't come up with complex, difficult 
encodings when the need arose - see IDNA.)

-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] The hypothetical random number generator backdoor

2013-09-25 Thread Jerry Leichter
On Sep 22, 2013, at 8:09 PM, Phillip Hallam-Baker hal...@gmail.com wrote:
 I was thinking about this and it occurred to me that it is fairly easy to get 
 a public SSL server to provide a client with a session key - just ask to 
 start a session.
 
 Which suggests that maybe the backdoor [for an NSA-spiked random number 
 generator] is of the form that ... you get a lot of nonces [maybe just one] 
 from the random number generator ... and that allows the [next one to be 
 predicted more easily or even the] seed to be unearthed.  One simple way [to 
 stop this] would be to encrypt the nonces from the RNG under a secret key 
 generated in some other fashion. 
 
 nonce = E (R, k)
 
 Or hashing the RNG output and XORing with it 
 
 nonce = R  XOR H(R)
You shifted from random value to nonce.  Given the severe effects on 
security of substituting a nonce - a value that is simply never repeated in a 
given cryptographic context; it may be predictable, even fixed - for a random 
value, one needs to be careful about the language.  (There's another layer as 
well, partly captured by unpredictable value but not really:  Is it a value 
that we must plan on the adversary learning at some point, even though he 
couldn't predict it up front, or must it remain secret?  The random values in 
PFS are only effective in providing forward security if they remain secret 
forever.)

Anyway, everything you are talking about here is *supposed* to be a random 
value.  Using E(R,k) is a slightly complicated way of using a standard PRNG:  
The output of a block cipher in counter mode.  Given (a) the security of the 
encryption under standard assumptions; (b) the secrecy and randomness of k; the 
result is a good PRNG.  (In fact, this is pretty much exactly one of the 
Indistinguishability assumptions.  There are subtly different forms of those 
around, but typically the randomness of input is irrelevant - these are 
semantic security assumptions so knowing something about the input can't help 
you.)  Putting R in there can't hurt, and if the way you got R really is random 
then even if k leaks or E turns out to be weak, you're still safe.  However ... 
where does k come from?  To be able to use any of the properties of E, k itself 
must be chosen at random.  If you use the same generator as way use to find R, 
it's not clear that this is much stronger than R itself.  If 
 you have some assured way of getting a random k - why not use it for R itself? 
 (This might be worth it if you can generate a k you believe in but only at a 
much lower rate than you can generate an R directly.  Then you can stretch k 
over a number of R values.  But I'd really think long and hard about what 
you're assuming about the various components.)

BTW, one thing you *must not* do is have k and the session key relate to each 
other in any simple way.

For hash and XOR ... no standard property of any hash function tells you 
anything about the properties of R XOR H(R).  Granted, for the hash functions 
we generally use, it probably has about the same properties; but it won't have 
any more than that.  (If you look at the structure of classic iterated hashes, 
the last thing H did was compute S = S + R(S), where S was the internal state 
and R was the round function.  Since R is usually invertible, this is the only 
step that actually makes the whole thing non-invertible.  Your more-or-less 
repetition of the same operation probably neither helps nor hinders.)

At least if we assume the standard properties, it's hard to get R from H(R) - 
but an attacker in a position to try a large but (to him) tractable number of 
guesses for R can readily check them all.  Using R XOR H(R) makes it no harder 
for him to try that brute force search.  I much prefer the encryption approach.
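
Here is the transform written out as a minimal Python sketch (SHA-256 standing 
in for H is my own choice):

    import hashlib

    def xor_hash(r: bytes) -> bytes:
        # R XOR H(R) is a fixed, public transform (assumes len(r) <= 32 so
        # one SHA-256 output covers R).  Anyone who can enumerate candidate
        # R values can apply it too, so the brute-force cost is unchanged.
        h = hashlib.sha256(r).digest()[:len(r)]
        return bytes(a ^ b for a, b in zip(r, h))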

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] The hypothetical random number generator backdoor

2013-09-25 Thread Jerry Leichter
On Sep 24, 2013, at 7:53 PM, Phillip Hallam-Baker wrote:
 There are three ways a RNG can fail
 
 1) Insufficient randomness in the input
 2) Losing randomness as a result of the random transformation
 3) Leaking bits through an intentional or unintentional side channel
 
 What I was concerned about in the above was (3).
 
 I prefer the hashing approaches. While it is possible that there is a matched 
 set of weaknesses, I find that implausible.
Then I'm mystified by your proposal.

If enough bits are leaked to make it possible to feed all possible values of 
the generated value R into whatever algorithm uses them (and, of course, 
recognize when you've hit the right one), then the extra cost of instead 
replacing each such value R with R XOR H(R) is trivial.  No fixed 
transformation can help here - it's no different from using an RNG with problem 
1 and whitening its output:  It now looks strong, but isn't.  (In fact, in 
terms of black box behavior to someone who doesn't know about the limited 
randomness/internal loss/side channel, these three weaknesses are functionally 
equivalent - and are subject to exactly the same attacks.)

The encryption approach - replacing R by E(k,R) - helps exactly because the key 
it uses is unknown to the attacker.  As I said before, this approach is fine, 
but:  Where does this magic random key come from; and given that you have a way 
to generate it, why not use that way to generate R directly rather than playing 
games with code you don't trust?

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] The hypothetical random number generator backdoor

2013-09-25 Thread Jerry Leichter
On Sep 24, 2013, at 6:11 PM, Gerardus Hendricks konfku...@riseup.net wrote:
 I'm assuming you're talking about DUAL_EC_DRBG. ... According to the 
 researchers from Microsoft, exploiting this would require
 at most 32 bytes of the PRNG output to reveal the internal state, thus
 revealing all random numbers generated with the same instantiation of the
 PRNG from that moment on.  Would I be correct to say that backdoors with such 
 properties cannot exist in PRNGs based on symmetric crypto or hashing 
 functions?
Well, that depends on how they are used and what you believe about the 
primitives.

If you use encryption in counter mode - E(k,counter), where k is random - then 
the assumption that the generated values are random is, as I remarked in 
another comment, pretty much equivalent to the indistinguishability assumptions 
that are commonly made about symmetric cryptographic algorithms.  If you don't 
think you have an appropriately secure symmetric cipher to work with ... it's 
not clear just what you're going to *do* with your random numbers anyway.

It's harder to know what to make of hashing approaches because it depends on 
the hash algorithm and what you believe about *it*.  For most uses, a 
cryptographic hash function just has to prevent first- and second-preimage 
attacks.  If that's all you are willing to assume your hash function provides, 
it's enough for the standard, intended uses of such hashes, but not enough to 
prove much more.  (For example, nothing in these two assumptions, on its own, 
says that the hash function can't always produce an output whose first bit is 
0.)  People generally do assume more, but you really have to be explicit.

 Either way, the question is how to stop this side channel attack. One
 simple way would be to encrypt the nonces from the RNG under a secret
 key
 generated in some other fashion.
 
 That seems silly. You are just shifting the responsibility from the PRNG
 to the cipher and its (random) key/seed. In any case, the result is just a
 new, needlessly complex PRNG, since you might just as well encrypt zeroes.
Indeed - though you could safely reuse the random key in counter mode up to 
some limit determined by the security guarantees of the cipher, and if the 
encryption function lives up to its guarantees, you'll still be secure.  
Stretching a short random key into a long effectively random sequence is 
exactly what symmetric encryption functions *do*!
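
As a minimal sketch of just encrypt zeroes (assuming the pyca/cryptography 
package; the function name is mine):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def ctr_prng(key: bytes, nbytes: int) -> bytes:
        # AES-CTR keyed with a random k, applied to zeroes, is just the raw
        # CTR keystream - a standard PRNG under the usual cipher assumptions.
        cipher = Cipher(algorithms.AES(key), modes.CTR(b"\x00" * 16))
        return cipher.encryptor().update(b"\x00" * nbytes)

    stream = ctr_prng(os.urandom(32), 1024)  # stretch 256 random bits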
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] What is Intel® Core™ vPro™ Technology Animation

2013-09-24 Thread Jerry Leichter
On Sep 21, 2013, at 10:05 PM, d.nix wrote:
 Hah hah hah. Uh, reading between the lines, color me *skeptical* that
 this is really what it claims to be, given the current understanding
 of things...
 
 http://www.intel.com/content/www/us/en/enterprise-security/what-is-vpro-technology-video.html
The question isn't whether it's what it claims to be.  It is that.  The 
question is whether it's *more* than it claims to be.

There are a whole bunch of things in recent Intel chips to provide 
manageability and security.  And there are cases where this is very valuable 
and necessary - e.g., if you have a large cluster or processors, it's good to 
be able to remotely configure them no matter what state they are in.  There are 
many similar examples.  If it's *your* hardware, *your* ability to control it, 
in detail, is a good thing.  (Yes, if you've been lent the hardware by your 
employer, it's the *employer* who's the owner, not you, and it's the *employer* 
who can do what he likes.  This has always been the case to a large degree.  If 
it makes you uncomfortable - buy your own machine, don't use your work machine 
for non-work things.)

The *theory* is that the owner can enable or disable these features, and has 
the keys to access them if enabled.  What we don't know is whether anyone else 
has a back-door key.  The phrase I always use to describe such situations is 
if there's a mode, there's a failure mode.  Such technology could have been 
present in previous generations of chips, completely invisibly - but it would 
have required significant effort on Intel's part with no real payback.  But 
once Intel is adding this stuff anyway ... well, it's only a small effort to 
provide a special additional back door access.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] What is Intel® Core™ vPro™ Technology Animation

2013-09-24 Thread Jerry Leichter
On Sep 22, 2013, at 7:56 PM, d.nix wrote:
 ...If for example, the paper regarding manipulating the RNG circuit by
 alternate chip doping is valid, then an adversary with deep pockets
 and vast resources might well be able remotely target specific systems
 on demand. Possibly even air gapped ones if this function is
 controllable via a 3G signal as I have read elsewhere.
 
 Or perhaps just outright reroute and tap information prior to
 encryption, or subtly corrupt things in other ways such that processes
 fail or leak data
You started off concerned about misuse of a remote override function that 
Intel deliberately puts on the chips - a valid concern - but now have wandered 
off into arbitrary chip modifications.  Those, too, are perhaps valid concerns 
- but they've been concerns for many years.  Nothing new here, except that the 
deeper we look, the more ways we find to hide attacks within the hardware.

That said, the doping paper, if I understood the suggestion correctly, 
discussed a way to modify individual chips, not whole runs of them.  
(Presumably you could modify whole runs by spiking the production process, but 
that would be difficult to hide:  Chip manufacturing is by its nature a very 
tightly controlled process, and an extra step isn't something that people would 
miss.  It would probably even show up in the very tightly watched yield 
statistics:  The extra step would delay wafers on the line, which would cause 
the yield to drop.  The beauty of the doping attack is that it's undetectable - 
at least right now; for every attack, a defense; for every defense, an attack.  
But exactly how one might make the *implementation* of the attack undetectable 
isn't at all clear.)

 H. Maybe time to pull my old 1996 SGI R10K and R4400 boxes out of
 storage. For a few *very* dedicated and air gapped tasks they might be
 a small measure of worthwhile trouble.
You'll be amazed at how slow they now seem

Still, it raises the question:  If you can't trust your microprocessor chips, 
what do you do?  One possible answer:  Build yourself a processor out of MSI 
chips.  We used to do that, not so long ago, and got respectable performance 
(if not, perhaps, on anything like today's scale).  An MSI chip doesn't have 
enough intrinsic computation to provide much of a hook for an attack.  Oh, 
sure, the hardware could be spiked - but to do *what*?  Any given type of MSI 
chip could go into many different points of many different circuit topologies, 
and won't see enough of the data to do much anyway.  There may be some 
interface issues:  This stuff might not be fast enough to deal with modern 
memory chips.  (How would you attack a memory chip?  Certainly possible if 
you're making a targeted attack - you can slip in a small processor in the 
design to do all kinds of nasty things.  But commercial off-the-shelf memory 
chips are built right up to the edge of what we can make, so you can't change 
all that much.)

Some stuff is probably just impossible with this level of technology.  I doubt 
you can build a Gig-E Ethernet interface without large-scale integration.  You 
can certainly do the original 10 Mb/sec - after all, people did!  I have no 
idea if you could get to 100 Mb/sec.

Do people still make bit-slice chips?  Are they at a low-enough level to not be 
a plausible attack vector?

You could certainly build a respectable mail server this way - though it's 
probably not doing 2048-bit RSA at a usable speed.

We've been talking about crypto (math) and coding (software).  Frankly, I, 
personally, have no need to worry about someone attacking my hardware, and 
that's probably true of most people.  But it's *not* true of everyone.  So 
thinking about how to build harder to attack hardware is probably worth the 
effort.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA recommends against use of its own products.

2013-09-22 Thread Jerry Leichter
On Sep 20, 2013, at 2:08 PM, Ray Dillinger wrote:
 More fuel for the fire...
 
 http://rt.com/usa/nsa-weak-cryptography-rsa-110/
 
 RSA today declared its own BSAFE toolkit and all versions of its
 Data Protection Manager insecure, recommending that all customers
 immediately discontinue use of these products
Wow.  You took as holy writ on a technical matter a pronouncement of the 
general press.  And not just of the general press, but of RT, a Russian 
publication with a clear pro-Russian anti-American bias.  (Not that they don't 
deliver useful information - I've read them in the past, along with Al Jazeera 
and, on the flip side, The Debka Files.  They are all valid and useful members 
of the press, but their points of view, AKA biases, are hardly a secret.)

The original article in Wired is still a bit sensationalistic, but at least it 
gets the facts right.

BSAFE is a group of related libraries of crypto primitives that RSA has sold 
for many years.  They generally implement everything in the relevant standards, 
and sometimes go beyond that and include stuff that seems to be widely accepted 
and used.  Naturally, they also use the same libraries in some packaged 
products they sell.  

As far as I know, the BSAFE implementations have been reasonably well thought 
of over the years.  In my experience, they are conservatively written - they 
won't be the fastest possible implementations, but they'll hold their own, and 
they probably are relatively bug-free.  A big sales advantage is that they come 
with FIPS certifications.  For whatever you think those are actually worth, 
they are required for all kinds of purposes, especially if you sell products to 
the government.

I remember looking at BSAFE for use in a product I worked on many years ago.  
We ended up deciding it was too expensive and used a combination of open source 
code and some stuff we wrote ourselves.  The company was eventually acquired by 
EMC (which also owns RSA), and I suspect our code was eventually replaced with 
BSAFE code.

Since Dual EC DRBG was in the NIST standards, BSAFE provided an implementation 
- along with five other random number generators.  But they made Dual EC DRBG 
the default, for reasons they haven't really explained beyond in 2004 [when 
these implementations were first done] elliptic curve algorithms were becoming 
the rage and were considered to have advantages over other algorithms

I'd guess that no one remembers now, six or more years later, exactly how Dual 
EC DRBG came to be the default.  We now suspect that a certain government 
agency probably worked to tip the scales.  Whether it was directly through some 
contacts at RSA, or indirectly - big government client says they want to buy 
the product but it must be safe by default, and Dual EC DRBG is what our 
security guys say is safe - who knows; but the effect is the same.  (If there 
are any other default choices in BSAFE, they would be worth a close look.  
Influencing choices at this level would have huge leverage for a non-existent 
agency)

Anyway, RSA has *not* recommended that people stop using BSAFE.  They've 
recommended that they stop using *Dual EC DRBG*, and instead select one of the 
other algorithms.  For their own embedded products, RSA will have to do the 
work.  Existing customers most likely will have to change their source code and 
ship a new release - few are likely to make the RNG externally controllable.  
Presumably RSA will also issue new versions of the BSAFE products in which the 
default is different.  But it'll take years to clean all this stuff up.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Some (limited) info about Apple A7 security for fingerprints, keychains

2013-09-18 Thread Jerry Leichter
A level beyond marketing talk, but nowhere near technical detail.  Still a bit 
more than has been previously described.  Links to some perhap
http://www.quora.com/Apple-Secure-Enclave/What-is-Apple%E2%80%99s-new-Secure-Enclave-and-why-is-it-important

There's a link to an ARM site with a reasonable overview of the ARM TEE 
(Trusted Execution Environment) - which Apple's Secure Enclave may (or may 
not) be based on.  
http://www.arm.com/products/processors/technologies/trustzone.php

Referring back to a point Perry made a while back:  TEE mode runs its own 
specialized secure OS.  That would seem to be an ideal target for 
verification
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] paranoid cryptoplumbing is a probably not defending the weakest point

2013-09-17 Thread Jerry Leichter
On Sep 17, 2013, at 11:54 AM, Perry E. Metzger pe...@piermont.com wrote:
 I'd like to note quite strongly that (with certain exceptions like
 RC4) the odds of wholesale failures in ciphers seem rather small
 compared to the odds of systems problems like bad random number
 generators, sabotaged accelerator hardware, stolen keys, etc., and a
 smart attacker goes for the points of weakness
Actually, I think there is a potentially interesting issue here:  RC4 is faster 
and requires significantly fewer resources than modern block ciphers.  As a 
result, people would really like to use it - and actually they *will* continue 
to use it even in the face of the known attacks (which, *so far*, are hardly 
fatal except in specialized settings).

So ... is there some simple way of combining RC4 with *something* that 
shores up the security while retaining the speed?  How about two separate 
RC4's (with independent keys) XOR'ed together?  That would still be 
considerably faster than AES.
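
As a sketch of the two-stream idea (textbook RC4 written out so the 
construction is explicit; per the caveat below, this is speculation, not a 
recommendation):

    def rc4(key: bytes):
        # Standard RC4: key schedule, then the output generator.
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        i = j = 0
        while True:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            yield S[(S[i] + S[j]) % 256]

    def rc4_x2(key1: bytes, key2: bytes):
        # Two independently keyed RC4 streams, XORed byte by byte.
        for a, b in zip(rc4(key1), rc4(key2)):
            yield a ^ b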

There appear to be two general classes of known attacks:

1.  The initial key setup doesn't produce enough randomness;
2.  There are long-term biases in the output bytes.

The first of these can be eliminated by using AES to generate values to 
scramble the internal state.  The second can be hidden by doing post-whitening, 
XOR'ing in a byte from AES in (say) counter mode.  If you use a given byte 64 
times, then use the next byte of the output, you pay 1/64 the cost of actually 
using AES in counter mode, but any bias in the output would have to play out 
over a 64-byte segment.  (Actually, if you use ideas from other stream ciphers, 
changing the whitening every 64 bytes probably isn't right - you want the 
attacker to have to guess where the changeovers take place.  There are many 
ways to do that.)
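
The post-whitening idea, again as a rough sketch (the fixed 64-byte span and 
the indexing scheme are just the simplest variant described above):

    def whiten(rc4_stream, aes_ctr_bytes):
        # XOR one AES-CTR byte into each 64-byte span of RC4 output, so the
        # AES cost is ~1/64 of encrypting everything in counter mode.
        # aes_ctr_bytes must be at least 1/64 the length of the RC4 stream.
        for i, r in enumerate(rc4_stream):
            yield r ^ aes_ctr_bytes[i // 64]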

Of course, don't take any of the above and go build code.  It's just 
speculation and likely has serious problems.  I toss it out to illustrate the 
idea.  Whether it's actually worthwhile ... I doubt it, but it's worth thinking 
about.

-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-17 Thread Jerry Leichter
On Sep 17, 2013, at 5:49 AM, ianG i...@iang.org wrote:
 
 I wish there was a term for this sort of design in encryption systems
 beyond just defense in depth. AFAICT there is not such a term.
 
 How about the Failsafe Principle? ;)
 
 A good question.  In my work, I've generally modelled it such that the entire 
 system still works if one algorithm fails totally.  But I don't have a name 
 for that approach.
How about the X Property (Trust No One - with a different slant on One)?

-- Jerry :-)

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-17 Thread Jerry Leichter
On Sep 17, 2013, at 6:21 PM, John Kelsey crypto@gmail.com wrote:
 I confess I'm not sure what the current state of research is on MAC
 then Encrypt vs. Encrypt then MAC -- you may want to check on that.
 
 Encrypt then MAC has a couple of big advantages centering around the idea 
 that you don't have to worry about reaction attacks, where I send you a 
 possibly malformed ciphertext and your response (error message, acceptance, 
 or even time differences in when you send an error message) tells me 
 something about your secret internal state.  
On a purely practical level, to reject a damaged message, with decrypt-then-MAC 
(ordering things on the receiver's side...) I have to pay the cost of a 
decryption plus a MAC computation; with MAC-then-decrypt, I only pay the cost 
of the MAC.  On top of this, decryption is often more expensive than MAC 
computation.  So decrypt-then-MAC makes DOS attacks easier.
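
A minimal receiver-side sketch of the encrypt-then-MAC ordering (HMAC-SHA256 
and the decrypt() call are stand-ins of my own choosing):

    import hmac, hashlib

    def receive(ciphertext: bytes, tag: bytes, mac_key: bytes, dec_key: bytes):
        # Verify first: a forged or damaged message costs one MAC
        # computation and no decryption at all.
        expect = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
        if not hmac.compare_digest(expect, tag):
            raise ValueError("bad MAC - reject without decrypting")
        return decrypt(dec_key, ciphertext)  # hypothetical decrypt()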

One could also imagine side-channel attacks triggered by chosen ciphertext.  
Decrypt-then-MAC allows an attacker to trigger them; MAC-then-decrypt does not. 
(Attacks on MAC's seem somewhat less likely to be data dependent, but who 
knows for sure.  In any case, even if you had such an attack, it would get you 
the authentication key - and only at that point would you be able to *start* 
your attack on the decryption key.)

MAC'ing the actual data always seemed more logical to me, but once you look 
at the actual situation, it no longer seems like the right thing to do.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-16 Thread Jerry Leichter
On Sep 16, 2013, at 6:20 PM, Bill Frantz wrote:
 Joux's paper Multicollisions in iterated hash functions 
 http://www.iacr.org/archive/crypto2004/31520306/multicollisions.ps
 shows that finding ... r-tuples of messages that all hash to the same value 
 is not much harder than finding ... pairs of messages.  This has some 
 surprising implications.  In particular, Joux uses it to show that, if F(X) 
 and G(X) are cryptographic hash functions, then H(X) = F(X) || G(X) (|| is 
 concatenation) is about as hard as the harder of F and G - but no harder.
 This kind of result is why us crypto plumbers should always consult real 
 cryptographers. :-)
Yes, this is the kind of thing that makes crypto fun.

The feeling these days among those who do such work is that unless you're going 
to use a specialized combined encryption and authentication mode, you might as 
well use counter mode (with, of course, required authentication).  For the 
encryption part, counter mode with multiple ciphers and independent keys has 
the nice property that it's trivially as strong as the strongest of the 
constituents.  (Proof:  If all the ciphers except one are cracked, the attacker 
is left with a known-plaintext attack against the remaining one.  The need for 
independent keys is clear since if I use two copies of the same cipher with the 
same key, I end up sending plaintext!  You'd need some strong independence 
statements about the ciphers in the set if you want to reuse keys.  Deriving 
them from a common key with a one-way hash function is probably safe in 
practice, though you'd now need some strong statements about the hash function 
to get any theoretical result.  Why rely on such things when you 
 don't need to?)
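
A sketch of that construction (pyca/cryptography again; AES and ChaCha20 are 
my choice of constituents, with the independent keys the argument requires, 
and with the required authentication omitted):

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def dual_ctr(msg: bytes, k_aes: bytes, k_chacha: bytes,
                 nonce16: bytes) -> bytes:
        # XOR of two independently keyed keystreams: trivially as strong as
        # the stronger constituent.  Never reuse nonce16 under the same keys.
        z = b"\x00" * len(msg)
        ks1 = Cipher(algorithms.AES(k_aes),
                     modes.CTR(nonce16)).encryptor().update(z)
        ks2 = Cipher(algorithms.ChaCha20(k_chacha, nonce16),
                     mode=None).encryptor().update(z)
        return bytes(m ^ a ^ b for m, a, b in zip(msg, ks1, ks2))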

It's not immediately clear to me what the right procedure for multiple 
authentication is.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] real random numbers

2013-09-15 Thread Jerry Leichter
On Sep 14, 2013, at 5:38 PM, Kent Borg wrote:
 Things like clock skew are usually nothing but squish ... not reliably 
 predictable, but also not reliably unpredictable. I'm not interested in 
 squish, and I'm not interested in speculation about things that might be 
 random. 
 
 I see theoretical the enemy of the good here.
 
 The term squish is entertaining, but be careful that once you paint away 
 with your broad brush that you don't dismiss engineering realities that 
 matter.

 And once we have built such vaguely secure systems, why reject entropy 
 sources within those systems, merely because they you think they look like 
 squish?  If there is a random component, why toss it out?  You seem to 
 respect using hashing to condition and stretch entropy--though why any 
 existing hash shouldn't also fall to your squish generalization, I don't 
 know.
You've completely missed what Denker was getting at with squish.  Squish 
never applies to a fully characterized, deterministic component like a hash.  
Squish is an unknown unknown:  Data that you don't understand, so you think 
it might be random, but you really can't be sure.  Consider the example he 
responded to, that comparing the clocks on the CPU and on the sound card 
should be usable as a source of randomness.  If you dig in to what should be 
usable means here, it comes down to:  Both clocks show some degree of random 
variation, and the random variation is uncorrelated.  That might be true - or 
it might not:  Perhaps there's some path you haven't thought of through the 
power supply that tends to synchronize the two.  Lack of imagination on the 
analyst's part does not equate to lack of correlation (or other failure modes) 
on the system's part!  (In fact, the world is full of unexpected couplings 
between nominally independent events.  I've debugged and fought failures in 
systems built on the unsupported assumption that things will 
smooth out on average.  They are always unexpected, and can be difficult to 
find after the fact.  And ... people don't seem to learn the lesson:  The next 
system makes the same bad assumptions.)

As Denker said:  Adding squish as a source of confusion in a well implemented 
mixer is at worst harmless - if you want to do it, go ahead.  But adding it as 
a source of an entropy estimate is wrong.  Either you have some way of 
estimating the entropy based on real physical modeling, or you're just making 
things up - and just making things up is not the way to build a secure system.
-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Thoughts on hardware randomness sources

2013-09-14 Thread Jerry Leichter
On Sep 12, 2013, at 11:06 PM, Marcus D. Leech wrote:
 There are a class of hyper-cheap USB audio dongles with very uncomplicated 
 mixer models.  A small flotilla of those might get you some fault-tolerance.
  My main thought on such things relates to servers, where power consumption 
 isn't really much of an issue
I'm not sure what servers you're talking about here.

If by server you mean one of those things in a rack at Amazon or Google or 
Rackspace - power consumption, and its consequence, cooling - is *the* major 
issue these days.  Also, the servers used in such data centers don't have 
multiple free USB inputs - they may not have any.

If by server you mean some quite low-power box in someone's home ... power is 
again an issue.  People want these things small, fan-free, and dead reliable.  
And they are increasingly aware of the electric bills always-on devices produce.

About the only server for which power is not an issue is one of those 
extra-large desktops that small businesses use.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] About those fingerprints ...

2013-09-13 Thread Jerry Leichter
[Perry - this is likely getting too far off-topic, but I've included the list 
just in case you feel otherwise.  -- J]

On Sep 12, 2013, at 12:53 AM, Andrew W. Donoho a...@ddg.com wrote:

 
 On Sep 11, 2013, at 12:13 , Jerry Leichter leich...@lrw.com wrote:
 
 On Sep 11, 2013, at 9:16 AM, Andrew W. Donoho a...@ddg.com wrote:
 Yesterday, Apple made the bold, unaudited claim that it will never save the 
 fingerprint data outside of the A7 chip.
 By announcing it publicly, they put themselves on the line for lawsuits and 
 regulatory actions all over the world if they've lied.
 
 Realistically, what would you audit?  All the hardware?  All the software, 
 including all subsequent versions?
 Jerry,
 
 
 
   First I would audit that their open source security libraries, which 
 every app has to use, are the same as I can compile from sources.
Well ... there's an interesting point here.  If it's an open source library - 
what stops you from auditing it today?

On OSX, at least, if I were to worry about this, I'd just replace the libraries 
with my own compiled versions.  Apple has a long history of being really slow 
about updating the versions of open source software they use.  Things have 
gotten a bit better, but often through an odd path:  Apple gave up on 
maintaining their own release of Java and dumped the responsibility back on 
Oracle (who've been doing a pretty miserable job on the security front).  For 
the last couple of years, Apple distributed an X client which was always behind 
the times - and there was an open source effort, XQuartz, which provided more 
up to date versions.  Recently, Apple decided to have people pull down the 
XQuartz version.  (For both Java and X, they make the process very 
straightforward for users - but the important point is that they're simply 
giving you access to someone else's code.)  They've gone this way with a 
non-open source component as well, of course - Flash.  They never built it 
themselves;
  now they don't even give you help in downloading it.

But ... suppose you replaced, say, OpenSSL with your own audited copy.  There 
are plenty of ways for code that *uses* it to leak information, or just misuse 
the library.  On top of that, we're mainly talking user-level code.  Much of 
the privileged code is closed source.

So ... I'm not sure where auditing gets you.  If you're really concerned about 
security, you need to use trusted code throughout.  A perfectly reasonable 
choice, perhaps - though having regularly used both Linux and MacOS, I'd much 
rather use MacOS (and give up on some level of trustworthiness) for many kinds 
of things.  (There are other things - mainly having to do with development and 
data analysis - that I'd rather do on Linux.)

 Second, the keychain on iOS devices is entirely too mysterious for this iOS 
 developer. This needs some public light shone on it. What exactly is the 
 relationship between the software stack and the ARM TPM-equivalent.
I agree with you.  I really wish they'd make (much) more information available 
about this.  But none of this is open source.  I've seen some analysis done by 
people who seem to know their stuff, and the way keys are *currently* kept on 
iOS devices is pretty good:  They are encrypted using device-specific data 
that's hard to get off the device, and the decrypted versions are destroyed 
when the device locks.  But there are inherent limits on what can be done here: 
 If you want the device to keep receiving mail when it's locked, you have to 
keep the keys used to connect to mail servers around even when it's locked.

 Third, in iOS 7, I can make a single line change and start syncing my 
 customer's keychain data through iCloud. At WWDC this year, Apple did not 
 disclose how they keep these keys secure. (As it is a busy conference, I may 
 have missed it.)
Keychain syncing was part of the old .Mac stuff, and in that case it was clear: 
They simply synced the keychain files.  As I said, there is some information 
out there about how those are secured, and as best I've been able to determine, 
they are OK.  I wish more information was available.

It's not clear whether iCloud will do something different.  Apparently Apple 
removed keychain syncing from the final pre-release version of iOS 7 - it's now 
marked as coming soon.  The suspicion is that in the post-Snowden era, they 
decided they need to do something more to get people to trust it.  (Or, I 
suppose, they may actually have found a bug.)  We'll see what happens when 
they finally turn it on.

 Fourth, does Apple everywhere use the same crypto libraries as developers are 
 required to use?
Developers aren't *required* to use any particular API's.  Could there be some 
additional crypto libraries that they've kept private?  There's no way to know, 
but it's not clear why they would bother.  The issue is presumably that NSA 
might force them to include a back door in the user-visible libraries - but 
what would Apple gain, beyond

Re: [Cryptography] About those fingerprints ...

2013-09-11 Thread Jerry Leichter
On Sep 11, 2013, at 9:16 AM, Andrew W. Donoho a...@ddg.com wrote:
 Yesterday, Apple made the bold, unaudited claim that it will never save the 
 fingerprint data outside of the A7 chip.
By announcing it publicly, they put themselves on the line for lawsuits and 
regulatory actions all over the world if they've lied.

Realistically, what would you audit?  All the hardware?  All the software, 
including all subsequent versions?

This is about as strong an assurance as you could get from anything short of 
hardware and software you build yourself from very simple parts.

 Why should we trust Cook & Co.? They are subject to the laws of the land and 
 will properly respond to lawful subpoenas. What are they doing to ensure the 
 user's confidence that they cannot spread my fingerprint data to the cloud?
Apparently not enough to give *you* confidence.  But concerned as I am with 
recent revelations, it doesn't particularly concern *me* nearly as much as many 
other attack modalities.

 These questions also apply to things like keychain storage. Who has audited 
 in a public fashion that Apple actually keeps keychains secure?
There's been some very limited auditing by outsiders.  I found one paper a 
while back that teased apart the format of the file and figured out how the 
encryption worked.  It appeared to be secure (if perhaps overly complicated), 
but damned if I can find the paper again.  (Searching these days turns up tons 
of articles that center on the fact that when a keychain is unlocked, you 
can read its contents.  The vulnerability issues are subtle, but they only 
apply at all if you're on the same machine as the unlocked keychain.)

It would be a nice thing if Apple described the algorithms used to encrypt 
keychains.  Perhaps this is the time to push them - and others - to be much 
more open about their security technologies.  Apple seems to be making a point 
of *selling* on the basis of those technologies, so may be particularly 
willing/vulnerable on this front.

 How do we know whether Apple has perverted under secret court order the 
 common crypto and other libraries in every phone and iPad?...
You don't.

Then again, you don't know if Intel has been forced to include something in its 
chips that allows someone with appropriate knowledge to download and run 
privileged code on your machine.  All modern Intel server chips include a 
special management mode exactly to allow remote control over servers in a large 
datacenter, regardless of how screwed up the software, including the OS 
software, on them gets.  Who's to say there isn't some other way to get into 
that code?

Who you choose to trust and how much is ultimately your call.  There are no 
answers to your questions.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] About those fingerprints ...

2013-09-11 Thread Jerry Leichter
On Sep 11, 2013, at 1:44 PM, Tim Dierks t...@dierks.org wrote:
 When it comes to litigation or actual examination, it's been demonstrated 
 again and again that people can hide behind their own definitions of terms 
 that you thought were self-evident. For example, the NSA's definition of 
 target, collect, etc., which fly in the face of common understanding, and 
 exploit the loopholes in English discourse. People can lie to you without 
 actually uttering a demonstrable falsehood or exposing themselves to 
 liability, unless you have the ability to cross-examine the assertions.
I wouldn't take it quite that far.  Government agencies always claim broader 
leeway than is granted to private actors - and even for NSA and friends, 
exactly how the courts will react to that language parsing isn't clear.  Even 
their pet FISA court, we now know from declassified documents, has angrily 
rejected some of this game playing - a second report of this is in today's New 
York Times.  Not that it did much good in terms of changing behavior.  And 
Congress, of course, can interpret stuff as it likes.

The standard for civil lawsuits and even more so for regulatory actions is 
quite a bit lower.  If Apple says "no fingerprint information leaves the 
phone", it's going to be interpreted that way.  Another article in today's 
Times reports that Google has had its argument that WiFi is radio, hence not 
subject to wiretap laws, soundly rejected by an appeals court.

 I don't have a precise cite for the Apple claim...
http://www.youtube.com/watch?v=TJkmc8-eyvE, starting at about 2:20, is one 
statement.  I won't try to transcribe it here, but short of a technical paper, 
it's about as strong and direct a statement as you're going to get.  People have 
a queasy feeling about fingerprint recognition.  If Apple wants to get them to 
use it, they have to reassure them.  It's a basic result of game theory that 
the only way you can get people to believe you is to put yourself in harm's way 
if you lie.

 ...[I]s the data indirectly accessible?
What counts as indirect access?  In one sense, the answer to this question is 
yes:  You can use your fingerprint to authorize purchases - at least from the 
iTunes/App store.  It's completely unclear - and Apple really should explain - 
how the authentication flows work.  Since they promise to keep your fingerprint 
information on the phone, they can't be sending it off to their store servers.  
On the other hand, if it's a simple "go ahead and authorize a charge for user 
U" message, what's to prevent someone from faking such a message?  (In other 
words:  If the decision is made on the phone, how do you authenticate the 
phone?)

 ...Unless you can cross-examine the assertions with some kind of penalty for 
 dissembling, you can't be sure that an assertion means what you think or hope 
 it means, regardless of how straightforward and direct it sounds.
Courts are not nearly as willing to let those before them hide behind 
complicated and unusual word constructions as you think - especially not when 
dealing with consumers.  It is, in fact, a standard rule of contract law that 
any ambiguity is to be interpreted contrary to the interests of the drafter.

None of this is specific to Apple.  Commerce depends on trust, enforced 
ultimately by courts of law.  The ability to govern depends on the consent of 
the governed - yes, even in dictatorships; they survive as long as enough of 
the population grudgingly accedes, and get toppled when they lose too much of 
the consent.  And consent ultimately also requires trust.

The games the intelligence community have been playing are extremely corrosive 
to that trust, which is why they are so dangerous.  But we have to get beyond 
that and not see attackers behind every door.  The guys who need to be stopped 
run the NSA and friends, not Apple or Facebook or Google - or even the Telco's.

-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Availability of plaintext/ciphertext pairs (was Re: In the face of cooperative end-points, PFS doesn't help)

2013-09-11 Thread Jerry Leichter
On Sep 11, 2013, at 5:57 PM, Nemo n...@self-evident.org wrote:
 The older literature requires that the IV be unpredictable (an
 ill-defined term), but in fact if you want any kind of security proofs
 for CBC, it must actually be random.
 
 Wrong, according to the Rogaway paper you cited.  Pull up
 http://www.cs.ucdavis.edu/~rogaway/papers/modes.pdf and read the last
 paragraph of section I.6 (pages 20-21).  Excerpt:
 
We concur, without trying to formally show theorems, that all of the
SP 800-38A modes that are secure as probabilistic encryption schemes
-- namely, CBC, CFB, and OFB -- will remain secure if the IV is not
perfectly random, but only unguessable.
The real problem is that "unpredictable" has no definition.  E(0) with the 
session key is unpredictable to an attacker, but as the paper shows, it 
cannot safely be used for the IV.  Rogaway specifically says that if what you 
mean by "unpredictable" is "random but biased" (very informally), then you lose 
some security in proportion to the degree of bias:  "A quantitative statement 
of such results would “give up” in the ind$ advantage an amount proportional to 
the ε(q, t) value defined above."

 I do not think we will find too much guidance from the academic side on 
 [secret IV's], because they tend to assume a can opener... Er, I mean a 
 secure block cipher... And given that assumption, all of the usual modes 
 are provably secure with cleartext IVs.
 
 Incorrect on multiple levels.  See the paper I mentioned in my
 response to Perry.
 
 If you are going to call me wrong in a public forum, please have the
 courtesy to be specific. My statement was, in fact, correct in every
 detail.
 
 To rephrase:
I actually have no problem with your rephrased statement.  My concern was the 
apparently flippant dismissal of all academic work as "assuming a can 
opener".  Yes, there's some like that.  There's also some that shows how given 
weaker assumptions you can create a provably secure block cipher (though in 
practice it's not clear to me that any real block cipher is really created that 
way).  Beyond that, "provably secure" is slippery - there are many, many 
notions of security.  Rogaway's paper gives a particular definition for 
"secure" and does indeed show that if you have a random IV, CBC attains it.  
But he also points out that that's a very weak definition of "secure" - but 
without authentication, you can't get any more.

Do I wish we had a way to prove something secure without assumptions beyond 
basic mathematics?  Absolutely; everyone would love to see that.  But we have 
no idea how to do it.  All we can do is follow the traditional path of 
mathematics and (a) make the assumptions as clear, simple, limited, and 
obvious as possible; (b) show what happens as the assumptions are relaxed or, 
sometimes, strengthened.  That's what you find in the good cryptographic work.  
(BTW, if you think I'm defending my own work here - far from it.  I left 
academia and theoretical work behind a very long time ago - I've been a 
nuts-and-bolts systems guy for decades.)

On the matter of a secret IV:  It can't actually help much.  Any suffix of a 
CBC encryption (treated as a sequence of blocks, not bytes) is itself a valid 
CBC encryption.  Considered on its own, it has a secret IV; considered in the 
context of the immediately preceding block, it has a non-secret IV.  So a 
secret IV *at most* protects the very first block of the message.  I doubt 
anyone has tried to formalize just how much it might help simply because it's 
so small. 
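
A minimal sketch of that suffix property (AES-CBC via PyCryptodome, an assumed 
choice of implementation - any CBC code shows the same thing):

    from Crypto.Cipher import AES
    from Crypto.Random import get_random_bytes

    key = get_random_bytes(16)
    iv = get_random_bytes(16)
    pt = get_random_bytes(64)                  # four 16-byte blocks

    ct = AES.new(key, AES.MODE_CBC, iv).encrypt(pt)

    # Treat ciphertext block 1 as the "IV" and decrypt only the suffix.
    suffix = AES.new(key, AES.MODE_CBC, ct[:16]).decrypt(ct[16:])
    assert suffix == pt[16:]                   # a valid CBC decryption on its own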

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Killing two IV related birds with one stone

2013-09-11 Thread Jerry Leichter
On Sep 11, 2013, at 6:51 PM, Perry E. Metzger wrote:
 It occurs to me that specifying that IVs for CBC mode in protocols
 like IPsec, TLS, etc. be generated by using a block cipher in counter
 mode and that the IVs be implicit rather than transmitted kills two
 birds with one stone.
Of course, now you're going to need to agree on two keys - one for the main 
cipher, one for the IV-generating cipher.  Seems like a great deal of trouble to 
go to to rescue a mode with few advantages.  (Perry and I exchanged some 
private mail on this subject.  He claims CBC has an advantage over CTR because 
CTR allows you to deterministically modify the plaintext under the 
encryption.  I used to favor CBC for that reason as well, though in fact you 
can modify the text anyway by replaying a previous block - it's just harder to 
control.  I've become convinced, though, that CBC without authentication is way 
too insecure to use.  Once you require authentication, CBC has no advantages I 
can see over CTR.)

But if you insist on CBC ... it's not clear to me whether the attack in 
Rogaway's paper goes through once authentication is added.  If it doesn't, E(0) 
does just fine (and of course doesn't have to be transmitted).

 ...Note that if you still transmit the IVs, a misimplemented client
 could still interoperate with a malicious counterparty that did not
 use the enforced method for IV calculation. If you don't transmit
 the IVs at all but calculate them, the system will not interoperate if
 the implicit IVs aren't calculated the same way by both sides, thus
 ensuring that the covert channel is closed.
Ah, but where did the session and IV-generating keys come from?  The same 
random generator you now don't trust to directly give you an IV?

-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-10 Thread Jerry Leichter
On Sep 9, 2013, at 9:17 AM, Kent Borg wrote:
 Which brings to light the question:  Just *why* have so many random 
 number generators proved to be so weak.
 
 Your three cases left off an important one: Not bothering to seed the PRNG at 
 all.  I think the Java/Android cryptographic (!) library bug that just came 
 up was an instance of that.
 
 I think the root of the problem is that programs are written, and bugs 
 squashed, until the program works. Maybe throw some additional testing at it 
 if we are being thorough, but then business pressures and boredom says ship 
 it.
 
 That won't catch a PRNG that wasn't seeded, nor a hashed password that wasn't 
 salted, the unprotected URL, the SQL injection path, buffer overflow, etc.
Good observations, but I think you're being too pessimistic.  All the examples 
you give *could* be tested - but not by ignorant black box testing - testing 
that ignores not just what's inside the box, but the actual requirements on 
what the box is supposed to produce.  A non-seeded PRNG, and even one seeded 
with a very small amount of entropy, will be caught by a test that runs 
multiple instances of the PRNG from the system starting state and ensures that 
the ensemble of first outputs (and, for good measure, the first *couple* of 
outputs) has the right statistics.  Similarly, a test that inserts the same 
password into multiple instances of the same system in the same state can check 
that the hashed versions have the right statistics.  No, these can't catch 
deliberate attack code which produces random-looking values that the attacker 
can predict - no test can.  But it will catch a broad class of common errors.
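
As a concrete illustration, here is a minimal sketch of that ensemble test.  
The make_prng() constructor is hypothetical - it stands in for "instantiate 
the generator exactly as the system would at startup":

    from collections import Counter

    def ensemble_first_output_test(make_prng, runs=10000, nbytes=8):
        # Instantiate the generator many times "from boot" and collect
        # each instance's very first output.
        firsts = [make_prng().read(nbytes) for _ in range(runs)]
        value, hits = Counter(firsts).most_common(1)[0]
        # A healthy 64-bit first output should never repeat at this
        # sample size; an unseeded or low-entropy generator collapses
        # to a few values and fails immediately.
        return hits == 1

One could add distributional tests on the collected outputs, but even this 
crude collision check catches the unseeded-generator class of bugs.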

The others aren't cryptographic issues and require different approaches.

The fact that there are bad testing practices - and that those bad practices 
are used all too often - doesn't mean there aren't good practices, and that 
they could not be applied.  Where the testing is bad because of ignorance of 
what is actually important and how to test for it, learning from the failures 
of the past is the way forward - which was exactly the point of the PRNG 
failures classification.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] The One True Cipher Suite

2013-09-10 Thread Jerry Leichter
On Sep 9, 2013, at 12:00 PM, Phillip Hallam-Baker wrote:
 Steve Bellovin has made the same argument and I agree with it. Proliferation 
 of cipher suites is not helpful. 
 
 The point I make is that adding a strong cipher does not make you more 
 secure. Only removing the option of using weak ciphers makes you more secure.
I'm not so sure I agree.  You have to consider the monoculture problem, 
combined with the threat you are defending against.

The large burst of discussion on this list was set off by Perry's request for 
ways to protect against the kinds of broad-scale, gather-everything attacks 
that Snowden has told us the NSA is doing.  So consider things from the side of 
someone attempting to mount this kind of attack:

1.  If everyone uses the same cipher, the attacker need only attack that one 
cipher.
2.  If there are thousands of ciphers in use, the attacker needs to attack some 
large fraction of them.

As a defender, if I go route 1, I'd better be really, really, really sure that 
my cipher won't fall to any attacks over its operational lifetime - which, if 
it's really universal, will extend many years *even beyond a point where a 
weakness is found*.

On the other hand, even if most of the ciphers in my suite are only moderately 
strong, the chance of any particular one of them having been compromised is 
lower.

This is an *ensemble* argument, not an *individual* argument.  If I'm facing an 
attacker who is concentrating on my messages in particular, then I want the 
strongest cipher I can find.

Another way of looking at this is that Many Ciphers trades higher partial 
failure probabilities for lower total/catastrophic failure probabilities.

Two things are definitely true, however:

1.  If you don't remove ciphers that are found to be bad, you will eventually 
pollute your ensemble to the point of uselessness.  If you want to go the many 
different ciphers approach, you *must* have an effective way to do this.

2.  There must be a large set of potentially good ciphers out there to choose 
from.  I contend that we're actually in a position to create reasonably good 
block ciphers fairly easily.  Look at the AES process.  Of the 15 round 1 
candidates, a full third made it to the final round - which means that no 
significant attacks against them were known.  Some of the rejected ones failed 
due to minor certificational weaknesses - weaknesses that should certainly 
lead you not to want to choose them as the One True Cipher, but which would 
in and of themselves not render breaking them trivial.  And, frankly, for most 
purposes, any of the five finalists would have been fine - much of the final 
choice was made for performance reasons.  (And, considering Dan Bernstein's 
work on timing attacks based on table lookups, the performance choices made bad 
assumptions about actual hardware!)

I see no reason not to double-encrypt, using different keys and any two 
algorithms from the ensemble.  Yes, meet-in-the-middle attacks mean this isn't 
nearly as strong as you might naively think, but it ups the resource demands on 
the attacker much more than on the defender.
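
A sketch of such a cascade (PyCryptodome assumed; the pairing of AES-CBC with 
ChaCha20 is an illustrative choice, not a recommendation):

    from Crypto.Cipher import AES, ChaCha20
    from Crypto.Random import get_random_bytes

    def cascade_encrypt(k1, k2, iv, nonce, pt):
        # Inner layer: AES-CBC under k1 (pt assumed block-aligned here).
        inner = AES.new(k1, AES.MODE_CBC, iv).encrypt(pt)
        # Outer layer: ChaCha20 under an independent key k2.
        return ChaCha20.new(key=k2, nonce=nonce).encrypt(inner)

    def cascade_decrypt(k1, k2, iv, nonce, ct):
        inner = ChaCha20.new(key=k2, nonce=nonce).decrypt(ct)
        return AES.new(k1, AES.MODE_CBC, iv).decrypt(inner)

    k1, k2 = get_random_bytes(16), get_random_bytes(32)
    iv, nonce = get_random_bytes(16), get_random_bytes(8)
    msg = get_random_bytes(32)
    assert cascade_decrypt(k1, k2, iv, nonce,
                           cascade_encrypt(k1, k2, iv, nonce, msg)) == msg

The attacker must now break both primitives (or pay the heavy memory cost of 
a meet-in-the-middle), while the defender pays only two passes over the data.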

Now, you can argue that AES - the only cipher really in the running for the One 
True Symmetric Cipher position at the moment - is so strong that worrying about 
attacks on it is pointless - the weaknesses are elsewhere.  I'm on the fence 
about that; it's hard to know.  But if you're going to argue for One True 
Cipher, you must be explicit about this (inherently unprovable) assumption; 
without it your argument fails.

The situation is much worse for the asymmetric case:  We only have a few 
alternatives available and there are many correlations among their potential 
weaknesses, so the Many Ciphers approach isn't currently practical and there's 
really nothing to debate at this point.

Finally, I'll point out that what we know publicly about NSA practices says 
that (a) they believe in multiple ciphers for different purposes; (b) they 
believe in the strength of AES, but only up to a certain point.  At this point, 
I'd be very leery of taking anything NSA says or reveals about its practices at 
face value, but there it is.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Availability of plaintext/ciphertext pairs (was Re: In the face of cooperative end-points, PFS doesn't help)

2013-09-10 Thread Jerry Leichter
On Sep 10, 2013, at 5:49 PM, Perry E. Metzger pe...@piermont.com wrote:
 Phil Rogaway has a paper somewhere discussing the right way to
 implement cryptographic modes and API's.
 
 It would be useful to get a URL for it.
 
 In particular, he recommends changing the definition of CBC...to
 
 E_0 = E(IV)  # Not transmitted
 E_{i+1} = E(E_i XOR P_{i+1})
 
 You make no mention there of whether the key used to encrypt the IV
 is the same as that used for the plaintext.
As I recall the proposal, it was the same key.  (Generating a different one for 
this purpose is pointless - it would have to be random, in which case you might 
as well generate the IV randomly.)

 I presume if you need a lot of IVs (see protocols like IPsec), and have 
 enough key material, a second key might be of value for that -- but I don't 
 know what all the ins and outs are, and would prefer to read the literature...
I searched around but couldn't find it; the proposal apparently was not 
Rogaway's.  It apparently appears in NIST 800-38A (2001), with no citation.  In 
searching around, I came across a recent, unpublished paper by Rogoway:  
http://www.cs.ucdavis.edu/~rogaway/papers/modes.pdf
That paper - which does detailed analyses of a large number of modes - 
indicates that more recent work has shown that this technique for choosing an 
IV is *not* secure (against a certain class of attacks) and recommends against 
using it.

I highly recommend that paper.  In fact, I highly recommend everything Rogaway 
has written.  We've been discussing authentication and session key exchange - 
he and Bellare wrote about the problem in 1993 
http://cseweb.ucsd.edu/users/mihir/papers/eakd.pdf giving provably secure 
algorithms for the 2-party case, and two years later 
http://www.cs.ucdavis.edu/~rogaway/papers/3pkd.pdf extending the work to the 
3-party case.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Availability of plaintext/ciphertext pairs (was Re: In the face of cooperative end-points, PFS doesn't help)

2013-09-10 Thread Jerry Leichter
On Sep 10, 2013, at 12:43 PM, Nemo n...@self-evident.org wrote:
 "GET / HTTP/1.1\r\n" is exactly 16 bytes, or one AES block. If the IV is
 sent in the clear -- which it is -- that is one plaintext-ciphertext
 pair right there for every HTTPS connection.
Phil Rogaway has a paper somewhere discussing the right way to implement 
cryptographic modes and API's.  In particular, he recommends changing the 
definition of CBC from:

E_0 = IV # Transmitted in the clear
E_{i+1} = E(E_i XOR P_{i+1})

to

E_0 = E(IV)  # Not transmitted
E_{i+1} = E(E_i XOR P_{i+1})

This eliminates known plaintext in the first block.  If you use this definition 
for chained CBC - where you use the final block of a segment as the IV for the 
next segment - it also eliminates the known attack (whose name escapes me - it 
was based on an attacker being able to insert a prefix to the next segment 
because he knows the IV it will use before it gets sent) that even caused 
problems for CBC modes in either SSH or TLS.

Beyond this, it changes the requirements on the IV as provided by the user from 
"random" to "never repeated for any given key" - an easier requirement that's 
much less likely to be accidentally violated.
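
A minimal sketch of that variant, with AES-ECB as the raw block primitive 
(PyCryptodome assumed).  Note the companion message in this thread: Rogaway's 
survey reports that this way of deriving the first mask is *not* secure 
against a certain class of attacks, so treat this as history, not advice.

    from Crypto.Cipher import AES

    def xor16(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def cbc_encrypt_eiv(key, iv, pt):         # len(pt) a multiple of 16
        ecb = AES.new(key, AES.MODE_ECB)
        prev = ecb.encrypt(iv)                # E_0 = E(IV), never sent
        out = b""
        for i in range(0, len(pt), 16):
            prev = ecb.encrypt(xor16(prev, pt[i:i+16]))
            out += prev
        return out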

The cost of this is one extra block encryption at each end of the link, per CBC 
segment - pretty trivial.  When I implemented a protocol that relied on CBC, I 
made this the exposed primitive.  When I later learned of the prefix attack, I 
was gratified to see that my code was already immune.

It actually amazes me that anyone uses the raw IV for the first XOR any more.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Points of compromise

2013-09-09 Thread Jerry Leichter
On Sep 8, 2013, at 1:53 PM, Phillip Hallam-Baker wrote:

 I was asked to provide a list of potential points of compromise by a 
 concerned party. I list the following so far as possible/likely:
It's not clear to me what kinds of compromises you're considering.  You've 
produced a list of a number of possibilities, but not even mentioned whole 
classes of them - e.g., back doors in ECC.

I've expanded, however, on one element of your list.
 2) Covert channel in Cryptographic accelerator hardware.
 
 It is possible that cryptographic accelerators have covert channels leaking 
 the private key through TLS (packet alignment, field ordering, timing, etc.) 
 or in key generation (kleptography of the RSA modulus a la Moti Yung). 
There are two sides to a compromise in accelerator hardware:  Grabbing the 
information, and exfiltrating it.  The examples you give - and much discussion, 
because its fun to consider such stuff - look at clever ways to exfiltrate 
stolen information along with the data it refers to.

However, to a patient attacker with large resources, a different approach is 
easier:  Have the planted hardware gather up keys and exfiltrate them when it 
can.  The attacker builds up a large database of possible keys - many millions, 
even billions, of keys - but still even an exhaustive search against that 
database is many orders of magnitude easier than an exhaustive search on an 
entire keyspace, and quite plausible - consider Venona.  In addition, the 
database can be searched intelligently based on spatial/temporal/organizational 
closeness to the message being attacked.

An attack of this sort means you need local memory in the device - pretty cheap 
these days, though of course it depends on the device - and some way of 
exfiltrating that data later.  There are many ways one might do that, from the 
high tech (when asked to encrypt a message with a particular key, or bound to a 
particular target, instead encrypt - with some other key - and send - to some 
other target - the data to be exfiltrated) to low (pay someone with physical 
access to plug a USB stick into the device periodically).

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Usage models (was Re: In the face of cooperative end-points, PFS doesn't help)

2013-09-09 Thread Jerry Leichter
On Sep 8, 2013, at 11:41 PM, james hughes wrote:
 In summary, it would appear that the most viable solution is to make
 I don't see how it's possible to make any real progress within the existing 
 cloud model, so I'm with you 100% here.  (I've said the same earlier.)
 Could cloud computing be a red herring? Banks and phone companies all give up 
 personal information to governments (Verizon?) and have been doing this long 
 before and long after cloud computing was a fad
It's a matter of context.  For data I'm deliberately sharing with some company 
- sure, cloud is fine.  As I mentioned elsewhere, if the NSA wants to spend 
huge resources to break into my purchasing transactions with Amazon, I may 
care as a citizen that they are wasting their money - but as a personal matter, 
it's not all that much of a big deal, as that information is already being 
gathered, aggregated, bought, and sold on a mass basis.  If they want to know 
about my buying habits and financial transactions, Acxiom can sell them all 
they need for a couple of bucks.

On the other hand, I don't want them recording my chats or email or phone 
conversations.  It's *that* stuff that is out in the cloud these days, and as 
long as it remains out there in a form that someone other than I and those I'm 
communicating with can decrypt, it's subject to attacks - attacks so pervasive 
that I don't see how you could ever build a system (technical or legal) to 
protect against them.  The only way to win is not to play.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Impossible trapdoor systems (was Re: Opening Discussion: Speculation on BULLRUN)

2013-09-09 Thread Jerry Leichter
On Sep 8, 2013, at 8:37 PM, James A. Donald wrote:
 Your magic key must then take any block of N bits and magically
 produce the corresponding plaintext when any given ciphertext
 might correspond to many, many different plaintexts depending
 on the key
 Suppose that the mappings from 2^N plaintexts to 2^N ciphertexts are not 
 random, but rather orderly, so that given one element of the map, one can 
 predict all the other elements of the map.
 
 Suppose, for example the effect of encryption was to map a 128 bit block to a 
 group, map the key to the group, add the key to the block, and map back
Before our current level of understanding of block ciphers, people actually 
raised - and investigated - the question of whether the DES operations formed a 
group.  (You can do this computationally with reasonable resources.  The answer 
is that they don't.)  I don't think anyone has repeated the particular experiment 
with the current crop of block ciphers; but then I expect the details of their 
construction, and the attacks they are already explicitly built to avoid, would 
rule out the possibility.  But I don't know.
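
For scale, the computation involved is well within reach; here is a sketch in 
the spirit of the cycling tests run against DES, with AES standing in 
(PyCryptodome assumed) and a hash standing in for a random choice of key at 
each step:

    import hashlib
    from Crypto.Cipher import AES

    def step(x):
        # x -> E_{h(x)}(x): compose encryptions under derived keys.
        k = hashlib.sha256(x).digest()[:16]
        return AES.new(k, AES.MODE_ECB).encrypt(x)

    def cycle_length(x0, budget=1_000_000):
        seen, x = {}, x0
        for i in range(budget):
            if x in seen:
                return i - seen[x]     # a cycle closed
            seen[x] = i
            x = step(x)
        return None                    # no cycle within the budget

If the cipher's permutations generated a small group under composition, such 
walks would close into cycles far sooner than the birthday estimate for a 
random permutation on 2^128 points.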

Stepping back, what you are considering is the possibility that there's a 
structure in the block cipher such that if you have some internal information, 
and you have some collection of plaintext/ciphertext pairs with respect to a 
given key, you can predict other (perhaps all) such pairs.  This is just 
another way of saying there's a ciphertext-only/known-plaintext/chosen-plaintext/ 
chosen-ciphertext attack, depending on your assumptions about how that 
collection of pairs must be created.  That it's conveniently expressible as 
some kind of mathematical structure on the mappings generated by the cipher for 
a given key is neither here nor there.

Such a thing would contradict everything we think we know about block ciphers. 
Sure, it *could* happen - but I'd put it way, way down the list of possibles.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Market demands for security (was Re: Opening Discussion: Speculation on BULLRUN)

2013-09-09 Thread Jerry Leichter
On Sep 8, 2013, at 6:49 PM, Phillip Hallam-Baker wrote:
 ...The moral is that we have to find other market reasons to use security. 
 For example simplifying administration of endpoints. I do not argue like some 
 do that there is no market for security so we should give up, I argue that 
 there is little market for something that only provides security and so to 
 sell security we have to attach it to something they want
Quote from the chairman of a Fortune 50 company to a company I used to work 
for, made in the context of a talk to the top people at that company*:  "I 
don't want to buy security products.  I want to buy secure products."

This really captures the situation in a nutshell.  And it's a conundrum for all 
the techies with cool security technologies they want to sell.  Security isn't 
a product; it's a feature.  If there is a place in the world for companies 
selling security solutions, it's as suppliers to those producing something that 
fills some other need - not as suppliers to end users.

-- Jerry

*It's obvious from public facts about me that the company receiving this word 
of wisdom was EMC; but I'll leave the other company anonymous.


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Symmetric cipher + Backdoor = Public Key System

2013-09-08 Thread Jerry Leichter
On Sep 7, 2013, at 7:56 PM, Perry E. Metzger wrote:
 I'm not as yet seeing that a block cipher with a backdoor is a public 
 key system,
 
 Then read the Blaze  Feigenbaum paper I posted a link to. It makes a
 very good case for that, one that Jerry unaccountably does not seem to
 believe. Blaze seemed to still believe the result as of a few days ago.
I've given quite a bit of argument as to why the result doesn't really say what 
it seems to say.  Feel free to respond to the actual counterexamples I gave, 
rather than simply say I unaccountably don't believe the paper.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Jerry Leichter
On Sep 7, 2013, at 11:06 PM, Christian Huitema wrote:

 Pairwise shared secrets are just about the only thing that scales worse than 
 public key distribution by way of PGP key fingerprints on business cards.   
 The equivalent of CAs in an all-symmetric world is KDCs  If we want 
 secure crypto that can be used by everyone, with minimal trust, public key 
 is the only way to do it.  
 
 I am certainly not going to advocate Internet-scale KDC. But what if the 
 application does not need to scale more than a network of friends?
Indeed, that was exactly what I had in mind when I suggested, in another 
thread, that we might want to do without public key cryptography.

Not every problem needs to be solved on Internet scale.  In designing and 
building cryptographic systems simplicity of design, limitation to purpose, and 
humility are usually more important than universality.  Most of the email 
conversations I have are with people I've corresponded with in the past, or 
somehow related to people I've corresponded with in the past.  In the first 
case, I already have their keys - the only really meaningful notion of "the 
right key" is key continuity (combined with implied verification if we also 
have other channels of communication - if someone manages to slip me a bogus 
key for someone who I talk to every day, I'm going to figure that out very 
quickly.)  In the second case - e.g., an email address from a From field in a 
message on this list - the best I can possibly hope for initially is that I can 
be certain I'm corresponding with whoever sent that message to the list.  
There's no way I can bind that to a particular person in the real world 
without something more.

Universal schemes, when (not if - there's not a single widely fielded system 
that hasn't been found to have serious bugs over its operational lifetime, and I 
don't expect to see one in *my* lifetime) they fail, lead to universal attacks. 
 I need some kind of universal scheme for setting up secure connections to buy 
something from a vendor I never used before, but frankly the NSA doesn't need 
to break into anything to get that information - the vendor, my bank, my CC 
company, credit agencies are all collecting and selling it anyway.

The other thing to keep in mind - and I've come back to this point repeatedly - 
is that the world we are now designing for is very different from the world of 
the mid- to late-1990's when the current schemes were designed.  Disk is so 
large and so cheap that any constraint in the old designs that was based on a 
statement like "doing this would require the user to keep n^2 key pairs, which 
is too much" just doesn't make any sense any more - certainly not for 
individuals, not even for small organizations:  If n is determined by the 
number of correspondents you have, then squaring it still gives you a small 
number relative to current disk sizes.  Beyond that, everyone today (or in the 
near future) can be assumed to carry with them computing power that rivals or 
exceeds the fastest machines available back in the day - and to have an 
always-on network connection whose speed rivals that of *backbone* links back 
then.

Yes, there are real issues about how much you can trust that computer you carry 
around with you - but after the recent revelations, is the situation all that 
different for the servers you talk to, the routers in the network between you, 
the crypto accelerators many of the services use - hell, every piece of 
hardware and software.  For most people, that will always be the situation:  
They will not be in a position to check their hardware, much less build their 
own stuff from the ground up.  In this situation, about all you can do is try 
to present attackers with as many *different* targets as possible, so that they 
need to split their efforts.  It's guerrilla warfare instead of a massed army.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-08 Thread Jerry Leichter
On Sep 7, 2013, at 11:45 PM, John Kelsey wrote:

 Let's suppose I design a block cipher such that, with a randomly generated 
 key and 10,000 known plaintexts, I can recover that key At this point, 
 what I have is a trapdoor one-way function.  You generate a random key K and 
 then compute E(K,i) for i = 1 to 10,000.  The output of the one-way function 
 is the ciphertext.  The input is K.  If nobody can break the cipher, then 
 this is a one-way function.  If only I, who designed it, can break it, then 
 it's a trapdoor one-way function At this point, I have a perfectly fine 
 public key encryption system.  To send me a message, choose a random K, use 
 it to encrypt 1 through 10,000, and then send me the actual message encrypted 
 after that in K.  If nobody but me can break the system, then this cipher 
 works as my public key.
OK, let's look at this another way.  The broader argument being made here 
breaks down into three propositions:

1.  If you have a way to spike a block cipher based on embedding a secret in 
it, you have a way to create something with the formal properties of a public 
key cryptosystem - i.e., there is a function E(P) which anyone can compute on 
any plaintext P, but given E(P), only you can invert to recover P.

2.  Something with the formal properties of a public key cryptosystem can be 
used as a *practical* public key cryptosystem.

3.  A practical public-key cryptosystem is much more valuable than a way to 
embed a secret in a block cipher, so if anyone came up with the latter, they 
would certainly use it to create the former, as it's been the holy grail of 
cryptography for many years to come up with a public key system that didn't 
depend on complex mathematics with uncertain properties.

If we assume these three propositions, and look around us and observe the lack 
of the appropriate kinds of public key systems, we can certainly conclude that 
no one knows how to embed a secret in a block cipher.

Proposition 1, which is all you specifically address, is certainly true.  I 
claim that Propositions 2 and 3 are clearly false.

In fact, Proposition 3 isn't even vaguely mathematical - it's some kind of 
statement about the values that cryptographers assign to different kinds of 
primitives and to publication.  It's quite true that if anyone in the academic 
world were to come up with a way to create a practical public key cryptosystem 
without a dependence on factoring or DLP, they would publish to much acclaim.  
(Of course, there *are* a couple of such systems known - they were published 
years ago - but no one uses them for various reasons.  So acclaim ... well, 
maybe.)  Then again, an academic cryptographer who discovered a way to hide a 
secret in a block cipher would certainly publish - it would be really 
significant work.  So we never needed this whole chain of propositions to begin 
with:  It's self-evidently true that no one in the public community knows how 
to embed a secret in a block cipher.

But ... since we're talking *values*, what are NSA's values?  Would *they* have 
any reason to publish if they found a way to embed a secret in a block cipher? 
Hell, no!  Why would they want to give away such valuable knowledge?  Would 
they produce a public-key system based on their breakthrough?  Maybe, for 
internal use.  How would we ever know?

But let's talk mathematics, not psychology and politics.  You've given a 
description of a kind of back door that *would* produce a practical public key 
system.  But I've elsewhere pointed out that there are all kinds of back doors. 
 Suppose that my back door reduces the effective key size of AES to 40 bits.  
Even 20+ years ago, NSA was willing to export 40-bit crypto; presumably they 
were willing to do the brute-force computation to break it.  Today, it would be 
a piece of cake.  But would a public-key system that requires around 2^40 
operations to encrypt be *practical*?  Even today, I doubt it.  And if you're 
willing to do 2^40 operations, are you willing to do 2^56?  With specialized 
hardware, that, too, has been easy for years.  NSA can certainly have that 
specialized hardware for code breaking - will you buy it for encryption?

 The assumption that matters here is that you know enough cryptanalysis that 
 it would be hard to hide a practical attack from you.  If you don't know 
 about differential cryptanalysis, I can do the master key cryptosystem, but 
 only until you learn about it, at which point you will break my cipher.
In fact, this is an example I was going to give:  In a world in which 
differential crypto isn't known, it *is* a secret that's a back door.  Before 
DC was published, people seriously proposed strengthening DES by using a 
768-bit key (16 independent 48-bit round keys) - just toss the round key computation 
mechanism and provide all the keying for all the rounds.  If that had been 
widely used, NSA would have been able to break it using DC.

Of course we know about DC.  But the only 

Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Jerry Leichter
On Sep 8, 2013, at 10:45 AM, Ray Dillinger wrote:
 Pairwise shared secrets are just about the only thing that scales
 worse than public key distribution by way of PGP key fingerprints on
 business cards.  
 If we want secure crypto that can be used by everyone, with minimal
 trust, public key is the only way to do it.
 
 One pretty sensible thing to do is to remember keys established in
 previous sessions, and use those combined with the next session.
 
 You've answered your own conundrum!
 
 Of course the idea of remembering keys established in previous
 sessions and using them combined with keys negotiated in the next
 session is a scalable way of establishing and updating pairwise
 shared secrets
It's even better than you make out.  If Eve does manage to get hold of 
Alice's current keys, and uses them to communicate with Bob, *after the 
communication, Bob will have updated his keys - but Alice will not have*.  The 
next time they communicate, they'll know they've been compromised.  That is, 
this is tamper-evident cryptography.
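
A minimal sketch of such a ratchet, using nothing but hashing (the derivation 
labels are arbitrary illustrative choices):

    import hashlib, os

    def next_session(chain, fresh):
        # Derive this session's key, and the next stored chain value, from
        # the current chain value plus fresh randomness from the handshake.
        session_key = hashlib.sha256(b"key" + chain + fresh).digest()
        new_chain = hashlib.sha256(b"chain" + chain + fresh).digest()
        return session_key, new_chain

    alice = bob = os.urandom(32)   # state shared in an earlier session
    fresh = os.urandom(16)         # exchanged during this handshake
    k_a, alice = next_session(alice, fresh)
    k_b, bob = next_session(bob, fresh)
    assert k_a == k_b              # both stores advance in lockstep

If Eve talks to Bob using a stolen chain value, Bob's store advances while 
Alice's does not; the fork is visible the next time Alice and Bob talk.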

There was a proposal out there based on something very much like this to create 
tamper-evident signatures.  I forget the details - it was a couple of years ago 
- but the idea was that every time you sign something, you modify your key in 
some random way, resulting in signatures that are still verifiably yours, but 
also contain the new random modification.  Beyond that, I don't recall how it 
worked - it was quite clever... ah, here it is:  
http://eprint.iacr.org/2005/147.pdf
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Der Spiegel: NSA Can Spy on Smart Phone Data

2013-09-08 Thread Jerry Leichter
On Sep 8, 2013, at 6:09 PM, Perry E. Metzger wrote:
 Not very surprising given everything else, but I thought I would
 forward the link. It more or less contends that the NSA has exploits
 for all major smartphones, which should not be surprising

 http://www.spiegel.de/international/world/privacy-scandal-nsa-can-spy-on-smart-phone-data-a-920971.html
A remarkably poor article.  Just what does "gain access to" mean?  There are 
boxes sold to law enforcement (but never, of course, to the bad guys) that 
claim they can get access to any phone out there.  If it's unlocked, everything 
is there for the taking; if it's locked, *some* of it is hard to get to, but 
most isn't.  Same goes for Android.

The article mentions that if they can get access to a machine the iPhone syncs 
with, they can get into the iPhone.  Well golly gee.  There was an attack 
reported just in the last couple of weeks in which someone built an attack into 
a fake charger!  Grab a charge at a public charger, get infected for your 
trouble.  Apple's fixed that in the next release by prompting the user for 
permission whenever an unfamiliar device asks for connection.  But if you're in 
the machine the user normally connects to, that won't help.  Nothing, really, 
will help.

Really, for the common phones out there, the NSA could easily learn how to do 
this stuff with a quick Google search - and maybe paying a couple of thousand 
bucks to some of the companies that do it for a living.

The article then goes on to say the NSA can get SMS texts.  No kidding - so can 
the local cops.  It's all unencrypted, and the Telco's are only too happy to 
cooperate with govmint' agencies.

The only real news in the whole business is that they claim to have gotten into 
Blackberry's mail system.  It's implied that they bought an employee with the 
access needed to weaken things for them.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] In the face of cooperative end-points, PFS doesn't help

2013-09-08 Thread Jerry Leichter
On Sep 8, 2013, at 7:16 PM, james hughes wrote:
 Let me suggest the following. 
 
 With RSA, a single quiet donation by the site and it's done. The situation 
 becomes totally passive and there is no possibility of knowing what has been
 read.  The system administrator could even do this without the executives 
 knowing.
An additional helper:  Re-keying.  Suppose you send out a new public key, 
signed with your old one, once a week.  Keep the chain of replacements posted 
publicly so that someone who hasn't connected to you in a while can confirm the 
entire sequence from the last public key he knew to the current one.  If 
someone sends you a message with an invalid key (whether it was ever actually 
valid or not - it makes no difference), you just send them an update.
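
A sketch of such a chain (Ed25519 via the 'cryptography' package - an assumed 
choice of primitive and library):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat)

    def raw(pub):
        return pub.public_bytes(Encoding.Raw, PublicFormat.Raw)

    chain, priv = [], Ed25519PrivateKey.generate()
    chain.append((raw(priv.public_key()), None))    # root key, posted once
    for week in range(3):                           # weekly re-keying
        new = Ed25519PrivateKey.generate()
        sig = priv.sign(raw(new.public_key()))      # old key endorses new
        chain.append((raw(new.public_key()), sig))
        priv = new

    # A client that last saw any older key verifies each link forward in
    # turn; verify() raises on a bad link.
    for (old_pub, _), (new_pub, sig) in zip(chain, chain[1:]):
        Ed25519PublicKey.from_public_bytes(old_pub).verify(sig, new_pub)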

An attacker *could* send out a fake update with your signature, but that would 
be detected almost immediately.  So a one-time donation is now good for a 
week.  Sure, the leaker can keep leaking - but the cost is now considerably
greater, and ongoing.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Der Spiegel: NSA Can Spy on Smart Phone Data

2013-09-08 Thread Jerry Leichter
Apparently this was just a teaser article.  The following is apparently the 
full story:  http://cryptome.org/2013/09/nsa-smartphones.pdf  I can't tell for 
sure - it's the German original, and my German is non-existent.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread Jerry Leichter
On Sep 8, 2013, at 9:15 PM, Perry E. Metzger wrote:
 I don't see the big worry about how hard it is to generate random 
 numbers unless:
 
 Lenstra, Heninger and others have both shown mass breaks of keys based
 on random number generator flaws in the field. Random number
 generators have been the source of a huge number of breaks over time.
 
 Perhaps you don't see the big worry, but real world experience says
 it is something everyone else should worry about anyway.
Which brings to light the question:  Just *why* have so many random 
number generators proved to be so weak.  If we knew the past trouble spots, we 
could try to avoid them, or at least pay special care to them during reviews, 
in the future.

I'm going entirely off of memory here, and a better, more data-driven approach 
might be worth doing, but I can think of three broad classes of root causes of 
past breaks:

1.  The designers just plain didn't understand the problem and used some 
obvious - and, in retrospect, obviously wrong - technique.  (For example, they 
didn't understand the concept of entropy and simply fed a low-entropy source 
into a whitener of some kind - often MD5 or SHA-1.  The result can *look* 
impressively random, but is cryptographically worthless.)

2.  The entropy available from the sources used was much less, at least in some 
circumstances (e.g., at startup) than the designers assumed.

3.  The code used in good random sources can look strange to programmers not 
familiar with it, and may even look buggy.  Sometimes good generators get 
ruined by later programmers who "clean up" the code.

-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] XORing plaintext with ciphertext

2013-09-07 Thread Jerry Leichter
On Sep 7, 2013, at 4:13 AM, Jon Callas wrote:
 Take the plaintext and the ciphertext, and XOR them together.  Does the 
 result reveal anything about the key or the plaintext?
 
 It better not. That would be a break of amazing simplicity that transcends 
 broken. 
The question is much more subtle than that, getting deep into how to define 
the security of a cipher.

Consider a very simplified and limited, but standard, way you'd want to state a 
security result:  A Turing machine with an oracle for computing the encryption 
of any input with any key, when given as input the cyphertext and allowed to 
run for time T polynomial in the size of the key, has no more than an 
probability P less than (something depending on the key size) of guessing any 
given bit of the plaintext.  (OK, I fudged on how you want to state the 
probability - writing this stuff in English rather than mathematical symbols 
rapidly becomes unworkable.)  The fundamental piece of that statement is the 
"given as input..." part:  If the input contains the key itself, then obviously 
the machine has no problem at all producing the plaintext!  Similarly, of 
course, if the input contains the plaintext, the machine has an even easier 
time of it.

You can, and people long ago did, strengthen the requirements.  They allow for 
probabilistic machines as an obvious first step.  Beyond that, you want 
semantic security:  Not only should the attacking machine be unable to get 
an advantage on any particular bit of plaintext; it shouldn't be able to get an 
advantage on, say, the XOR of the first two bits.  Ultimately, you want to say 
that given any boolean function F, the machine's a posteriori probability of 
guessing F(cleartext) should be identical (within some bounds) to its a priori 
probability of guessing F(cleartext).  Since it's hard to get a handle on the 
prior probability, another way to say pretty much the same thing is that the 
probability of a correct guess for F(cleartext) is the same whether the machine 
is given the ciphertext, or a random sequence of bits.  If you push this a bit 
further, you get definitions related to indistinguishability:  The machine is 
simply expected to say "the input is the result of applying the cipher to some 
plaintext" or "the input is random"; it shouldn't even 
be able to get an advantage on *that* simple question.

This sounds like a very strong security property (and it is) - but it says 
*nothing at all* about the OP's question!  It can't, because the machine *can't 
compute the XOR of the plaintext and the ciphertext*.  If we *give* it that 
information ... we've just given it the plaintext!

I can't, in fact, think of any way to model the OP's question.  The closest I 
can come is:  If E(K,P) defines a strong cipher (with respect to any of the 
variety of definitions out there), does E'(K,P) = E(K,P) XOR P *also* define a 
strong cipher?  One would think the answer is yes, just on general principles: 
To someone who doesn't know K and P, E(K,P) is indistinguishable from random 
noise, so E'(K,P) should be the same.  And yet there remains the problem that 
it's not a value that can be computed without knowing P, so it doesn't fit into 
the usual definitional/proof frameworks.  Can anyone point to a proof?
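
For concreteness, the object in question, with AES standing in for E 
(PyCryptodome assumed).  It is worth noting that E(K,P) XOR P is exactly the 
shape of the Davies-Meyer construction used inside hash functions - valued 
there precisely because it is hard to invert even when the key is known, which 
hints at why it slips outside the usual cipher definitions:

    from Crypto.Cipher import AES
    from Crypto.Random import get_random_bytes

    def e_prime(key, block):
        # E'(K,P) = E(K,P) XOR P, for one 16-byte block.
        c = AES.new(key, AES.MODE_ECB).encrypt(block)
        return bytes(a ^ b for a, b in zip(c, block))

    k, p = get_random_bytes(16), get_random_bytes(16)
    out = e_prime(k, p)   # looks random; recovering p from out, even
                          # given k, has no obvious shortcut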

The reason I'm not willing to write this off as "obvious" is an actual failure 
in a very different circumstance.  There was work done at DEC SRC many years 
ago on a system that used a fingerprint function to uniquely identify modules.  
The fingerprints were long enough to avoid the birthday paradox, and were 
computed based on the result of a long series of coin tosses whose results were 
baked into the code.  There was a proof that the fingerprint looked random.  
And yet, fairly soon after the system went into production, collisions started 
to appear.  They were eventually tracked down to a "merge fingerprints" 
operation, which took the fingerprints of two modules and produced a 
fingerprint of the pair by some simple technique like concatenating the inputs 
and fingerprinting that.  Unfortunately, that operation *violated the 
assumptions of the theorem*.  The theorem said that the outputs of the 
fingerprint operation would look random *if chosen without knowledge of the 
coin tosses*.  But the inputs were outputs of the same algorithm, hence had 
knowledge of the coin tosses.  (And ... I just found the reference to this.  
See ftp://ftp.dec.com/pub/dec/SRC/research-reports/SRC-113.pdf, documentation 
of the Fingerprint interface, page 42.)

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-07 Thread Jerry Leichter
On Sep 7, 2013, at 12:31 AM, Jon Callas wrote:
 I'm sorry, but this is just nonsense.  You're starting with informal, rough 
 definitions and claiming a mathematical theorem.
 
 Actually, I'm doing the opposite. I'm starting with a theorem and arguing 
 informally from there
Actually, if you look at the papers cited, *they* are themselves informal.  The 
fundamental thing they are lacking is a definition of what would constitute a 
master key.  Let's see if we can formalize this a bit:

We're given a block cipher E(K,P) and a corresponding decryption algorithm 
D(K,C).  The system has a master key M such that D(E(K,P),M) == P.  This is 
what a master key does in a traditional lock-and-key system, so unless we see 
some other definition, it's what we have to start with.  Is there such a 
system?  Sure, trivially.  Given any block cipher E'/D', I simply define E(K,P) 
= E'(M,K) || E'(K,P).  (I can optimize the extra length by leaking one randomly 
chosen bit of E'(M,K) per block.  It won't take long for the whole key to be 
transmitted.)  OK, where's the public key system?

So maybe there isn't *one* master key, but let's go to the extreme and say 
there is one unique master per user key, based on some secret information S.  
That is:  Given K, there is a function F(S,K) which produces a *different* key 
K', with the property that D(K,C) == D(K',C).  Or maybe, as in public key 
systems, you start with S and some random bits and produce a matched pair K and 
K'.  But how is this a master key system?  If I wasn't present at the birth 
of the K that produced the cyphertext I have in hand ... to get K' now, I need 
K (first form) or S and the random bits (second form), which also gives me K 
directly.  So what have I gained?

I can construct a system of the first form trivially:  Just use an n-bit key 
but ignore the first bit completely.  There are now two keys, one with a 
leading 0, one with a leading 1.  Constructing a system of the second form 
shouldn't be hard, though I haven't done it.  In either case, it's 
uninteresting - my master key is as hard to get at as the original key.

I'm not sure exactly where to go next.  Let's try to modify some constraints.  
Eliminate directly hiding the key in the output by requiring that E(K,.) be a 
bijection.  There can't possibly be a single master key M, since if there were, 
what could E(K,0...0) be?  We'd need D(M,E(K,0...0)) == 0...0 for every K, and 
D(M,.) is a bijection, so E(K,0...0) must be the same for every K - E(K,0...0) 
must be constant - and in fact E must be constant.  Not very 
interesting.  In fact, a counting argument shows that there must be as many M's 
as there are K's.  It looks as if we're back to the two-fold mapping on keys 
situation.  But as before ... how could this work?

In fact, it *could* work.  Suppose I use a modified form of E() which ignores 
all but the first 40 bits of K - but I don't know that E is doing this.  I can 
use any (say, 128-bit) key I like, and to someone not in on the secret, a brute 
force attack is impossible.  But someone who knows the secret simply sets all 
but the first 40 bits to 0 and has an easy attack.
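
A sketch of the flavor of the thing - this is *not* IBM's actual CDMF
algorithm, just the key-shortening idea in its crudest form (Python,
pycryptodome assumed):

from Crypto.Cipher import AES

def weaken(key128: bytes) -> bytes:
    # keep the first 5 bytes (40 bits), silently zero the remaining 11
    return key128[:5] + bytes(11)

def E(key128: bytes, block: bytes) -> bytes:
    return AES.new(weaken(key128), AES.MODE_ECB).encrypt(block)

# Two keys agreeing in their first 40 bits encrypt identically, so anyone
# in on the secret searches 2^40 keys, not 2^128:
k1 = bytes(5) + b"A" * 11
k2 = bytes(5) + b"B" * 11
assert E(k1, bytes(16)) == E(k2, bytes(16))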

*Modified forms (which hid what was happening to some degree) of such things 
were actually done in the days of export controls!*  IBM patented and sold such 
a thing under the name CDMF 
(http://domino.research.ibm.com/tchjr/journalindex.nsf/600cc5649e2871db852568150060213c/a453914c765e690085256bfa0067f9f4!OpenDocument).
  I worked on adding cryptography to a product back in those days, and we had 
to come up with a way to be able to export our stuff.  I talked to IBM about 
licensing CDMF, but they wanted an absurd amount of money.  (What you were 
actually paying for wasn't the algorithm so much as that NSA had already 
approved products using it for export.)  We didn't want to pay, and I designed 
my own algorithm to do the same thing.  It was a silly problem to have to 
solve, but I was rather proud of the solution - I could probably find my spec 
if anyone cares.  It was also never implemented, first because this was right 
around the time the crypto export controls got loosened; and second because we 
ended up deciding we didn't need crypto anyway.  We came
back and did it very differently much later.  My two fun memories from the 
experience:  (a) Receiving a FAX from NSA - I still have it somewhere; (b) 
being told at one point that we might need someone with crypto clearance to 
work on this stuff with NSA, and one of my co-workers chiming in with "Well, I 
used to have it.  Unfortunately it was from the KGB."

Anyway ... yes, I can implement such a thing - but there's still no public key 
system here.

So ... would *you* like to take a stab at pinning down a definition relative to 
which the theorem you rely on makes sense?

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] New task for the NSA

2013-09-07 Thread Jerry Leichter
The NY Times has done a couple of reports over the last couple of months about 
the incomprehensibility of hospital bills, even to those within the industry - 
and the refusal of hospitals to discuss their charge rates, claiming that what 
they will bill you for a treatment is proprietary.

Clearly, it's time to sic the NSA on the medical care system's billing offices. 
 Let them do something really useful for a change!

-- Jerry :-)

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Can you backdoor a symmetric cipher (was Re: Opening Discussion: Speculation on BULLRUN)

2013-09-06 Thread Jerry Leichter
 
 It is probably very difficult, possibly impossible in practice, to
 backdoor a symmetric cipher. For evidence, I direct you to this old
 paper by Blaze, Feigenbaum and Leighton:
 
 http://www.crypto.com/papers/mkcs.pdf
 
 
 There is also a theorem somewhere (I am forgetting where) that says that if 
 you have a block cipher with a back door, then it is also a public key 
 cipher  The real question there is whether someone who had such a thing 
 would want to be remembered by history as the inventor of the most 
 significant PK system the world has ever seen, or a backdoored cipher.
Well, if that someone were the NSA, they might well be very happy to be 
remembered as the guy who made it possible to break in to much of the world's 
communication!

However, I find this argument unconvincing.  It has the feel of a standard 
reduction, but isn't, because the reduction isn't to a known problem that's 
widely thought to be difficult - it's to a vaguely defined, broad class of 
problems, "designing a much faster public key system."  Just because the 
construction gives you something like a public key system doesn't mean it 
gives you anything practical for use that way.  For example, suppose the master key works 
- but only for 1/2^k possible cleartext blocks, or even 1/2^k possible keys.  
This makes it useless as a public-key system even for very small k, but quite 
useful to an attacker even for larger k.

Or suppose the master key works on all messages and keys, but is inefficient, 
requiring large amounts of time or memory, or the pre-computation of large 
lookup tables.  Here's a thought experiment:  Suppose differential 
cryptanalysis were known only to you, and you were in a position to influence 
the choice of S-boxes for a cryptosystem.  Your master key would be the 
knowledge that DC would work very effectively against the system - but it would 
still require a lot of time and memory to do so.  You'd even have a leg up on 
everyone else because you'd know the differentials to use up front - though 
that's a trivial advantage, since once you know to look for them, finding them 
is easy enough.  In fact, as far as I know, no one has had any reason to pose 
questions like:  Could you choose S-boxes that allow some kind of 
pre-computation step to make application of DC to messages much faster?  DC 
requires gathering up a large number of plaintext/cyphertext pairs; eventually, 
you find examples you can use.  But could there be some way of producing pairs 
so that success is more likely to come earlier?  Could it be that there's some 
pre-secret that allows computation both of those tables and of the S-boxes, 
such that given only the S-boxes, finding the pre-secret and the tables isn't 
practical?  Once DC was known and it was found that the NSA had actually 
strengthened DES's S-boxes to protect against it, no one really looked into 
such issues - basically, who cared?

If you really want to think in tin-hat terms, consider linear cryptanalysis.  
While DES was strong against DC, it was quite weak against LC.  People 
interpreted this as meaning that NSA didn't know about LC.  But ... maybe out 
in Area 51, or wherever Langley does work with alien technologies, they knew 
about LC *all along*, and *deliberately* made DES weak against it!  (Why LC and 
not DC?  Because as we subsequently learned from Don Coppersmith, DC was 
already known in the non-NSA community, and the NSA knew that it was known.)

Please keep in mind that I'm not proposing that NSA actually did anything like 
this for any widely-used cryptosystem.  I don't even see how they could have 
with respect to, say, AES, since they had no hand in the design of the 
algorithm or the choice of constants.  The only thing I'm arguing here is that 
the theorems that say such a thing could never happen prove nothing of the 
sort.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-06 Thread Jerry Leichter
 Perhaps it's time to move away from public-key entirely!  We have a classic 
 paper - Needham and Schroeder, maybe? - showing that private key can do 
 anything public key can; it's just more complicated and less efficient.
 
 Not really. The Needham-Schroeder you're thinking of is the essence of 
 Kerberos, and while Kerberos is a very nice thing, it's hardly a replacement 
 for public key.
 
 If you use a Needham-Schroeder/Kerberos style system with symmetric key 
 systems, you end up with all of the trust problems, but on steroids
I don't think we're really in disagreement here.  Much of what you say later in 
the message is that the way we are using symmetric-key systems (CA's and such), 
and the way browsers work, are fundamentally wrong, and need to be changed.  
And that's really the point:  The system we have is all of a piece, and 
incremental changes, sadly, can only go so far.  We need to re-think things 
from the ground up.  And I'll stand by my contention that we need to re-examine 
things we think we know, based on analyses done 30 years ago.  Good theorems 
are forever, but design choices apply those theorems to real-world 
circumstances.  So much has changed, both on the technical front and on 
non-technical fronts, that the basis for those design choices has fundamentally 
changed.

Getting major changes fielded in the Internet is extremely difficult - see 
IPv6.  If it can be done at all, it will take years.  But the alternative of 
continuing on the path we're on seems less desirable every day.

-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-06 Thread Jerry Leichter
On Sep 6, 2013, at 7:28 AM, Jerry Leichter wrote:
 ...Much of what you say later in the message is that the way we are using 
 symmetric-key systems (CA's and such)...
Argh!  And this is why I dislike using "symmetric" and "asymmetric" to describe 
cryptosystems:  In English, the distinction is way too brittle.  Just a 
one-letter difference - and it's a matter of including or not the letter 
physically right next to the "s" on the keyboard.
-- Jerry :-)

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Aside on random numbers (was Re: Opening Discussion: Speculation on BULLRUN)

2013-09-06 Thread Jerry Leichter
On Sep 6, 2013, at 10:03 AM, Perry E. Metzger wrote:
 
 Naively, one could take a picture of the dice and OCR it. However,
 one doesn't actually need to OCR the dice -- simply hashing the
 pixels from the image will have at least as much entropy if the
 position of the dice is recognizable from the image
 
 One could write an  app to do this, but of course the phone is
 not exactly a secure platform to begin with...
Ah, but that highlights an essential difference between OCR'ing the image and 
just hashing it:  I can easily check, with my own eyes, that the OCR app is 
really doing what it claims to be doing.  I have no hope of checking the 
hash-based app.  A whole class of attacks is closed off by the OCR technique.
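
One reason the OCR path is auditable: once the die faces are plain digits,
turning them into unbiased bits is simple, inspectable arithmetic.  A sketch
in Python (rejection sampling: discard 5's and 6's, map 1-4 to two bits each):

def dice_to_bits(rolls):
    bits = []
    for r in rolls:
        if 1 <= r <= 4:                  # keep only the uniform outcomes
            v = r - 1
            bits += [(v >> 1) & 1, v & 1]
    return bits

print(dice_to_bits([3, 6, 1, 4, 2, 5]))  # -> [1, 0, 0, 0, 1, 1, 0, 1]

Every step is visible to the user; there's no opaque hash between the dice and
the output bits.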

It's not that there aren't other attacks.  The phone could, for example, leak 
the generated values, sending them off to Big Brother.  That kind of attack 
would, if done correctly, be virtually impossible to detect.  On the other 
hand, it's not nearly as valuable as a biased generation attack - Big Brother 
would receive streams of random die tosses with little context about what the 
resulting values would be used for or how they would be used.  Appropriately 
targeted attacks might work - "I know Metzger regenerates his keys on the 3rd 
of every month at about 8:00 AM, so let's use the values he scans at around 
that time as guesses for his base random values" - but we're talking quite a 
bit of difficulty here - and the more people use the app, and the more often 
they make it a habit to toss and scan dice and just discard the results, the 
more difficult it becomes.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Sabotaged hardware (was Re: Opening Discussion: Speculation on BULLRUN)

2013-09-06 Thread Jerry Leichter
On Sep 6, 2013, at 11:37 AM, John Ioannidis wrote:
 I'm a lot more worried about FDE (full disk encryption) features on modern 
 disk drives, for all the obvious reasons.
 
If you're talking about the FDE features built into disk drives - I don't know 
anyone who seriously trusts them.  Every "secure" disk that's been analyzed has 
been found to be secured with amateur-level crypto.  I seem to recall one 
that advertised itself as using AES (you know, "military-grade encryption") which 
did something like:  Encrypt the key with AES, then XOR with the result to 
encrypt all the data.  Yes, it does indeed use AES...

There's very little to be gained, and a huge amount to be lost, by leaving the 
crypto to the drive, and whatever proprietary, hacked-up code the bit-twiddlers 
who do drive firmware decide to toss in to meet the marketing requirement of 
being able to say they are secure.  Maybe when they rely on a published 
standard, *and* provide a test mode so I can check to see that what they wrote 
to the surface is what the standard says should be there, I might change my 
mind.  At least then, I'd be worrying about deliberate attacks (which, if you 
can get into the supply chain, are trivial - there's tons of space to hide away 
a copy of the key), rather than the nonsense we have today.

 And if I wanted to be truly paranoid, I'd worry about HSMs to
 
Now, wouldn't compromising HSM's be sweet.  Not that many vendors make HSM's, 
and they are exactly the guys who already have a close relationship with the CI 
(crypto-industrial) complex
-- Jerry


 /ji

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

[Cryptography] Bruce Schneier has gotten seriously spooked

2013-09-06 Thread Jerry Leichter
A response he wrote as part of a discussion at 
http://www.schneier.com/blog/archives/2013/09/the_nsa_is_brea.html:

Q: Could the NSA be intercepting downloads of open-source encryption software 
and silently replacing these with their own versions?

A: (Schneier) Yes, I believe so.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-06 Thread Jerry Leichter
Following up on my own posting:
 [The NSA] want to buy COTS because it's much cheaper, and COTS is based on 
 standards.  So they have two contradictory constraints:  They want the stuff 
 they buy secure, but they want to be able to break in to exactly the same 
 stuff when anyone else buys it.  [Y]ou have to explain how the goal in NSA's 
 budget [of influencing the commercial crypto community to move in directions 
 NSA can attack] could be carried out in a way consistent with the two 
 constraints.
So here's a thought experiment for a particular approach:  Imagine that it's 
the case that half of all possible AES keys are actually pseudo-weak, in the 
sense that if you use one of them, some NSA cryptanalytic technique can recover 
the rest of your key with acceptable (to NSA) effort.  Their attack fails for 
the other half of all possible keys.  Further, imagine that NSA has a 
recognizer for pseudo-weak keys.  Then their next step is simple:  Get the 
crypto industry to use AES with good, randomizing key generation techniques.  
Make sure that there is more than one approved key generation technique, 
ideally even a way for new techniques to be added in later versions of the 
standard, so that approved implementations have to allow for a choice, leading 
them to separate key generation from key usage.  For the stuff *they* use, add 
another choice, which starts with one of the others and simply rejects 
pseudo-weak keys (or modifies them in some way to produce strong keys).  Then:

- Half of all messages the world sends are open to attack by NSA until the COTS 
producers learn of the attack and modify their fielded systems;
- All messages NSA is responsible for are secure, even if the attack becomes 
known to other cryptanalytic services.

I would think NSA would be very happy with such a state of affairs.  (If they 
could arrange it that 255/256 keys are pseudo-weak - well, so much the better.)
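
A sketch of the scenario in Python; is_pseudo_weak is entirely hypothetical,
standing in for the posited secret recognizer:

import os

def is_pseudo_weak(key: bytes) -> bool:
    # Hypothetical placeholder: here "weak" means the low bit of the first
    # byte is 0, so half of all keys are rejected, as in the scenario above.
    return key[0] & 1 == 0

def public_keygen() -> bytes:
    return os.urandom(16)            # the approved, randomizing method

def insider_keygen() -> bytes:
    while True:
        k = public_keygen()          # same approved generator...
        if not is_pseudo_weak(k):    # ...plus the secret filter
            return k

From the outside, insider_keygen is indistinguishable from an approved
generator; it simply never emits a key the secret attack can touch.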

Is such an attack against AES *plausible*?  I'd have to say no.  But if you 
were on the stand as an expert witness and were asked under cross-examination 
"Is this *possible*?", I contend the only answer you could give is "I suppose 
so" (with tone and body language trying to signal to the jury that you're being 
forced to give an answer that's true but you don't in your gut believe it).

Could an encryption algorithm be explicitly designed to have properties like 
this?  I don't know of any, but it seems possible.  I've long suspected that 
NSA might want this kind of property for some of its own systems:  In some 
cases, it completely controls key generation and distribution, so can make sure 
the system as fielded only uses good keys.  If the algorithm leaks without 
the key generation tricks leaking, it's not just useless to whoever grabs onto 
it - it's positively hazardous.  The gun that always blows up when the bad guy 
tries to shoot it
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-06 Thread Jerry Leichter
On Sep 6, 2013, at 8:58 PM, Jon Callas wrote:
  I've long suspected that NSA might want this kind of property for some of 
 its own systems:  In some cases, it completely controls key generation and 
 distribution, so can make sure the system as fielded only uses good keys.  
 If the algorithm leaks without the key generation tricks leaking, it's not 
 just useless to whoever grabs onto it - it's positively hazardous.  The gun 
 that always blows up when the bad guy tries to shoot it
 
 We know as a mathematical theorem that a block cipher with a back door *is* a 
 public-key system.
I'm sorry, but this is just nonsense.  You're starting with informal, rough 
definitions and claiming a mathematical theorem.

 It is a very, very, very valuable thing, and suggests other mathematical 
 secrets about hitherto unknown ways to make fast, secure public key systems. 
I said all this before.  A back door doesn't have to be fast.  It doesn't have 
to be implementable using amounts of memory that are practical for a fielded 
system.  It may require all kinds of expensive pre-computation to be useful at 
all.  It just has to allow practical attacks.  A back door that reduced the 
effective key size of AES to 40 bits would amount to an effective break of AES, 
but would be a "public key system" only in some very technical and 
uninteresting sense.

And none of this is relevant to whether one could have a system with many weak 
keys.  Some kind of structure in the round computation structure would be an 
obvious place to look.

In fact, now that I think of it, here's a rough example of such a system:  Take 
any secure round-based block cipher and decide that you're not going to use a 
round computation at all - you'll let the user specify the full expanded 
per-round key.  (People proposed doing this with DES as a way of getting beyond 
the 56-bit key size.  It didn't work - DES is inherently good for no more than 
56 bits, more or less.  In general doing this with any decent block cipher 
won't make it any stronger.  But that's not the point of the example.)  There 
are now many weak keys - all kinds of repetitive structures allow for slide 
attacks, for example.  Every bad way of designing a round computation 
corresponds to a set of weak full keys.  On the other hand, for a certain 
subset of the keys - those that could have been produced by the original (good) 
round computation - it's *exactly* the original cipher, with *exactly* the 
original security guarantees.  If I carefully use only keys from that set, 
I've lost nothing (other than wasted space for a key longer than it needs 
to be).
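
A sketch of the construction in Python; the round function and schedule are
illustrative toys, not a real cipher design:

import hashlib

def round_fn(subkey: bytes, half: bytes) -> bytes:
    return hashlib.sha256(subkey + half).digest()[:8]

def feistel_encrypt(subkeys, block16: bytes) -> bytes:
    # the "key" is the full list of per-round subkeys
    L, R = block16[:8], block16[8:]
    for sk in subkeys:
        L, R = R, bytes(a ^ b for a, b in zip(L, round_fn(sk, R)))
    return L + R

def schedule(key: bytes):
    # the "good" key schedule: 16 distinct derived subkeys
    return [hashlib.sha256(key + bytes([i])).digest() for i in range(16)]

good_subkeys = schedule(b"a real key")   # lies in the secure subset
weak_subkeys = [b"one subkey"] * 16      # repetitive: invites slide attacks
ct = feistel_encrypt(good_subkeys, b"0123456789abcdef")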

So now I have a block cipher that has two sets of keys.  One set makes it as 
secure as the original cipher; one set makes it easy to break - my back door.  
Have I just invented a new public key system?
-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-05 Thread Jerry Leichter
[This drifts from the thread topic; feel free to attach a different subject 
line to it]

On Sep 5, 2013, at 4:41 PM, Perry E. Metzger wrote:
 3) I would not be surprised if random number generator problems in a
 variety of equipment and software were not a very obvious target,
 whether those problems were intentionally added or not.
Random number generators make for a very interesting target.  Getting decent 
amounts of entropy on conventional machines is very difficult.  Servers have 
almost no random variation in their environments; desktops somewhat more; 
modern laptops, yet more.  Virtualization - now extremely common on the server 
side - makes things even harder.  But even laptops don't have much.  So we're 
left trying to distill enough randomness for security - a process that's 
error-prone and difficult to check.

So ... along comes Intel with a nice offer:  Built-in randomness on their 
latest chips.  Directly accessible to virtual machines, solving the very 
difficult problems they pose.  The techniques used to generate that randomness 
are published.  But ... how could anyone outside a few chip designers at Intel 
possibly check that the algorithm wasn't, in some way, spiked?  For that 
matter, how could anyone really even check that the outputs of the hardware "Get 
Random Value" instruction were really generated by the published algorithm?

Randomness is particularly tricky because there's really no way to test for a 
spiked random number generator (unless it's badly spiked, of course).  Hell, 
every encryption algorithm is judged by its ability to generate streams of bits 
that are indistinguishable from random bits (unless you know the key).

Now, absolutely, this is speculation.  I know of no reason to believe that the 
NSA, or anyone else, has influenced the way Intel generates randomness; or that 
there is anything at all wrong with Intel's implementation.  But if you're 
looking for places an organization like the NSA would really love to insert 
itself - well, it's hard to pick a better one.

Interestingly, though, there's good news here as well.  While it's hard to get 
at sources of entropy in things like servers, we're all carrying computers with 
excellent sources of entropy in our pockets.  Smartphones have access to a 
great deal of environmental data - accelerometers, one or two cameras, one or 
two microphones, GPS, WiFi, and cell signal information (metadata, data, signal 
strength) - more every day.  This provides a wealth of entropy, and it's hard 
to see how anyone could successfully bias more than a small fraction of it.  
Mix these together properly and you should be able to get extremely high 
quality random numbers.  Normally, we assume code on the server side is 
"better" and should take the major role in such tasks as providing randomness.  
Given what we know now about the ability of certain agencies to influence what 
runs on servers, *in general*, we need to move trust away from them.  The case 
is particularly strong in the case of randomness.
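
A sketch of the mixing step in Python; read_sensor is a hypothetical stand-in
for the platform's sensor APIs:

import hashlib, os

def read_sensor(name: str) -> bytes:
    # Hypothetical placeholder: a real app would read the accelerometer,
    # microphone, camera noise, GPS, radio signal strengths, etc.
    return os.urandom(32)

def gather_seed() -> bytes:
    h = hashlib.sha256()
    for name in ("accel", "mic", "camera", "gps", "wifi"):
        h.update(name.encode())
        h.update(read_sensor(name))  # bias in a few sources is absorbed
    return h.digest()                # 256-bit seed for a DRBG

Because every source feeds one hash, an attacker would have to bias
essentially all of them at once to control the output.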

Of course, there's a whole other layer of issue introduced by the heavily 
managed nature of phone software.
-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-05 Thread Jerry Leichter
On Sep 5, 2013, at 7:14 PM, John Kelsey wrote:
 My broader question is, how the hell did a sysadmin in Hawaii get hold of 
 something that had to be super secret?  He must have been stealing files from 
 some very high ranking people.  
This has bothered me from the beginning.  Even the first leaks involved 
material that you would expect to only be available to highly trusted people 
*well up in the organization* - they were slides selling capabilities to 
managers and unlikely to be shown to typical employees, cleared or not.  My 
immediate impression was that we were looking at some disgruntled higher-up.

The fact that these are coming from a sysadmin - who would never have reason to 
get legitimate access to pretty much *any* of the material leaked so far - is a 
confirmation of a complete breakdown of NSA's internal controls.  They seem to 
know how to do cryptography and cryptanalysis and all that stuff - but basic 
security and separation of privileges and internal monitoring ... that seems to 
be something they are just missing.

Manning got to see all kinds of material that wasn't directly related to his 
job because the operational stuff was *deliberately* opened up in an attempt to 
get better analysis.  While he obviously wasn't supposed to leak the stuff, he 
was authorized to look at it.  I doubt the same could be said of Snowden.  
Hell, when I had a data center manager working for me, we all understood that 
just because root access *let* you look at everyone's files, you were not 
*authorized* to do so without permission.

One of the things that must be keeping the NSA guys up night after night is:  
If Snowden could get away with this much without detection, who's to say what 
the Chinese or the Russians or who knows who else have managed to get?  Have 
they spiked the spikers, grabbing the best stuff the NSA manages to find?

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-05 Thread Jerry Leichter
The actual documents - some of which the Times published with few redactions - 
are worthy of a close look, as they contain information beyond what the 
reporters decided to put into the main story.  For example, at 
http://www.nytimes.com/interactive/2013/09/05/us/documents-reveal-nsa-campaign-against-encryption.html?ref=us&pagewanted=all, 
the following goal appears for FY 2013:  "Complete enabling for 
[redacted] encryption chips used in Virtual Public Network and Web encryption 
devices."  The Times adds the following note:  "Large Internet companies use 
dedicated hardware to scramble traffic before it is sent. In 2013, the agency 
planned to be able to decode traffic that was encoded by one of these two 
encryption chips, either by working with the manufacturers of the chips to 
insert back doors or by exploiting a security flaw in the chips' design."  It's 
never been clear whether these kinds of notes are just guesses by the 
reporters, come from their own sources, or come from Snowden himself.  The 
Washington Post got burned on one they wrote.  
But in this case, it's hard to come up with an alternative explanation.

Another interesting goal:  "Shape worldwide commercial cryptography marketplace 
to make it more tractable to advanced cryptanalytic capabilities being 
developed by NSA/CSS."  Elsewhere, "enabling access" and "exploiting systems of 
interest" and "inserting vulnerabilities."  These are all side-channel attacks. 
 I see no other reference to cryptanalysis, so I would take this statement at 
face value:  NSA has techniques for doing cryptanalysis on certain 
algorithms/protocols out there, but not all, and they would like to steer 
public cryptography into whatever areas they have attacks against.  This makes 
any NSA recommendation *extremely* suspect.  As far as I can see, the big push 
NSA is making these days is toward ECC with some particular curves.  Makes you 
wonder.  (I know for a fact that NSA has been interested in this area of 
mathematics for a *very* long time:  A mathematician I knew working in the area 
of algebraic curves (of which elliptic curves are an example) was recruited by 
- and went to - NSA in about 1975.  I heard indirectly from him after he was at 
NSA, where he apparently joined an active community of people with related 
interests.  This is a decade before the first public suggestion that elliptic 
curves might be useful in cryptography.  But maybe NSA was just doing a public 
service, advancing the mathematics of algebraic curves.)

NSA has two separate roles:  Protect American communications, and break into 
the communications of adversaries.  Just this one example shows that either (a) 
the latter part of the mission has come to dominate the former; or (b) the 
current definition of an adversary has become so broad as to include pretty 
much everyone.

Now, the NSA will say:  Only *we* can make use of these back doors.  But given 
the ease with which Snowden got access to so much information ... why should we 
believe they can keep such secrets?
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-05 Thread Jerry Leichter
On Sep 5, 2013, at 10:19 PM, Jon Callas wrote:
 I don't disagree by any means, but I've been through brittleness with both 
 discrete log and RSA, and it seems like only a month ago that people were 
 screeching to get off RSA over to ECC to avert the cryptocalypse. And that 
 the ostensible reason was that there are new discrete log attacks -- which 
 was just from Mars and I thought that that proved the people didn't know what 
 they were talking about. Oh, wait, it *was* only a month ago! Silly me.
 
 Crypto experts issue a call to arms to avert the cryptopocalypse
 
 http://arstechnica.com/security/2013/08/crytpo-experts-issue-a-call-to-arms-to-avert-the-cryptopocalypse/
 
 Discrete log has brittleness. RSA has brittleness. ECC is discrete log over a 
 finite field that's hard to understand. It all sucks.
Perhaps it's time to move away from public-key entirely!  We have a classic 
paper - Needham and Schroeder, maybe? - showing that private key can do 
anything public key can; it's just more complicated and less efficient.

Not only are the techniques brittle and increasingly under suspicion, but in
practice almost all of our public key crypto inherently relies on CA's - a 
structure that's just *full* of well-known problems and vulnerabilities.  
Public key *seems to* distribute the risk - you just get the other guy's 
public key and you can then communicate with him safely.  But in practice it 
*centralizes* risks:  In CA's, in single magic numbers that, if revealed, allow 
complete compromise of all connections to a host (and we now suspect they 
*are* being revealed).

We need to re-think everything about how we do cryptography.  Many decisions 
were made based on hardware limitations of 20 and more years ago.  "More 
efficient" claims from the 1980's often mean nothing today.  Many decisions 
assumed trust models (like CA's) that we know are completely unrealistic.  
Mobile is very different from the server-to-server and dumb-client-to-server 
models that were all anyone thought about at the time.  (Just look at SSL:  It 
has the inherent assumption that the server *must* be authenticated, but the client 
... well, that's optional and rarely done.)  None of the work then anticipated 
the kinds of attacks that are practical today.

I pointed out in another message that today, mobile endpoints potentially have 
access to excellent sources of randomness, while servers have great difficulty 
getting good random numbers.  This is the kind of fundamental change that needs 
to inform new designs.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-05 Thread Jerry Leichter
 Another interesting goal:  Shape worldwide commercial cryptography 
 marketplace to make it more tractable to advanced cryptanalytic capabilities 
 being developed by NSA/CSS. ... This makes any NSA recommendation 
 *extremely* suspect.  As far as I can see, the big push NSA is making these 
 days is toward ECC with some particular curves.  Makes you wonder.
 Yes, but. The reason we are using those curves is because they want them for 
 products they buy. 
They want to buy COTS because it's much cheaper, and COTS is based on standards.  
So they have two contradictory constraints:  They want the stuff they buy 
secure, but they want to be able to break in to exactly the same stuff when 
anyone else buys it.  The time-honored way to do that is to embed some secret 
in the design of the system.  NSA, knowing the secret, can break in; no one 
else can.  There have been claims in this direction since NSA changed the 
S-boxes in DES.  For DES, we now know that was to protect against differential 
cryptanalysis.  No one's ever shown a really convincing case of such an 
embedded secret hack being done ... but now if you claim it can't happen, you 
have to explain how the goal in NSA's budget could be carried out in a way 
consistent with the two constraints.  Damned if I know

 (I know for a fact that NSA has been interested in this area of mathematics 
 for a *very* long time:  A mathematician I knew working in the area of 
 algebraic curves (of which elliptic curves are an example) was recruited by 
 - and went to - NSA in about 1975
 I think it might even go deeper than that. ECC was invented in the civilian 
 world by Victor Miller and Neal Koblitz (independently) in 1985, so they've 
 been planning for breaking it even a decade before its invention. 
I'm not sure exactly what you're trying to say.  Yes, Miller and Koblitz are 
the inventors of publicly known ECC, and a number of people (Diffie, Hellman, 
Merkle, Rivest, Shamir, Adleman) are the inventors of publicly known public-key 
cryptography.  But in fact we now know that Ellis, Cocks, and Williamson at 
GCHQ anticipated their public key cryptography work by several years - but in 
secret.

I think the odds are extremely high that NSA was looking at cryptography based 
on algebraic curves well before Miller and Koblitz.  Exactly what they had 
developed, there's no way to know.  But of course if you want to do good 
cryptography, you also have to do cryptanalysis.  So, yes, it's quite possible 
that NSA was breaking ECC a decade before its (public) invention.  :-)

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Hashes into Ciphers

2013-09-04 Thread Jerry Leichter
The first publication of differential cryptanalysis was at CRYPTO'90.  I 
highly doubt Karn analyzed his construction relative to DC.  (His post 
certainly makes no mention of it.)

At first glance - I certainly haven't worked this through - it should be 
straightforward to construct a hash with all kinds of desirable *hash* 
properties that would, in Karn's construction, produce a cipher highly 
vulnerable to DC.  (That is:  This is not a safe *generic* construction, and 
I'm not sure exactly what requirements you'd have to put on the hash in order 
to be sure you got a DC-resistant cipher.)
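
For reference, here is the generic shape of such a construction - a sketch in
the spirit of Karn's posting, not necessarily his exact design; whether its
round function resists DC is precisely the open question above.  Decryption
only ever runs the hash forward:

import hashlib

def H(key: bytes, data: bytes) -> bytes:
    return hashlib.sha256(key + data).digest()[:16]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(k1: bytes, k2: bytes, block32: bytes) -> bytes:
    L, R = block32[:16], block32[16:]
    R = xor(R, H(k1, L))
    L = xor(L, H(k2, R))
    return L + R

def decrypt(k1: bytes, k2: bytes, block32: bytes) -> bytes:
    L, R = block32[:16], block32[16:]
    L = xor(L, H(k2, R))      # undo the rounds in reverse order
    R = xor(R, H(k1, L))
    return L + R

pt = bytes(range(32))
assert decrypt(b"k1", b"k2", encrypt(b"k1", b"k2", pt)) == pt
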
-- Jerry

On Sep 4, 2013, at 10:49 AM, Perry E. Metzger pe...@piermont.com wrote:

 On Wed, 4 Sep 2013 10:37:12 -0400 Perry E. Metzger
 pe...@piermont.com wrote:
 Phil Karn described a construction for turning any hash function
 into the core of a Feistel cipher in 1991. So far as I can tell,
 such ciphers are actually quite secure, though impractically slow.
 
 Pointers to his original sci.crypt posting would be appreciated, I
 wasn't able to find it with a quick search.
 
 Answering my own question
 
 https://groups.google.com/forum/#!original/sci.crypt/tTWR2qIII0s/iDvT3ptY5CEJ
 
 Note that Karn's construction need not use any particular hash
 function -- he's more or less simply describing how to use a hash
 function of any sort as the heart of a Feistel cipher.
 
 Perry
 -- 
 Perry E. Metzger  pe...@piermont.com
 ___
 The cryptography mailing list
 cryptography@metzdowd.com
 http://www.metzdowd.com/mailman/listinfo/cryptography

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] FIPS, NIST and ITAR questions

2013-09-04 Thread Jerry Leichter
On Sep 4, 2013, at 10:45 AM, Faré fah...@gmail.com wrote:
 Can't you trivially transform a hash into a PRNG, a PRNG into a
 cypher, and vice versa?
 No.
 
 
 Let H(X) = SHA-512(X) || SHA-512(X)
 where '||' is concatenation.  Assuming SHA-512 is a cryptographically secure 
 hash H trivially is as well.  (Nothing in the definition of a cryptographic 
 hash function says anything about minimality.)  But H(X) is clearly not 
 useful for producing a PRNG.
 
 Just because it's trivial to produce bogus crypto doesn't mean it's
 non-trivial to produce good crypto, given a few universal recipes.
Look, if you want to play around and produce things that look secure to you and a 
few of your buddies - feel free to go ahead.  If your system is only used by 
you and a few friends, it's unlikely anyone with the appropriate skills will 
ever care enough to attack your system, and you'll be secure.  As always, 
security is mainly an *economic* question, not a purely technical one.

But if you want to play in the crypto game as it's actually played today - if 
you want something that will survive even if you use it to protect information 
that has significant value to someone willing to make the investment to get it 
from you - well, you're going to have to up your game.  You're playing at 
1980's levels.  The world has moved on - your opponents won't feel constrained 
to do the same.

-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] A strategy to circumvent patents?

2013-09-03 Thread Jerry Leichter
On Sep 3, 2013, at 12:45 PM, Faré fah...@gmail.com wrote:

 Don't write the code. Write a reasonably general software solver that
 finds a program that fulfill given specifications, given a minimum
 number of hints. Then write a specification for the problem (e.g.
 finding a nice elliptic curve with interesting properties) and let the
 solver find them.
 
 You didn't explicitly write the solution. Now, the patent troll has to
 argue that the problem itself, not the solution, is covered by the
 patent — which I believe is not supported by patent law. Or he has to
 argue that the solution isn't obvious to someone versed in the arts,
 which it is, since a trivial automated program could find the
 solution.
Sigh.  Beware hackers practicing law.

A patent gives someone the right to exclude anyone else from making, using, or 
selling the patented invention.  You can certainly write a program such as you 
describe, since it is itself presumably not covered by the patent.  And most 
likely, even if it stumbled upon exactly the same elliptic curve as is covered 
by a patent, running the program would not be seen as making the invention. 
The inventions that are out there seem to be of two classes:  Covering doing 
crypto over some particular curves; and various particular implementation 
techniques.  Now that your program has found the magic curve - which, of 
course, you could much more easily have found by reading the patent, at least 
in principle - you can neither use nor sell products that do crypto over that 
magic curve, nor apply the particular techniques, without permission from the 
patent holder.

Given the state of patent law and how obviousness is currently interpreted, 
in practice, it's almost impossible to invalidate a patent on those grounds.  
(The patent holder would argue that, were it not for the existing patent, you 
would never have come up with just the right inputs to feed into your program 
to have it come up with the patented technique.  And, frankly, he'd be right.)  
Recent Supreme Court rulings may have brought some life back into 
obviousness, but it'll be a while before we know for sure.

Don't try to play games with patent law; you can get hurt.  A court, after 
likely invalidating your attempted hack-around, would then turn around and see 
it as evidence that your infringement was willful, perhaps even a deliberate 
attempt at patent fraud, and hit you up for additional damages.

The situation sucks, but we're stuck with it.

A few weeks ago, I would have said go ahead and do what PGP did:  ignore the 
patent, distribute an open source version, and let the patent holder - who up 
until now hasn't been at all aggressive, with only one reported law suit - try 
to figure out how to get a handle on all the copies.  But the situation 
has changed with the financial problems of BlackBerry, which now owns the 
patents.  Those patents will go to *someone*, and if they end up with a patent 
troll - whose whole business model is based on finding people to go after, often 
in ways deliberately designed to make defending oneself economically 
infeasible - they'll know how to make money off of your attempt at improving 
the world.  :-(  We're just going to have to wait and see who ends up with those 
patents

-- Jerry

 
 —♯ƒ • François-René ÐVB Rideau •ReflectionCybernethics• http://fare.tunes.org
 *EULA: By reading or responding to this message you agree that all my stated
 or unstated opinions are correct.* EULA — patent pending.
 ___
 The cryptography mailing list
 cryptography@metzdowd.com
 http://www.metzdowd.com/mailman/listinfo/cryptography

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] FIPS, NIST and ITAR questions

2013-09-03 Thread Jerry Leichter
On Sep 3, 2013, at 3:16 PM, Faré fah...@gmail.com wrote:
 Can't you trivially transform a hash into a PRNG, a PRNG into a
 cypher, and vice versa?
No.

 hash-PRNG: append blocks that are digest (seed ++ counter ++ seed)
Let H(X) = SHA-512(X) || SHA-512(X)
where '||' is concatenation.  Assuming SHA-512 is a cryptographically secure 
hash H trivially is as well.  (Nothing in the definition of a cryptographic 
hash function says anything about minimality.)  But H(X) is clearly not useful 
for producing a PRNG.

If you think this is obviously wrong, consider instead:

H1(X) = SHA-512(X) || SHA-512(SHA-512(X))

Could you determine, just from black-box access to H1, that it's equally bad as 
a PRNG?  (You could certainly do it with about 2^256 calls to H1 with distinct 
inputs - by then you have a .5 chance of a duplicated top half of the output, 
almost certainly with a distinct bottom half.  But that's a pretty serious bit 
of testing)

I don't actually know if there exists a construction of a PRNG from a 
cryptographically secure hash function.  (You can build a MAC, but even that's 
not trivial; people tried all kinds of things that failed until the HMAC 
construction was proven correct.)
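
For concreteness, here are the two constructions as runnable Python - H is
visibly structured, while H1 hides the same defect behind one extra
application of the hash:

import hashlib

def sha512(x: bytes) -> bytes:
    return hashlib.sha512(x).digest()

def H(x: bytes) -> bytes:
    return sha512(x) + sha512(x)          # collision-resistant, awful PRNG

def H1(x: bytes) -> bytes:
    return sha512(x) + sha512(sha512(x))  # same defect, now invisible

out = H(b"seed")
assert out[:64] == out[64:]               # visibly non-random structure
out1 = H1(b"seed")
assert out1[64:] == sha512(out1[:64])     # seeing this needs the relation
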
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] NSA and cryptanalysis

2013-09-02 Thread Jerry Leichter
On Sep 1, 2013, at 6:06 PM, Perry E. Metzger wrote:
 We know what they spec for use by the rest of the US government in
 Suite B.
 
 http://www.nsa.gov/ia/programs/suiteb_cryptography/
 
  AES with 128-bit keys provides adequate protection for classified
  information up to the SECRET level. Similarly, ECDH and ECDSA using
  the 256-bit prime modulus elliptic curve as specified in FIPS PUB
  186-3 and SHA-256 provide adequate protection for classified
  information up to the SECRET level. Until the conclusion of the
  transition period defined in CNSSP-15, DH, DSA and RSA can be used
  with a 2048-bit modulus to protect classified information up to the
  SECRET level.
 
  AES with 256-bit keys, Elliptic Curve Public Key Cryptography using
  the 384-bit prime modulus elliptic curve as specified in FIPS PUB
  186-3 and SHA-384 are required to protect classified information at
  the TOP SECRET level. Since some products approved to protect
  classified information up to the TOP SECRET level will only contain
  algorithms with these parameters, algorithm interoperability between
  various products can only be guaranteed by having these parameters as
  options.
 
 We clearly cannot be absolutely sure of what they actually use, but
 we know what they procure commercially. If you feel this is all a big
 disinformation campaign, please feel free to give evidence for that. I
 certainly won't exclude the possibility, but I find it unlikely.
I'll make just a couple of comments:

- Given the huge amount of material classified these days, SECRET doesn't seem 
to be a very high level any more, whatever its official definition.  TOP SECRET 
still means a great deal though.  But the really important stuff is 
compartmented (SCI), and Suite B is not approved for it - it has to be 
protected by unpublished Suite A algorithms.

- Now let's look at what they want for TOP SECRET.  First off, RSA - accepted 
for a transition period for SECRET, and then only with 2048 bit moduli, which 
until the last year or so were almost unknown in commercial settings - is 
completely out for TOP SECRET.  So clearly their faith in RSA is gone.  (Same 
for DH and DSA.)  It looks as if they are betting that factoring and discrete 
logs over the integers aren't as hard as people had thought.

The whole business of AES-128 vs. AES-256 has been interesting from day one.  
Too many recommendations for using it are just based on some silly idea that 
bigger numbers are better - 128 bits is already way beyond brute force attacks. 
The two use the same transforms and the same key schedule.  The only clear 
advantage AES-256 has is 4 extra rounds - any attack against the basic 
algorithm would almost certainly apply to both.  On the other hand, many 
possible cracks might require significantly heavier computation for AES-256, 
even if the same fundamental attack works.  One wonders

NSA also wants SHA-384 - which is interesting given recent concerns about 
attacks on SHA-1 (which so far don't seem to extend to SHA-384).

I don't want to get into deep conspiracy and disinformation campaign theories.  
My read of the situation is that at the time NSA gave its approval to this 
particular combination of ciphers, it believed they were secure.  They seem to 
be having some doubts about RSA, DSA, and DH, though that could be, or could be 
justified as, ECC being as strong with much smaller, more practical, key 
lengths.

Now, imagine that NSA really did find a way in to AES.  If they were to 
suddenly withdraw approval for its use by the government, they would be 
revealing their abilities.  A classic conundrum:  How do you make use of the 
fruits of your cryptanalytic efforts without revealing that you've made 
progress?  England accepted bombing raids on major cities to keep their crack 
of Enigma secret.  So the continuation of such support tells us little.  What 
will be interesting to see is how long the support continues.  With work under 
way to replace SHA, a new version of the NSA recommendations will eventually 
have to be produced.  Will it, for example, begin a phase-out of AES-128 for 
SECRET communications in favor of requiring AES-256 there as well?  (Since 
there's no call so far to develop a cipher to replace AES, it would be 
difficult for NSA to recommend something else.)

It's indeed a "wilderness of mirrors", and we can only guess.  But I'm very 
wary of using NSA's approval of a cipher as strong evidence, as the overall 
situation is complex and has so many tradeoffs.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] NSA and cryptanalysis

2013-09-02 Thread Jerry Leichter
On Sep 1, 2013, at 10:35 PM, James A. Donald wrote:
 Meanwhile, on the authentication side, Stuxnet provided evidence that the 
 secret community *does* have capabilities (to conduct a collision attacks) 
 beyond those known to the public - capabilities sufficient to produce fake 
 Windows updates.
 
 Do we know they produced fake windows updates without assistance from 
 Microsoft?
For some version of know.  From 
http://arstechnica.com/security/2012/06/flame-malware-was-signed-by-rogue-microsoft-certificate/:

Microsoft released an emergency Windows update on Sunday after revealing that 
one of its trusted digital signatures was being abused to certify the validity 
of the Flame malware that has infected computers in Iran and other Middle 
Eastern Countries.

The compromise exploited weaknesses in Terminal Server, a service many 
enterprises use to provide remote access to end-user computers. By targeting an 
undisclosed encryption algorithm Microsoft used to issue licenses for the 
service, attackers were able to create rogue intermediate certificate 
authorities that contained the imprimatur of Microsoft's own root authority 
certificate—an extremely sensitive cryptographic seal. Rogue intermediate 
certificate authorities that contained the stamp were then able to trick 
administrators and end users into trusting various Flame components by falsely 
certifying they were produced by Microsoft

Based on the language in Microsoft's blog posts, it's impossible to rule out 
the possibility that at least one of the certificates revoked in the update was 
... created using [previously reported] MD5 weaknesses [which allowed collision 
attacks]. Indeed, two of the underlying credentials used MD5, while the third 
used the more advanced SHA-1 algorithm. In a Frequently Asked Questions section 
of Microsoft Security Advisory (2718704), Microsoft's security team also said: 
During our investigation, a third Certificate Authority has been found to have 
issued certificates with weak ciphers. The advisory didn't elaborate.

-- Jerry



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] NSA and cryptanalysis

2013-09-02 Thread Jerry Leichter
On Sep 2, 2013, at 1:25 PM, Perry E. Metzger wrote:

 On Mon, 2 Sep 2013 00:06:21 -0400 Jerry Leichter leich...@lrw.com
 wrote:
 - Now let's look at what they want for TOP SECRET.  First off, RSA -
 accepted for a transition period for SECRET, and then only with
 2048 bit moduli, which until the last year or so were almost
 unknown in commercial settings - is completely out for TOP SECRET.
 So clearly their faith in RSA is gone.
 
 That is a misunderstanding.
 
 If you look at the way that the NSA specs these things, they try to
 keep all portions of a system of equal security so none is the weak
 point. A 2048 bit RSA key is factored vastly more easily than a 256
 bit AES key is brute forced (that's just public knowledge -- try doing
 the back of the envelope yourself) so that size key would be
 insufficient. However, a sufficiently large RSA key to be correctly
 sized for 256 bit AES is totally impractical for performance reasons,
 see:
 
 http://www.nsa.gov/business/programs/elliptic_curve.shtml
a)  The very reference you give says that to be equivalent to 128 bits 
symmetric, you'd need a 3072 bit RSA key - but they require a 2048 bit key.  
And the same reference says that to be equivalent to 256 bits symmetric, you 
need a 521 bit ECC key - and yet they recommend 384 bits.  So, no, even by that 
page, they are not recommending equivalent key sizes - and in fact the page 
says just that.

b)  Those comparisons long ago became essentially meaningless.  On the 
symmetric side, it's using brute force attack strengths.  But no one is going 
to brute force a 128-bit key with any known or suggested technology, and brute 
force attacks against 256-bit keys are way beyond what physics says is even 
remotely possible.  (I posted on this a long time back:  Any theory even 
vaguely consistent with what we know about quantum mechanics places a limit on 
the number of elementary bit flips in a finite volume of space-time.  If you 
want an answer in 100 years, your computer is at most a region of space-time 
100 light-years across and 100 years deep - and that's a gross 
overestimate.  My quick calculation showed that the quantum limit for that 
region is not far above 128 bits.)

In any real terms, *if you're talking brute force*, 128 bits and 256 bits - and 
a million bits, if you want to go nuts about it - are indistinguishable.
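
The back-of-envelope is worth writing out once (the 10^18 keys/second rate is
an assumption, picked to be absurdly generous compared with any public rig):

SECONDS_PER_YEAR = 3.156e7
rate = 1e18                                   # keys/second (assumption)
expected_years = 2**127 / rate / SECONDS_PER_YEAR   # half the keyspace
print(f"{expected_years:.2e} years")          # ~5.4e12 years; the universe
                                              # is ~1.4e10 years old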

For the other columns, they don't say where the difficulty estimate comes from. 
(You could get a meaningless estimate by requiring that the number of primes of 
the size quoted be equivalent to the number of symmetric keys, but I'm assuming 
they're being more intelligent about the estimate than that, as a brute force 
attack on primes makes no sense at all.)  What makes more sense - and what they 
are presumably using - is the number of operations needed by the best known 
algorithm.  But now we're at the point of comparing impossible attacks against 128- 
and 256-bit symmetric keys with impossible attacks against 3072- or 15360-bit 
RSA keys - a waste of time.  The relevant point is that attacks against RSA 
keys have been getting better faster than predicted, while the best publicly 
known attacks against AES have barely moved the needle from simple brute force.

Given *currently publicly known algorithms*, a 2048 bit RSA key is still 
secure.  (The same page shows that as equivalent to a 112-bit symmetric key, 
which is not only beyond any reasonable-term brute force attack, but longer 
than the keys used - according to some reports, anyway - on some Suite A 
algorithms.)

 So clearly the purpose of pushing ECC for this application is that
 they want the public key algorithm and its key size to have comparable
 security while both performing reasonably well.
 (Same for DH and DSA.)
 It looks as if they are betting that factoring and discrete logs
 over the integers aren't as hard as people had thought.
And here we actually agree.  Note that I didn't say there was any evidence that 
NSA was ahead of the public state of the art - even given the public state of 
the art and the rate that it's advancing, using Z/p as a field is rapidly 
fading as a realistic alternative.  NSA, looking forward, would be making the 
recommendation to move to elliptic curves whether or not they could do better 
than the public at large.  So we can't read much into that aspect of it.  
However, note (a) that if NSA does have a theoretical breakthrough, factoring 
is probably more likely than AES - we know they've hired many people in related 
fields over many years, and even in public the state of the art has been 
advancing; (b) most of the Internet is way behind recommendations that are now 
out there for everyone.  Google recently switched to 2048 bit keys; hardly any 
other sites have done so, and some older software even has trouble talking to 
Google as a result.

 Not at all, and the rationale is public and seen above.
 
 I believe you're incorrectly claiming that we know much less than we

Re: [Cryptography] NSA and cryptanalysis

2013-09-02 Thread Jerry Leichter
 Do we know they produced fake windows updates without assistance
 from Microsoft?
 
 Given the reaction from Microsoft, yes.
 
 The Microsoft public affairs people have been demonstrating real
 anger at the Flame attack in many forums.
 
 ...Clearly, as things like bad vendor drivers updates have been sent out
 using stolen keys in the past, and clearly vendors might simply make
 mistakes in the future

Except that that's not what happened in this case.

Someone took an old, valid Microsoft license - which should never have been 
issued, and which was blocked on Vista and Windows 7.  They worked around the 
block using a technique that required the ability to produce MD5 collisions, 
which allowed them to spoof Windows Update.  All the details are at 
http://trailofbits.files.wordpress.com/2012/06/flame-md5.pdf.

A cryptographic approach for producing chosen-prefix collisions in MD5 was 
presented at CCC in 2008, with a cost estimate of about $20K on a 2008 Amazon 
EC2 cluster - the authors showed a POC using a cluster of PS3's.  Open source 
code to implement the attack was published in 2009.
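
For intuition about where the workload goes: a chosen-prefix attack starts 
with a birthday search that brings the two intermediate hash states close 
together, then appends near-collision blocks to cancel the remaining 
difference.  Here's the birthday step at toy scale - 32 bits of MD5 instead of 
the full 128, so it finishes in seconds rather than on a cluster of PS3's.  A 
sketch only; the helper names are mine:

    import hashlib, os

    def h32(data):
        # 32-bit truncation of MD5, standing in for the full state
        return hashlib.md5(data).digest()[:4]

    def birthday_suffixes(prefix_a, prefix_b):
        # Find sa, sb with h32(prefix_a+sa) == h32(prefix_b+sb).
        # Expected work ~ 2^16 hashes per side (birthday bound, 32 bits).
        seen_a, seen_b = {}, {}
        while True:
            s = os.urandom(8)
            da, db = h32(prefix_a + s), h32(prefix_b + s)
            if da in seen_b:
                return s, seen_b[da]
            if db in seen_a:
                return seen_a[db], s
            seen_a[da], seen_b[db] = s, s

    sa, sb = birthday_suffixes(b'prefix one: ', b'prefix two: ')
    assert h32(b'prefix one: ' + sa) == h32(b'prefix two: ' + sb)

At the full 128 bits, the same naive search costs ~2^64 hashes - which is why 
the published attack needed the near-collision machinery to bring the price 
down to the $20K range.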

However, the form of the collision apparently didn't match the published code, 
nor, more fundamentally, the theoretical work that made it possible.  Someone 
has a *different*, so far unpublished attack.  The comment that this required 
world-class cryptanalysis came from Marc Stevens, the developer of the 
published chosen-prefix attack.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] NSA and cryptanalysis

2013-09-01 Thread Jerry Leichter
On Sep 1, 2013, at 2:36 AM, Peter Gutmann wrote:

 John Kelsey crypto@gmail.com writes:
 
 If I had to bet, I'd bet on bad rngs as the most likely source of a
 breakthrough in decrypting lots of encrypted traffic from different sources.
 
 If I had to bet, I'd bet on anything but the crypto.  Why attack when you can
 bypass [1].
Well, sure.  But ... I find it hard to be quite so confident.

In practical terms, the vast majority of encrypted data in the world, whether 
in motion or at rest, is protected by one of two algorithms:  RSA and AES.  In 
some cases, RSA is used to encrypt AES keys, so an RSA break amounts to a 
bypass of AES.  If you want to consider signatures and authentication, you come 
back to RSA again, and add SHA-1.
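
To be concrete about what that bypass means: the standard hybrid pattern wraps 
a fresh AES key under RSA, so breaking the RSA layer exposes the AES key 
without ever touching AES.  A minimal sketch, using the pyca/cryptography 
package (my choice of library; any RSA+AES implementation shows the same 
structure):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    recipient = rsa.generate_private_key(public_exponent=65537,
                                         key_size=2048)

    # AES protects the data; RSA protects the AES key.
    aes_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, b'the actual message', None)

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = recipient.public_key().encrypt(aes_key, oaep)

    # Recover the RSA private key, and wrapped_key yields aes_key -
    # the AES layer falls without being attacked directly.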

This is not to say there aren't other techniques out there, or that new ones 
aren't being developed.  But to NSA it's clearly a game of numbers - and any 
kind of wedge into either of just two algorithms would expose huge amounts of 
traffic to interception.

Meanwhile, on the authentication side, Flame provided evidence that the secret 
community *does* have capabilities (to conduct collision attacks) beyond those 
known to the public - capabilities sufficient to produce fake
Windows updates.  And recent evidence elsewhere (e.g., using a bug in the 
version of Firefox in the Tor Browser Bundle) has shown an interest and ability 
to actively attack systems.  (Of course, being able to decrypt information 
without an active attack is always the ideal, as it leaves no traces.)

I keep seeing statements that modern cryptographic algorithms are secure, 
don't worry - but if you step back a bit, it's really hard to justify such 
statements.  We *know*, in a sense, that RSA is *not* secure:  Advances in 
factoring have come faster than expected, so recommended key sizes have also 
been increasing faster than expected.  Most of the world's sites will always be 
well behind the recommended sizes.  Yes, we have alternatives like ECC, but 
they don't help the large number of sites that don't use them.

Meanwhile, just what evidence do we really have that AES is secure?  It's 
survived all known attacks.  Good to know - but consider that until the 
publication of differential cryptanalysis, the public state of knowledge 
contained essentially *no* generic attacks newer than the WW II era attacks on 
Enigma.  DC, and to a lesser degree linear cryptanalysis not long after, 
rendered every existing block cipher (other than DES, which was designed with 
secret knowledge of DC) obsolete in one stroke.  There's been incremental 
progress since, but no breakthrough of a similar magnitude - in public.  Is 
there really anything we know about AES that precludes the possibility of such 
a breakthrough?

There's a fundamental question one should ask in designing a system:  Do you 
want to protect against targeted attacks, or do you want to protect against 
broad fishing attacks?

If the former, the general view is that if an organization with the resources 
of the NSA wants to get in, they will - generally by various kinds of bypass 
mechanisms.

For the latter, the cryptographic monoculture *that the best practices insist 
on* - use standard protocols, algorithms and codes; don't try to invent or even 
implement your own crypto; design according to Kerckhoffs's principle that only 
the key is secret - is exactly the *wrong* advice:  You're allowing the 
attacker to amortize his attacks on you with attacks on everyone else.

If I were really concerned about my conversations with a small group of others 
being intercepted as part of dragnet operations, I'd design my own small 
variations on existing protocols.  Mix pre-shared secrets into a DH exchange to 
pick keys.  Use simple steganography to hide a signal in anything being signed 
- if something shows up signed without that signal, I'll know (a) it's not 
valid; (b) someone has broken in.  Modify AES in some way - e.g., insert an XOR 
with a separate key between two rounds.  A directed attack would eventually 
break all this, but generic attacks would fail.  (You could argue that the 
failure of generic attacks would cause my connections to stand out and thus 
draw attention.  This is, perhaps, true - it depends on the success rate of the 
generic attacks, and on how many others are playing the same games I am.  
There's no free lunch.)
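
To make the first of those concrete: mixing a pre-shared secret into the keys 
derived from a DH exchange is a one-liner.  A sketch (the names are mine):

    import hmac, hashlib

    def session_key(dh_shared, psk):
        # Bind the pre-shared secret into the DH output; an attacker
        # who breaks the DH exchange still lacks the PSK, so generic
        # decryption tooling built for the standard protocol fails.
        return hmac.new(psk, dh_shared, hashlib.sha256).digest()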

It's interesting that what little evidence we have about NSA procedures - 
from the design of Clipper to Suite B - hints that they deploy multiple 
cryptosystems tuned to particular needs.  They don't seem to believe in a 
monoculture - at least for themselves.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] NSA and cryptanalysis

2013-09-01 Thread Jerry Leichter

On Sep 1, 2013, at 2:11 PM, Perry E. Metzger wrote:

 On Sun, 1 Sep 2013 07:11:06 -0400 Jerry Leichter leich...@lrw.com
 wrote:
 Meanwhile, just what evidence do we really have that AES is
 secure?
 
 The fact that the USG likes using it, too.
We know they *say in public* that it's acceptable.  But do we know what they 
*actually use*?

 
 That's also evidence for elliptic curve techniques btw.
Same problem.
-- Jerry

 Perry
 -- 
 Perry E. Metzger  pe...@piermont.com

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] NSA and cryptanalysis

2013-08-31 Thread Jerry Leichter
On Aug 31, 2013, at 2:02 PM, Ray Dillinger wrote:
 ...  It is both
 interesting and peculiar that so little news of quantum computing has been
 published since.
I don't understand this claim.  Shor's work opened up a really hot new area 
that both CS people and physicists (and others as well) have rapidly jumped 
into.  There's been a huge amount of publication on quantum computing and, more 
generally, the field of quantum information.  No one - at least publicly - 
claims to know how to build a non-toy quantum computer here (the D-wave 
machine, if it's really doing quantum computation, is a special kind of machine 
and couldn't run Shor's algorithm, for example).  But there are many reported 
advances on the physics.  Simultaneously, there's quite a bit of published work 
on the algorithmic/complexity side as well.

A look at http://en.wikipedia.org/wiki/Quantum_computer will readily confirm 
this.  If you want to dig deeper, there's Scott Aaronson's blog at 
http://www.scottaaronson.com/blog/

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] NSA and cryptanalysis

2013-08-30 Thread Jerry Leichter
So the latest Snowden data contains hints that the NSA (a) spends a great deal 
of money on cracking encrypted Internet traffic; (b) recently made some kind of 
a cryptanalytic breakthrough.  What are we to make of this?  (Obviously, this 
will all be wild speculation unless Snowden leaks more specific information - 
which wouldn't fit his style, at least as demonstrated so far.)

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] The Case for Formal Verification

2013-08-30 Thread Jerry Leichter
On Aug 29, 2013, at 7:00 PM, Phillip Hallam-Baker wrote:
 ...The code synthesis scheme I developed was an attempt to address the 
 scaling problem from the other end. The idea being that to build a large 
 system you create a very specific programming language that is targeted at 
 precisely that class of problems. Then you write a back end that converts the 
 specification into code for that very restricted domain. If you want a formal 
 proof you have the synthesizer generate it from the specification as well. 
 That approach finesses the problem of having to validate the synthesizer 
 (which would likely take category theory) because only the final target code 
 need be validated
Many years back, I did some work with Naftaly Minsky at Rutgers on his idea of 
law-governed systems.  The idea was that you have an operational system that 
is wrapped within a law enforcement system, such that all operations relevant 
to the law have to go through the enforcer.  Then you write the law to 
specify certain global properties that the implementation must always exhibit, 
and leave the implementation to do what it likes, knowing that the enforcer 
will force it to remain within the law.

You can look at this in various ways in modern terms:  As a generalized 
security kernel, or as the reification of the attack surface of the system.

Minsky's interests were more on the software engineering side, and he and a 
couple of grad students eventually put together a law-governed software 
development environment, which could control such things as how modules were 
allowed to be coupled.  (The work we did together was on an attempt to add a 
notion of obligations to the law, so that you could not just forbid certain 
actions, but also require others - e.g., if you receive message M, you must 
within t seconds send a response; otherwise the law enforcer will send one for 
you.  I'm not sure where that went after I left Rutgers.)
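
To give the flavor of the obligation mechanism in code - a toy sketch, with 
names and structure that are mine, not Minsky's:

    import threading

    class LawEnforcer:
        # Toy obligation: once a message is delivered, the target must
        # reply within `deadline` seconds, or the enforcer replies for
        # it.  The law lives in this wrapper; the implementation behind
        # `target` can do whatever it likes.
        def __init__(self, target, deadline, default_reply):
            self.target = target
            self.deadline = deadline
            self.default_reply = default_reply

        def deliver(self, message, send_reply):
            lock = threading.Lock()
            done = [False]

            def reply_once(response):
                with lock:          # first reply wins; later ones dropped
                    if done[0]:
                        return
                    done[0] = True
                send_reply(response)

            timer = threading.Timer(self.deadline,
                                    lambda: reply_once(self.default_reply))
            timer.start()
            self.target(message, reply_once)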

While we thought this kind of thing would be useful for specifying and proving 
security properties, we never looked at formal proofs.  (The law of the system 
was specified in Prolog.  We stuck to a simple subset of the language, which 
could probably have been handled easily by a prover.  Still, hardly transparent 
to most programmers!)
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Email and IM are ideal candidates for mix networks

2013-08-29 Thread Jerry Leichter

On Aug 28, 2013, at 11:03 AM, Jonathan Thornburg wrote:

 On Wed, 28 Aug 2013, Jerry Leichter wrote:
 On the underlying matter of changing my public key:  *Why* would I have
 to change it?  It's not, as today, because I've changed my ISP or employer
 or some other random bit of routing information - presumably it's because
 my public key has been compromised.
 
 Maybe it's because you've forgotten the passphrase guarding the
 corresponding private key?
 
 Or because you'd like to do the electronic equivalent of change my name,
 start [this facet of] my electronic life over?
The point of my question was that for different reasons for changing the public 
key, there are different issues and different potential responses.

- If I need to change because the private key was compromised, there's nothing 
I can do about past messages; the question is what I do to minimize the number 
of new messages that will arrive with a now-known-insecure key.  This was the 
case I assumed the previous poster was concerned with.
- If I lost the private key, all previous messages remain secure - except they 
are now, unfortunately, secure against me as well :-(.  New messages sent with 
the key will be unreadable, but if I am in a position to determine who sent 
them, I can tell them to re-send with a different key.  If the system is set up 
so that even return information is encrypted, I'll have to rely on my 
correspondent's realizing they need to re-send via some other mechanism.  (It 
could be through whatever revocation mechanism the system has; it could be 
through mail I send to everyone I correspond with; it could be through a phone 
call, or just by word of mouth.  The sender will have to check the dates and 
realize that some message was sent recently enough that I probably couldn't 
decrypt it.)
- As I outlined things, there was never a reason you couldn't have multiple 
public keys, and in fact it would be a good idea to make traffic analysis 
harder.  Adding a new key for a new facet of your electronic life is trivial.

-- Jerry

 
 -- 
 -- Jonathan Thornburg jth...@astro.indiana.edu
   Dept of Astronomy  IUCSS, Indiana University, Bloomington, Indiana, USA
   There was of course no way of knowing whether you were being watched
at any given moment.  How often, or on what system, the Thought Police
plugged in on any individual wire was guesswork.  It was even conceivable
that they watched everybody all the time.  -- George Orwell, 1984
 ___
 The cryptography mailing list
 cryptography@metzdowd.com
 http://www.metzdowd.com/mailman/listinfo/cryptography

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Separating concerns

2013-08-29 Thread Jerry Leichter
On Aug 28, 2013, at 2:04 PM, Faré wrote:
 My target audience, like Perry's is people who simply can't cope with 
 anything more complex than an email address. For me secure mail has to look 
 feel and smell exactly the same as current mail. The only difference being 
 that sometime the secure mailer will say 'I can't contact that person 
 securely right now because…'
 
 I agree with Perry and Phill that email experience should be
 essentially undisturbed in the normal case, though it's OK to add an
 additional authorization step.
 
 One thing that irks me, though, is the problem of the robust, secure
 terminal: if everything is encrypted, how does one survive the
 loss/theft/destruction of a computer or harddrive? I'm no ignoramus,
 yet I have, several times, lost data I cared about due to hardware
 failure or theft combined with improper backup. How is a total newbie
 to do?
This is a broader problem, actually.  If you've ever had to take care of 
someone's estate, you'll know that one of the problems is contacting all the 
banks, other financial institutions, service providers, and other such parties 
they dealt with in life.  My experience dealing with my father's estate - a 
fairly simple one - was that having the *paper* statements was the essential 
starting point.  (Even so, finding his safe deposit box - I had the unlabeled 
keys - could have been a real pain if my sister didn't remember which bank it 
was at.)  Had he been getting email statements, just finding his mail accounts 
- and getting access to them - could have been a major undertaking.  Which is 
one reason I refuse to sign up for email statements ... just send me the paper, 
thank you.  (This is getting harder all the time.  I expect to start getting 
charged for paper statements any time now.)

Today at least, my executor can, in principle, work with the mail provider to get 
access.  But for truly secure mail, my keys presumably die with me, and it's 
all gone.

You don't even have to consider the ultimate loss situation.  If I'm 
temporarily disabled and can't provide my keys - how can someone take care of 
my bills for me?

We can't design a system that can handle every variation and eventuality, but 
if we're going to design one that we intend to be broadly used, we have to 
include a way to handle the perfectly predictable, if unpleasant to think 
about, aspects of day to day life.  Absolute security *creates* new problems as 
it solves old ones.  There may well be aspects to my life I *don't* want 
revealed after I'm gone.  But there are many things I *do* want to be easily 
revealed; my heirs will have enough to do to clean up after me and move on as 
it is.

So, yes, we have to make sure we have backup mechanisms - as well as key escrow 
systems, much as the term key escrow was tainted by the Clipper experience.
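
One workable shape for such an escrow is threshold secret sharing: split the 
key so that, say, any two of executor, lawyer, and spouse can reconstruct it, 
but no one of them alone can.  A toy sketch of Shamir's scheme over a prime 
field (a real deployment needs authenticated shares, constant-time arithmetic, 
and so on):

    import random

    P = 2**127 - 1  # a Mersenne prime, room for a 16-byte secret

    def split(secret, k, n):
        # Random degree-(k-1) polynomial with the secret as constant term
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        def f(x):
            y = 0
            for c in reversed(coeffs):
                y = (y * x + c) % P
            return y
        return [(x, f(x)) for x in range(1, n + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = split(123456789, k=2, n=3)
    assert recover(shares[:2]) == 123456789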

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Email and IM are ideal candidates for mix networks

2013-08-28 Thread Jerry Leichter

On Aug 28, 2013, at 4:24 AM, danimoth wrote:

 On 27/08/13 at 10:05pm, Christian Huitema wrote:
 Suppose, as in Bitcoin, my email address *is* my public key
 
 You can even use some hash compression tricks so you only need 9 or 10 
 characters to express the address as hash of the public key. 
 
 That works very well, until you have to change the public key.
 
 .. and until someone want to find a public key which shares the first 
 10 digits of the hash with yours.
9 or 10 *characters*, not *digits*.  You need enough bits that, even given the 
birthday paradox, the probability of this occurring is low enough not to 
matter.  Since the birthday paradox gives a 50% probability of collision after 
about the square root of the number of possible values, a 10-character 
identifier starts colliding at about 5 characters' worth of values.  Way too 
low, for digits.  If the characters are full bytes, 2^40 generated public keys 
is plausible, though perhaps uncomfortably small; and if the characters have 
to be printable - then I agree, way too low.

You could use hash compression, but the retained compressed values will have to 
be rather larger.  Say 150 bits worth, at least.
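
Putting rough numbers on it, with the standard birthday approximation 
p ~ 1 - exp(-k^2/2N):

    import math

    def collision_prob(keys, id_bits):
        # p ~ 1 - exp(-k^2 / 2N), with N = 2^id_bits identifier values
        return -math.expm1(-keys**2 / (2 * 2**id_bits))

    for bits, label in ((33, '10 digits'), (80, '10 bytes'),
                        (150, '150 bits')):
        print(label, collision_prob(2**40, bits))

With 2^40 generated keys, 10 digits (~33 bits) is certain to collide, 10 full 
bytes sits around 40%, and 150 bits is negligible.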

On the underlying matter of changing my public key:  *Why* would I have to 
change it?  It's not, as today, because I've changed my ISP or employer or some 
other random bit of routing information - presumably it's because my public key 
has been compromised.  That's a disaster no matter how I identify myself, one 
that's very difficult to recover from - pretty much impossible unless (a) 
there's some way to revoke a key (yes, we've had problems with getting that to 
work even in the current PKI environment, but there's no real alternative); (b) 
I've prepared for the eventuality.  Given (a), I can send out a signed 
revocation message.  (So can the attacker, but presumably he had bigger plans 
for the key than just killing it.)  Given (b), I have pre-shared one or more 
replacement keys that I still trust, and my revocation can name the one to put 
into use.  (Of course, it cannot introduce a brand new key!)  Done this way, my 
response to key compromise is no different from normal key 
 rollover.
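
Preparation (b) can be as simple as publishing a hash commitment to a 
successor key alongside the current one.  A sketch using the pyca/cryptography 
package (the library is my choice, and the message format is invented for 
illustration):

    import hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey)

    def raw(public_key):
        return public_key.public_bytes(serialization.Encoding.Raw,
                                       serialization.PublicFormat.Raw)

    current = Ed25519PrivateKey.generate()
    successor = Ed25519PrivateKey.generate()  # kept offline until needed

    # Published with the current key: a commitment to its successor.
    commitment = hashlib.sha256(raw(successor.public_key())).hexdigest()

    # On compromise: a signed revocation reveals the pre-committed key.
    # Verifiers check the hash, so an attacker holding the old key can
    # kill it but cannot substitute a successor of his own.
    revocation = b'REVOKED; successor=' + raw(successor.public_key())
    signature = current.sign(revocation)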

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why human-readable IDs (was Re: Email and IM are ideal candidates for mix networks)

2013-08-28 Thread Jerry Leichter
A different take on the problem:  Would something built around identity-based 
encryption help here?  It sounds very tempting:  My email address (or any other 
string - say a bitmap of a picture of me) *is* my public key.  The problem is 
that it requires a central server that implicitly has access to my private key. 
There are some proposals around to work around that (e.g., by constructing the 
key from a combination of keys from different key generators).  But we could go 
another route:  I can run a key generator on my own hardware.  That doesn't 
quite solve the problem, since you now need a secure way to find my key 
generator - any generator will happily tell you how to encrypt using 
leich...@lrw.com to generate the public key, and *it* will have the 
corresponding private key.
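
To illustrate just the trust structure - not the pairing-based math; this toy 
substitutes a MAC for real IBE extraction - the trouble is that extraction 
works for *any* identity:

    import hmac, hashlib

    class ToyPKG:
        # Structural stand-in for an IBE private key generator.  The
        # point: whoever holds master_secret can derive *every* user's
        # private key, for any identity string, past or future.
        def __init__(self, master_secret):
            self.master_secret = master_secret

        def extract(self, identity):
            return hmac.new(self.master_secret, identity.encode(),
                            hashlib.sha256).digest()

    pkg = ToyPKG(b'master secret held by the generator')
    my_key = pkg.extract('alice@example.com')
    # ...and the PKG can just as easily run extract() for anyone else.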

I don't quite see how to make this work, but IBE seems like a primitive that 
might be helpful, somehow.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why not the DNS? (was Re: Implementations, attacks on DHTs, Mix Nets?)

2013-08-28 Thread Jerry Leichter
On Aug 28, 2013, at 8:34 AM, Perry E. Metzger wrote:

 On Tue, 27 Aug 2013 23:39:51 -0400 Jerry Leichter leich...@lrw.com
 wrote:
 It's not as if this isn't a design we have that we know works:
 DNS.
Read what I said:  There's a *design* that works.

I never suggested *using DNS* - either its current physical instantiation, or 
even necessarily the raw code.  In fact, I pointed out some of the very 
problems you mention.

What defines the DNS model - and is in contrast to the DHT model - is:

- Two basic classes of participants, those that track potentially large amounts 
of data and respond to queries and those that simply cache for local use;
- Caching of responses for authoritative-holder-limited amounts of time to 
avoid re-querying;
- A hierarchical namespace and a corresponding hierarchy of caches.
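
The second point is simple but does the heavy lifting for scale - a sketch of 
the caching rule:

    import time

    class TTLCache:
        # Hold an answer only as long as the authoritative holder said
        # it may be held; after that, re-query.
        def __init__(self):
            self._store = {}

        def get(self, name):
            entry = self._store.get(name)
            if entry and entry[0] > time.time():
                return entry[1]
            self._store.pop(name, None)  # expired or never present
            return None

        def put(self, name, value, ttl_seconds):
            self._store[name] = (time.time() + ttl_seconds, value)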

DNS and DNSSEC as implemented assume a single hierarchy, and they map the 
hierarchy to authority.  These features are undesirable and should be avoided.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

