Re: [Cryptography] NSA and cryptanalysis

2013-09-01 Thread Perry E. Metzger
On Sat, 31 Aug 2013 17:00:01 -0400 John Kelsey crypto@gmail.com
wrote:
 If I had to bet, I'd bet on bad rngs as the most likely source of a
 breakthrough in decrypting lots of encrypted traffic from different
 sources. 

This seems by far the most probable conclusion. Note, for example,
Heninger et al.'s recent work on the Taiwanese national smartcards. A
discovery that some commonly used randomness sources are far less
random than supposed could dramatically lower the work factor of an
otherwise brute-force attack.

That said, we simply can't know, and I think excessive speculation on
the basis of no actual concrete information isn't that productive.

Perry
-- 
Perry E. Metzger  pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] NSA and cryptanalysis

2013-09-01 Thread Jerry Leichter
On Sep 1, 2013, at 2:36 AM, Peter Gutmann wrote:

 John Kelsey crypto@gmail.com writes:
 
 If I had to bet, I'd bet on bad rngs as the most likely source of a
 breakthrough in decrypting lots of encrypted traffic from different sources.
 
 If I had to bet, I'd bet on anything but the crypto.  Why attack when you can
 bypass [1].
Well, sure.  But ... I find it hard to be quite so confident.

In practical terms, the vast majority of encrypted data in the world, whether 
in motion or at rest, is protected by one of two algorithms:  RSA and AES.  In 
some cases, RSA is used to encrypt AES keys, so an RSA break amounts to a 
bypass of AES.  If you want to consider signatures and authentication, you come 
back to RSA again, and add SHA-1.

This is not to say there aren't other techniques out there, or that new ones 
aren't being developed.  But to NSA it's clearly a game of numbers - and any 
kind of wedge into either of just two algorithms would expose huge amounts of 
traffic to interception.

Meanwhile, on the authentication side, Stuxnet provided evidence that the 
secret community *does* have capabilities (to conduct collision attacks) 
beyond those known to the public - capabilities sufficient to produce fake 
Windows updates.  And recent evidence elsewhere (e.g., using a bug in the 
version of Firefox in the Tor Browser Bundle) has shown an interest and ability 
to actively attack systems.  (Of course, being able to decrypt information 
without an active attack is always the ideal, as it leaves no traces.)

I keep seeing statements that modern cryptographic algorithms are secure, 
don't worry - but if you step back a bit, it's really hard to justify such 
statements.  We *know*, in a sense, that RSA is *not* secure:  Advances in 
factoring have come faster than expected, so recommended key sizes have also 
been increasing faster than expected.  Most of the world's sites will always be 
well behind the recommended sizes.  Yes, we have alternatives like ECC, but 
they don't help the large number of sites that don't use them.

Meanwhile, just what evidence do we really have that AES is secure?  It's 
survived all known attacks.  Good to know - but consider that until the 
publication of differential cryptanalysis, the public state of knowledge 
contained essentially *no* generic attacks newer than the WW II era attacks on 
Enigma.  DC, and to a lesser degree linear cryptanalysis not long after, 
rendered every existing block cipher (other than DES, which was designed with 
secret knowledge of DC) obsolete in one stroke.  There's been incremental 
progress since, but no breakthrough of a similar magnitude - in public.  Is 
there really anything we know about AES that precludes the possibility of such 
a breakthrough?

There's a fundamental question one should ask in designing a system:  Do you 
want to protect against targeted attacks, or do you want to protect against 
broad fishing attacks?

If the former, the general view is that if an organization with the resources 
of the NSA wants to get in, they will - generally by various kinds of bypass 
mechanisms.

For the latter, the cryptographic monoculture *that best practices insist 
on* - use standard protocols, algorithms and codes; don't try to invent or even 
implement your own crypto; design according to Kerckhoffs's principle that only 
the key is secret - is exactly the *wrong* advice:  You're allowing the 
attacker to amortize his attacks on you with attacks on everyone else.

If I were really concerned about my conversations with a small group of others 
being intercepted as part of dragnet operations, I'd design my own small 
variations on existing protocols.  Mix pre-shared secrets into a DH exchange to 
pick keys.  Use simple steganography to hide a signal in anything being signed 
- if something shows up signed without that signal, I'll know (a) it's not 
valid; (b) someone has broken in.  Modify AES in some way - e.g., insert an XOR 
with a separate key between two rounds.  A directed attack would eventually 
break all this, but generic attacks would fail.  (You could argue that the 
failure of generic attacks would cause my connections to stand out and thus 
draw attention.  This is, perhaps, true - it depends on the success rate of the 
generic attacks, and on how many others are playing the same games I am.  
There's no free lunch.)
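Jerry's "mix pre-shared secrets into a DH exchange" idea can be sketched in a
few lines.  This is an illustrative sketch, not a vetted protocol: the group is
the 1024-bit Second Oakley Group from RFC 2409 (chosen only to keep the example
short; a real deployment would pick a larger group or ECDH), and the HMAC-based
derivation is a stand-in for a proper KDF.

```python
import hashlib
import hmac
import secrets

# 1024-bit MODP prime from RFC 2409 (Second Oakley Group), generator 2.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE65381FFFFFFFFFFFFFFFF",
    16)
G = 2

def dh_keypair():
    x = secrets.randbits(256)       # short private exponent
    return x, pow(G, x, P)          # (private, public)

def session_key(my_priv, their_pub, psk):
    shared = pow(their_pub, my_priv, P)
    ikm = shared.to_bytes((P.bit_length() + 7) // 8, "big")
    # The PSK keys the extraction step, so an eavesdropper who somehow
    # recovers the DH shared secret still cannot derive the session key.
    return hmac.new(psk, ikm, hashlib.sha256).digest()

psk = b"secret known only to the small group"
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
assert session_key(a_priv, b_pub, psk) == session_key(b_priv, a_pub, psk)
```

A generic harvesting attack that only knows how to break plain DH traffic
would still face the unknown PSK here, which is exactly the amortization
argument above.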

It's interesting that what little evidence we have about NSA procedures - 
from the design of Clipper to Suite B - hints that they deploy multiple 
cryptosystems tuned to particular needs.  They don't seem to believe in a 
monoculture - at least for themselves.
-- Jerry



Re: [Cryptography] Functional specification for email client?

2013-09-01 Thread Ray Dillinger

On 08/31/2013 02:53 PM, John Kelsey wrote:

I think it makes sense to separate out the user-level view of what happens.


True.  I shouldn't have muddied up user-side view with notes about
packet forwarding, mixing, cover traffic, and domain lookup, etc.
Some users (I think) will want to know that much in general terms,
in order to have some basis to evaluate/understand the security
promises, but it's not part of the interface. Only the serious crypto
wonks will want to know in more detail.

{note: how strange! my spell checker thinks crypto is a typo, but
has no problem with wonks!}


If something arrives in my inbox with a from address of nob...@nowhere.com,
then I need to know that this means that's who it came from.  If I mail
something to nob...@nowhere.com, then I need to know that only the owner
of that address will be able to read it.


As I consider it, I'm thinking even that promise needs to be amended
to include the possibility of leaks on the recipient's side: for example,
email forwarding, or unencrypted mail archives found by hackers.


My intuition is that binding the security promises to email addresses
instead of identities is the right way to proceed.  I think this is
something most people can understand, and more importantly it's
something we can do with existing technology and no One True Name
Authority In The Sky handing out certs.


Eggs Ackley.  I believe every user in the world is familiar at this point
with the idea of an email alias, and that the concept maps reasonably
well to "holder of a key" for crypto purposes.  To promise any more
than that about identity requires centralized infrastructure that
cannot really exist in a pure P2P system.


One side issue here is that this system's email address space needs
to somehow coexist with the big wide internet's address space.  It
will really suck if someone else can get my gmail address in the
secure system, but it will also be confusing if my inbox has a
random assortment of secure and insecure emails, and I have to do
some extra step to know which is which.


If you want to gateway secure mail into the same bucket with
insecure mail, I guess you can do that; I would far rather have
separate instances of mail clients that do not mix types: e.g.,
this is Icedove/P2P, and this is Icedove/SMTP, and they are not
expected to be able to interchange messages without some gateway.

That said, everything you need to gateway secure mail into an SMTP system
is easy to construct.  Consider if the peer mail system has an
address structure like name**domain:

You have a machine with a DNS/SMTP address like secure.peermail.com
to reserve the name and provide bounce messages that prompt
people to get a peer mail client and send a message in that
client to name**domain for whatever address someone tried to
reply to.  Mail imported from the peer mail client, with its
name**domain address format, could show in an SMTP client as
name**dom...@secure.peermail.com.

Alternatively, or additionally, you could have a machine with
an address like insecure.peermail.com that actually does
protocol translation and forwards SMTP mail onto the secure
network and vice versa, and allow peer mail users to choose
which machine handles their SMTP-translated address. But
this has the same problems as Lavabit and Silent Circle,
which recently shut down under duress.

Dual-protocol mail clients could use name**domain on the
peer network directly.  Mail imported from the SMTP network
on a dual-protocol client or on a peer mail client could
appear as n...@address.com**INSECURE-SMTP or similar, and on
the dual-protocol client a direct reply would prompt use of
the insecure protocol after a warning prompt.  On a secure-
protocol client it would simply prompt the user to use an
insecure mail client, same as the bounce message on the
other side.
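The address mapping described above can be sketched directly.  The gateway
host secure.peermail.com comes from the text; the exact escaping rules (and
the lack of collision handling) are assumptions for illustration only:

```python
# Hypothetical gateway-domain constant from the scheme sketched above.
GATEWAY = "secure.peermail.com"

def peer_to_smtp(peer_addr):
    """Map a peer address like 'name**domain' to its SMTP alias."""
    assert "**" in peer_addr
    return peer_addr + "@" + GATEWAY

def smtp_to_peer(smtp_addr):
    """Recover the peer address from its gatewayed SMTP form."""
    local, _, host = smtp_addr.rpartition("@")
    assert host == GATEWAY and "**" in local
    return local

def smtp_into_peer(smtp_addr):
    """Show an ordinary SMTP sender inside the peer network, tagged so
    clients can warn before offering an insecure reply."""
    return smtp_addr + "**INSECURE-SMTP"

assert peer_to_smtp("name**domain") == "name**domain@secure.peermail.com"
assert smtp_to_peer("name**domain@secure.peermail.com") == "name**domain"
```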

I see the Big Wide Internet's address space as a simple tool to
implement it, not as a conflicting thing that needs to be reconciled.

The domain lookup as I envision it would associate mail peer email
addresses with a tuple of IPv6 address and public key.  The public
keys are stable; the IPv6 address may appear and disappear (and may
be different each time) as the user connects and disconnects from the
system.  The presumption is that the mail peer daemon on the local
machine sends a routing update message when starting up, and
possibly another (deleting routing information) in an orderly
shutdown.
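As a sketch of the directory semantics just described - the public key is
pinned at registration, while the IPv6 address may change with each routing
update - consider the following.  The sign/verify pair is an HMAC stand-in
for the public-key signatures a real mail peer daemon would use:

```python
import hashlib
import hmac

# HMAC stand-ins for real public-key signatures; only the directory
# semantics (pinned key, mutable address) matter in this sketch.
def sign(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key, msg, sig):
    return hmac.compare_digest(sign(key, msg), sig)

class Directory:
    """Maps each peer address to [pinned public key, current IPv6]."""
    def __init__(self):
        self.entries = {}

    def register(self, addr, pubkey, ipv6=None):
        assert addr not in self.entries, "address already taken"
        self.entries[addr] = [pubkey, ipv6]

    def routing_update(self, addr, ipv6, sig):
        pubkey, _ = self.entries[addr]
        msg = (addr + "|" + str(ipv6)).encode()
        if not verify(pubkey, msg, sig):
            return False               # reject: not signed by the pinned key
        self.entries[addr][1] = ipv6   # ipv6=None models orderly shutdown
        return True

    def lookup(self, addr):
        return tuple(self.entries[addr])

d = Directory()
key = b"long-term key of the address holder"
d.register("name**domain", key)
up = "name**domain|2001:db8::1".encode()
assert d.routing_update("name**domain", "2001:db8::1", sign(key, up))
forged = "name**domain|2001:db8::2".encode()
assert not d.routing_update("name**domain", "2001:db8::2",
                            sign(b"attacker key", forged))
assert d.lookup("name**domain") == (key, "2001:db8::1")
```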

As stated earlier, the system makes no effort to actively hide the
machine where an email address is located.  It could be a machine
designated to receive and keep mail for that address until it gets
a private address update that tells it where to send the messages
but which is not propagated; even in that case, the designated
maildrop machine if not controlled by the holder of the address
cannot be considered to hold any real secrets.

Routing update messages propagate across the network of relevant domain
servers, which check the sig on the update against the 

Re: [Cryptography] NSA and cryptanalysis

2013-09-01 Thread John Kelsey
What I think we are worried about here are very widespread automated attacks, 
and they're passive (data is collected and then attacks are run offline).  All 
that constrains what attacks make sense in this context.  You need attacks that 
you can run in a reasonable time, with minimal requirements on the amount of 
plaintext or the specific values of plaintext.  The perfect example of an 
attack that works well here is a keysearch on DES; another example is the 
attack on WEP.

All the attacks we know of on reduced-round AES and AES-like ciphers require a 
lot of chosen plaintexts, or related key queries, or both.  There is no way to 
completely rule out some amazing new break of AES that makes the cipher fall 
open and drop your plaintext in the attacker's lap, but I don't see anything at 
all in the literature that supports that fear, and there are a *lot* of smart 
people trying to find new ways to attack or use AES-like designs.  So I put 
this at the bottom of my list of likely problems.

Some attacks on public key systems also require huge numbers of encryptions or 
specially formed ciphertexts that get sent to the target for decryption--we can 
ignore those for this discussion.  So we're looking at trying to factor an RSA 
modulus or to examine a lot of RSA encryptions to a particular public key (and 
maybe some signatures from that key) and try to get somewhere from that.  I 
don't know enough about the state of the art in factoring or attacking RSA to 
have a strong intuition about how likely this is.  I'm pretty skeptical, 
though--the people I know who are experts in this stuff don't seem especially 
worried.  However, a huge breakthrough in factoring would make for workable 
passive attacks of this kind, though it would have to be cheap enough to use to 
break each user's public key separately.  

Finally, we have the randomness sources used to generate RSA and AES keys.  
This, like symmetric cryptanalysis, is an area I know really well.  And my 
intuition (backed by plenty of examples) is that this is probably the place 
that is most likely to yield a practical offline attack of this kind.  When 
someone screws up the implementation of RSA or AES, they may at least notice 
some interoperability problems.  They will never notice this when they screw up 
their implementation so that the RNG only gets 32 bits of entropy before generating 
the user's RSA keypair.  And if I know that your RSA key is likely to have one 
of these 2^{32} factors, I can make a passive attack work really well.  
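The low-entropy-seed attack is easy to demonstrate.  The sketch below shrinks
the numbers so it runs in seconds - a 12-bit seed space and toy 16-bit primes
instead of 2^32 seeds and real RSA primes - but the attacker's strategy is
exactly the passive one described: re-run key generation for every possible
seed until the observed public modulus appears.

```python
import random

def is_prime(n):
    if n < 2 or n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def gen_prime(rng, bits=16):
    """Toy prime generation driven entirely by the (bad) RNG."""
    while True:
        n = rng.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_prime(n):
            return n

def gen_modulus(seed):
    rng = random.Random(seed)         # the seed is the ONLY entropy
    return gen_prime(rng) * gen_prime(rng)

SEED_BITS = 12                        # 2^12 here; 2^32 in the scenario above
victim_seed = 0xABC                   # unknown to the attacker
public_n = gen_modulus(victim_seed)   # all the attacker ever observes

# Passive attack: regenerate the keypair for every possible seed.
recovered = next(s for s in range(2 ** SEED_BITS)
                 if gen_modulus(s) == public_n)
assert gen_modulus(recovered) == public_n   # key (and factors) recovered
```

No interaction with the victim is needed, which is what makes this the
perfect fit for the wide, passive collection scenario above.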

Comments?

--John


Re: [Cryptography] Thoughts about keys

2013-09-01 Thread Ben Laurie
On 25 August 2013 21:29, Perry E. Metzger pe...@piermont.com wrote:

 [Disclaimer: very little in this seems deeply new, I'm just
 mixing it up in a slightly different way. The fairly simple idea I'm
 about to discuss has germs in things like SPKI, Certificate
 Transparency, the Perspectives project, SSH, and indeed dozens of
 other things. I think I even suggested a version of this exact idea
 several times in the past, and others may have as well. I'm not going
 to pretend to make claims of real originality here, I'm more
 interested in thinking about how to get such things quite widely
 deployed, though it would be cool to hear about prior art just in case
 I decide to publish a tech report.]

 One element required to get essentially all messaging on the
 Internet end to end encrypted is a good way to find out what people's
 keys are.

 If I meet someone at a reception at a security conference, they might
 scrawl their email address (al...@example.org) for me on a cocktail
 napkin.

 I'd like to be able to then write to them, say to discuss their
 exciting new work on evading censorship of mass releases of stolen
 government documents using genetically engineered fungal spores to
 disseminate the information in the atmosphere worldwide.

 However, in our new "everything is always encrypted" world, I'll be
 needing their encryption key, and no one can remember something as
 long as that.

 So, how do I translate al...@example.org into a key?

 Now, the PGP web-of-trust model, which I think is broken, would have
 said "check a key server, see if there's a reasonable trust path
 between you and Alice."

 I have an alternative suggestion.

 Say that we have a bunch of (only vaguely) trustworthy organizations
 out in the world. (They can use crypto based log protocols of various
 kinds to make sure you don't _really_ need to trust them, but for the
 moment that part doesn't matter.)

 Say that Alice, at some point in the past, sent an email message,
 signed in her own key, to such an organization's key server, saying in
 effect "this is al...@example.org's key".

 At intervals, the trustworthy organization (and others like it) can
 send out email messages to Alice, encrypted in said key, saying "Hi
 there! Please reply with a message containing this magic cookie,
 encrypted in our key, signed in yours."

 If a third party needing the key for al...@example.org queries the
 vaguely trusted server, it will then give them information like "For
 the past six years, we've been sending al...@example.org emails every
 couple of weeks asking her to reply to demonstrate she controls a
 particular public key, and she always has -- new keys have always been
 signed in the old one, too. Here's a log, including various sorts of
 widely witnessed events and hash chains so that if we were lying about
 this we had to be planning to lie about it for a very long time."
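The hash-chained log mentioned here can be sketched as an append-only list in
which each entry commits to its predecessor; the field names and JSON encoding
are assumptions, and "user@example.org" below is a placeholder rather than the
address in the text:

```python
import hashlib
import json

def entry_hash(entry):
    return hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(log, record):
    prev = entry_hash(log[-1]) if log else "0" * 64
    log.append({"prev": prev, "record": record})

def verify_chain(log):
    return all(log[i]["prev"] == entry_hash(log[i - 1])
               for i in range(1, len(log)))

# Six years of fortnightly challenge/response records for one address.
log = []
for period in range(6 * 26):
    append(log, {"period": period, "addr": "user@example.org",
                 "challenge_answered": True})
assert verify_chain(log)

# Rewriting any past entry breaks every later link in the chain --
# a lie would have had to be planned from the start.
log[10]["record"]["challenge_answered"] = False
assert not verify_chain(log)
```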

 Now of course, in the real world, who wants to go through the effort
 of hand replying to query messages to establish a key over time?
 Instead, Alice has some actually trusted software running on her
 computer at home.

 She can either leave it to automatically do IMAP queries against her
 mailbox (which could even be GMail or what have you) and reply on her
 behalf, or it could ask her to unlock her key while she's at home in
 the morning having her coffee. However, I think the former is actually
 preferable. We'd rather have an imperfect system that is effortless to
 use but can be bypassed by physically breaking in to someone's home.
 (After all if you did that you probably can bug Alice's hardware
 anyway.)

 Alice probably also needs to make sure someone isn't spoofing her
 replies by accessing her IMAP box and replying for her (using a key
 known to the attacker but presumably not to Alice) and then deleting
 the query, but the mere absence of periodic pings from the trusted
 party may be enough to create suspicion, as might doing one's own
 queries against the trusted parties and noticing that the key isn't
 your own.

 Presumably, of course, there should be a bunch of such servers out
 there -- not so many that the traffic becomes overwhelming, but not so
 few that it is particularly feasible to take the system off the
 air. (One can speculate about distributed versions of such systems as
 well -- that's not today's topic.)

 So, this system has a bunch of advantages:

 1) It doesn't require that someone associated with administrators of
 the domain name you're using for email has to cooperate with deploying
 your key distribution solution. Alice doesn't need her managers to agree
 to let her use the system -- her organization doesn't even need to
 know she's turned it on. Yet, it also doesn't allow just anyone to
 claim to be al...@example.org -- to put in a key, you have to show you
 can receive and reply to emails sent to the mailbox.

 2) You know that, if anyone is impersonating Alice, they had to have
 been planning it for a while. In general, this is 

Re: [Cryptography] NSA and cryptanalysis

2013-09-01 Thread Perry E. Metzger
On Sun, 1 Sep 2013 07:11:06 -0400 Jerry Leichter leich...@lrw.com
wrote:
 Meanwhile, just what evidence do we really have that AES is
 secure?

The fact that the USG likes using it, too.

That's also evidence for elliptic curve techniques btw.

Perry
-- 
Perry E. Metzger  pe...@piermont.com


Re: [Cryptography] NSA and cryptanalysis

2013-09-01 Thread Jerry Leichter

On Sep 1, 2013, at 2:11 PM, Perry E. Metzger wrote:

 On Sun, 1 Sep 2013 07:11:06 -0400 Jerry Leichter leich...@lrw.com
 wrote:
 Meanwhile, just what evidence do we really have that AES is
 secure?
 
 The fact that the USG likes using it, too.
We know they *say in public* that it's acceptable.  But do we know what they 
*actually use*?

 
 That's also evidence for elliptic curve techniques btw.
Same problem.
-- Jerry

 Perry
 -- 
 Perry E. Metzger  pe...@piermont.com



Re: [Cryptography] NSA and cryptanalysis

2013-09-01 Thread Perry E. Metzger
On Sun, 1 Sep 2013 16:33:56 -0400 Jerry Leichter leich...@lrw.com
wrote:
 
 On Sep 1, 2013, at 2:11 PM, Perry E. Metzger wrote:
 
  On Sun, 1 Sep 2013 07:11:06 -0400 Jerry Leichter
  leich...@lrw.com wrote:
  Meanwhile, just what evidence do we really have that AES is
  secure?
  
  The fact that the USG likes using it, too.
 We know they *say in public* that it's acceptable.  But do we know
 what they *actually use*?

We know what they spec for use by the rest of the US government in
Suite B.

http://www.nsa.gov/ia/programs/suiteb_cryptography/

  AES with 128-bit keys provides adequate protection for classified
  information up to the SECRET level. Similarly, ECDH and ECDSA using
  the 256-bit prime modulus elliptic curve as specified in FIPS PUB
  186-3 and SHA-256 provide adequate protection for classified
  information up to the SECRET level. Until the conclusion of the
  transition period defined in CNSSP-15, DH, DSA and RSA can be used
  with a 2048-bit modulus to protect classified information up to the
  SECRET level.

  AES with 256-bit keys, Elliptic Curve Public Key Cryptography using
  the 384-bit prime modulus elliptic curve as specified in FIPS PUB
  186-3 and SHA-384 are required to protect classified information at
  the TOP SECRET level. Since some products approved to protect
  classified information up to the TOP SECRET level will only contain
  algorithms with these parameters, algorithm interoperability between
  various products can only be guaranteed by having these parameters as
  options.

We clearly cannot be absolutely sure of what they actually use, but
we know what they procure commercially. If you feel this is all a big
disinformation campaign, please feel free to give evidence for that. I
certainly won't exclude the possibility, but I find it unlikely.

Perry
-- 
Perry E. Metzger  pe...@piermont.com