Re: [Cryptography] Is DNSSEC is really the right solution? [djb video]

2013-09-09 Thread Paul Wouters

On Sun, 8 Sep 2013, Daniel Cegiełka wrote:


Subject: Re: [Cryptography] Opening Discussion: Speculation on BULLRUN



http://www.youtube.com/watch?v=K8EGA834Nok

Is DNSSEC is really the right solution?


That is the most unprofessional talk I've seen djb give. He bluffed a
bunch of fanboys with no knowledge of DNSSEC into believing it was bad. His claims
about caching, amplification, etc. were completely wrong, as Kaminsky and I
spent the days after that CCC talk pointing out.

http://dankaminsky.com/2011/01/05/djb-ccc/
http://dankaminsky.com/2011/01/07/cachewars/

He seems to mostly engage in DNSSEC bashing to advertise his curve25519,
DNSCurve and his "curve25519 the entire internet" ideas.

The easiest number to debunk was the DNS cache hit rate. The day after
his talk I collected statistics from the CCC event itself, a large Dutch
ISP and one of the largest American ISPs; the numbers were above 80%
at minimum, and close to 99% for the DNS cache at the CCC itself.

His suggestions to pollute port 53 with non-DNS traffic, and to kill DNS
data authentication and replace it with transport-only security, have
always been rejected by the community at large as insane. His proposal
to DDoS all DNS servers by making them perform crypto isn't very
realistic for deployments either.

DNSSEC is the result of a lot of fundamental design goals such as 100%
backwards compatibility, data authenticity, offline crypto signing,
crypto agility, not bypassing the cache infrastructure, etc etc.

Do I trust curve25519 more than the NIST curves? Yes I do. Do I think
djb should design internet protocols? No.

DNSSEC is a very secure and reasonable compromise for all the
requirements various parties had to secure the DNS. If you believe that
is not the case, please speak out with verifiable technical arguments,
and not with video hype. And I'll gladly take the time to explain
things.

Paul
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-09 Thread Christian Huitema
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

 I am certainly not going to advocate Internet-scale KDC. But what
 if the application does not need to scale more than a network of 
 friends?

 A thousand times yes.

There is however a little fly in that particular ointment. Sure, we can develop 
systems that manage pairwise keys, store them safely, and share them between several 
user devices. But what about PFS? Someday, the pairwise key will be 
compromised, and the NSA will go back to the archives to decrypt everything. We 
could certainly devise a variant of DH that uses the pairwise key to verify the 
integrity of the session keys, but that brings the public key technology back 
into the picture. Maybe I am just ignorant, but I don't know how to get PFS using 
just symmetric key algorithms. Does someone know better?
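A minimal sketch of that hybrid -- an ephemeral DH exchange whose public values are authenticated with the long-term pairwise key, so the pairwise key only authenticates while the discarded ephemeral keys provide the forward secrecy -- assuming Python with the pyca/cryptography package; the framing and function names are illustrative, not an existing protocol, and it still pulls public key technology (X25519) back into the picture:

```python
# Sketch: ephemeral X25519 exchange authenticated by a pre-shared pairwise
# symmetric key. Later compromise of the pairwise key does not reveal past
# session keys, because the ephemeral private keys are thrown away.
import hmac, hashlib
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

RAW = dict(encoding=serialization.Encoding.Raw,
           format=serialization.PublicFormat.Raw)

def make_offer(pairwise_key: bytes):
    """Generate an ephemeral key pair and MAC its public half."""
    eph = X25519PrivateKey.generate()
    pub = eph.public_key().public_bytes(**RAW)
    tag = hmac.new(pairwise_key, pub, hashlib.sha256).digest()
    return eph, pub, tag

def accept_offer(pairwise_key, my_eph, their_pub, their_tag):
    """Check the peer's MAC, then derive a session key from the DH secret."""
    expected = hmac.new(pairwise_key, their_pub, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, their_tag):
        raise ValueError("peer's ephemeral key fails authentication")
    shared = my_eph.exchange(X25519PublicKey.from_public_bytes(their_pub))
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"session key").derive(shared)

# Both sides already hold pairwise_key (placeholder value here).
pairwise_key = bytes(32)
a_eph, a_pub, a_tag = make_offer(pairwise_key)
b_eph, b_pub, b_tag = make_offer(pairwise_key)
assert accept_offer(pairwise_key, a_eph, b_pub, b_tag) == \
       accept_offer(pairwise_key, b_eph, a_pub, a_tag)
```

A real protocol would also bind the parties' identities, roles and a transcript into the MAC and the KDF; this only shows the forward-secrecy skeleton.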

- -- Christian Huitema

-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.20 (MingW32)
Comment: Using gpg4o v3.1.107.3564 - http://www.gpg4o.de/
Charset: utf-8

iQEcBAEBAgAGBQJSLU6uAAoJELba05IUOHVQ32QH/jVt7j/FpZXc7G07fvfu8/ij
4h53Vn0dfNZmX+XLNX3yILizSz712bGEGWVnq7nPh1IB9JEbYu0lFJxzXbZB6Cv1
Owu+QKnJ1NgctggwKkaCwOELFPNEZ1amzu3f+Haxrq9knv/H2/mykpLPyRR0IU8T
8KFoud1rg7nffIW+flkEGVGgcExibjXOd8H7+/q6Mu6u4/aVJ4O3m2c1sv0kLhl3
gPIeoD8LlRBERUslkqF/jEv6PVgByLD8D94/f7wJ34e9RZQNILPH2dGdck02G/vK
IimsR7K/9cB0KhNnIIqCnmxYSvm7KU97h6ejm5lyyZPTtnoDPjfEU+0w7vl5uMs=
=ze/o
-END PGP SIGNATURE-

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] [cryptography] Random number generation influenced, HW RNG

2013-09-09 Thread David Johnston

On 9/8/2013 4:27 AM, Eugen Leitl wrote:

- Forwarded message from James A. Donald jam...@echeque.com -

Date: Sun, 08 Sep 2013 08:34:53 +1000
From: James A. Donald jam...@echeque.com
To: cryptogra...@randombit.net
Subject: Re: [cryptography] Random number generation influenced, HW RNG
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:17.0) Gecko/20130801 
Thunderbird/17.0.8
Reply-To: jam...@echeque.com

On 2013-09-08 3:48 AM, David Johnston wrote:

Claiming the NSA colluded with intel to backdoor RdRand is also to
accuse me personally of having colluded with the NSA in producing a
subverted design. I did not.

Well, since you personally did this, would you care to explain the
very strange design decision to whiten the numbers on chip, and not
provide direct access to the raw unwhitened output.
#1 So that the DRBG state remains secret from things trying to discern that 
state for purposes of predicting past or future outputs of the DRBG.


#2 So that one thread cannot undermine a second thread by putting the 
DRNG into a broken mode. There is only one DRNG, not one per core or one 
per thread. Having one DRNG per thread would be one of the many 
preconditions necessary before this could be contemplated.


#3 Any method of access is going to have to be documented and supported and 
maintained as a constant interface across many generations of chip. We 
don't throw that sort of thing into the PC architecture without a good 
reason.


 #4 Obviously there are debug modes to access raw entropy source 
output. The privilege required to access those modes is the same debug 
access necessary to undermine the security of the system. This only 
happens in very controlled circumstances.




A decision that even assuming the utmost virtue on the part of the
designers, leaves open the possibility of malfunctions going
undetected.

That's what BIST is for. It's a FIPS and SP800-90 requirement.


That is a question a great many people have asked, and we have not
received any answers.

Yes they have. I've answered this same question multiple times.


Access to the raw output would have made it possible to determine that
the random numbers were in fact generated by the physical process
described, since it is hard and would cost a lot of silicon to
simulate the various subtle offwhite characteristics of a well
described actual physical process.
Access to the raw output would have been a massive can of worms. The 
statistical properties of the entropy source are easy to model and easy 
to test online in hardware. They are described in the CRI paper if you 
want to read them. That's a necessary part of a good entropy source. If 
you can't build an effective online test in hardware then the entropy 
source is not fit for purpose.
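As a concrete (if simplified) picture of what such an online test looks like, here is a software sketch of a repetition-count health test in the spirit of SP 800-90B -- not Intel's actual on-chip logic, and the entropy estimate and cutoff below are illustrative assumptions:

```python
# Illustrative repetition-count health test: flag the source as unhealthy
# if the same raw sample repeats more often than a healthy source plausibly
# would (false-alarm rate ~2^-alpha_exp). Not the DRNG's real hardware test.
import math, os

class RepetitionCountTest:
    def __init__(self, min_entropy_per_sample, alpha_exp=20):
        self.cutoff = 1 + math.ceil(alpha_exp / min_entropy_per_sample)
        self.last, self.run = None, 0

    def feed(self, sample):
        """Return False once a run of identical samples reaches the cutoff."""
        self.run = self.run + 1 if sample == self.last else 1
        self.last = sample
        return self.run < self.cutoff

# Example: 8-bit raw samples assessed at ~0.5 bits of min-entropy each
# (cutoff = 41 repeats); os.urandom stands in for the raw source here.
test = RepetitionCountTest(min_entropy_per_sample=0.5)
print("source healthy:", all(test.feed(b) for b in os.urandom(65536)))
```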


DJ



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Market demands for security (was Re: Opening Discussion: Speculation on BULLRUN)

2013-09-09 Thread Peter Gutmann
Phillip Hallam-Baker hal...@gmail.com writes:

People buy guns despite statistics that show that they are orders of
magnitude more likely to be shot with the gun themselves rather than by an
attacker.

Some years ago NZ abolished its offensive (fighter) air force (the choice was 
either to buy all-new, meaning refurbished, jets at a huge cost or abolish the 
capacity).  Lots of people got very upset about this, because it was leaving 
us defenceless.

(For people who are wondering why this position is silly, have a look at the
position of New Zealand on a world map.  The closest country with direct
access to us (in other words that wouldn't have to go through other countries
on the way here) is Peru, and they don't have any aircraft carriers).

Peter.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-09 Thread ianG

On 9/09/13 06:42 AM, James A. Donald wrote:

On 2013-09-09 11:15 AM, Perry E. Metzger wrote:

Lenstra, Heninger and others have both shown mass breaks of keys based
on random number generator flaws in the field. Random number
generators have been the source of a huge number of breaks over time.

Perhaps you don't see the big worry, but real world experience says
it is something everyone else should worry about anyway.


Real world experience is that there is nothing to worry about /if you do
it right/.  And that it is frequently not done right.

When you screw up AES or such, your test vectors fail, your unit test
fails, so you fix it, whereas if you screw up entropy, everything
appears to work fine.



Precisely.


It is hard, perhaps impossible, to have a test suite that makes sure that
your entropy collection works.

One can, however, have a test suite that ascertains that on any two runs
of the program, most items collected for entropy are different except
for those that are expected to be the same, and that on any run, any
item collected for entropy does make a difference.

Does your unit test check your entropy collection?



When I audited the root key ceremony process for CAcert, I worried a 
fair bit about randomness.  I decided the entropy was untestable 
(therefore unauditable).


So I wrote a process such that several people would bring their own 
entropy source.  E.g., in the one event, 3 sources were used, by 
independent people on independent machines:


  * I used a sha-stream of a laptop camera pointed at dark paper [0]
  * Teus used a sound card driver [1]
  * OpenSSL's RNG.

The logic was that as long as one person was honest and had a good 
source, and as long as our mixing was verifiable, the result would be good.


Then, I wrote a small C program to mix it [2];  as small as possible so 
a room full of techies could spend no more than 10 minutes checking it 
on the day [3].


The output of this was then fed into the OpenSSL script to do the root 
key.  (I'm interested if anyone can spot a flaw in this concept.)
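To make the mixing step concrete, here is a sketch of the same idea (in Python rather than the original small C program, with placeholder file names): XOR the independent streams byte-by-byte -- with ^, per footnote [2] -- and hash the result before use, per the advice in footnote [3]. As long as at least one input stream is good and independent, the mix is at least as good.

```python
# Sketch of the ceremony's mixing step: XOR the independent entropy streams,
# then hash the mix. File names below are placeholders for the three sources.
import hashlib
from functools import reduce

def mix_sources(paths, n_bytes):
    streams = [open(p, "rb").read(n_bytes) for p in paths]
    assert all(len(s) == n_bytes for s in streams), "short entropy file"
    mixed = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*streams))
    return hashlib.sha512(mixed).digest()   # sha the output before use

seed = mix_sources(["camera.bin", "soundcard.bin", "openssl_rng.bin"], 1 << 16)
```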




iang



[0] This idea is from Jon Callas, from memory; the idea is that the lack of 
light and lack of discrimination between pixels drives the photocells 
into a quantum uncertainty state.

[1] John Denker's sound card driver.
[2] As an amusing sidenote, I accidentally used | to mix the bytes, not 
^.  My eyeball tests passed with 2 sources, but at 3 sources it was 
starting to look decidedly wonky.
[3] It was discussed in the group at the time; it was advised that the 
output of the mix should be sha'd, which I eventually agreed with, but I 
don't think I did it in the event.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-09 Thread ianG

Hi Jeffrey,

On 8/09/13 02:52 AM, Jeffrey I. Schiller wrote:


The IETF was (and probably still is) a bunch of hard working
individuals who strive to create useful technology for the
Internet.



Granted!  I do not want to say that the IETF people are in a conspiracy 
with someone or each other, or that they are not hard workers [0].


But, I do want to say that, when it comes to security, we now have 
enough history and experience to suggest:


the committee may be part of the problem [1],

*and*

it is not clear that it can ever be part of the solution.

Insultingly, those who've spent a decade or so devoting themselves to 
this process will not take to that notion kindly.  It's sad and 
frustrating -- I also spent a lot of time & money pushing OpenPGP code 
-- but that does not change the basic economic data we have in front of 
us.  In the 1990s we had little or no real data about Internet security. 
 Now we're 20 years on.  We have real data.




In particular IETF contributors are in theory individual
contributors and not representatives of their employers. Of course
this is the theory and practice is a bit “noisier”



The notion that employees are there as individuals is noble but 
unrealistic, naive.  That's to ignore business and politics, h/t to John 
Young.


Individuals without funded interests are rare, and tend to only be 
around for brief periods [2].  It is the case that the IETF has done 
better than other industry groups by insisting on open access and rough 
consensus [3].


But the IETF has done nothing to change the laws of economics:  Being on 
a committee costs a huge amount of time.  Only corporates who are 
engaged in making money off of the results can typically re-invest that 
money, and only individuals committed to working *that job* from 
corporates would spend that time on their own dime.


So, naturally, the corporates dominate the committees.  To argue 
anything else is to argue against economics, perhaps the strongest force 
in human nature.




but the bulk of
participant I worked with were honest hard working individuals.



There's nothing dishonest or lazy about defending one's job.



Security fails on the Internet for three important reasons, that have
nothing to do with the IETF or the technology per-se (except for point
3).

  1.  There is little market for “the good stuff”. When people see that
  they have to provide a password to login, they figure they are
  safe... In general the consuming public cannot tell the
  difference between “good stuff” and snake oil. So when presented
  with a $100 “good” solution or a $10 bunch of snake oil, guess
  what gets bought.



Although it is nicely logical and oft-received wisdom, this is not 
historically supported.  Skype, SSH, Bitcoin, OTR and iMessage are 
successful security products.


There is clearly a market for "good stuff" but we the engineers don't 
see how to get there, and corporates don't either.  Putting us in a 
committee doesn't improve that, and probably makes it worse.




  2.  Security is *hard*, it is a negative deliverable. You do not know
  when you have it, you only know when you have lost it (via
  compromise).



2. counter-points in abundance:  transaction databases, protocols, 
monies, browsers, webservers, file sharing, p2p chats, office, 
languages, registries, source control, kernels, etc.  These are all 
hard.  We have a long list of projects and systems where we (the 
non-committee'd internet) have produced very difficult things.




  It is therefore hard to show return on investment
  with security. It is hard to assign a value to something not
  happening.



ROI:

a. it is hard to show quality at any point behind the screen.  The only 
things that are easy to show are pretty widgets on screens.  Everything 
else is hard.


b. I often show ROI models as to why security saves money.  (The model 
derives from support costs, if anyone doubts this.  Also, see Lynn 
Wheeler's discussion of credit card fees for the basic economics.)


Which is to say, the problems the net faces in security are somewhat 
distinct from them being just hard & hard to show; correlation maybe, 
but causality?




  2a. Most people don’t really care until they have been personally
  bitten. A lot of people only purchase a burglar alarm after they
  have been burglarized. Although people are more security aware
  today, that is a relatively recent development.



2a., I agree!  I now feel bitten by Skype, and damn them to hell!




  3.  As engineers we have totally and completely failed to deliver
  products that people can use.



Right.  (It is a slow-moving nightmare moving all our people to OTR, 
which is dominated at the usability level by Skype.)




  I point out e-mail encryption as a
  key example. With today’s solutions you need to understand PK and
  PKI at some level in order to use it. That is likely requiring a
  driver to 

[Cryptography] The One True Cipher Suite

2013-09-09 Thread ianG

On 9/09/13 02:16 AM, james hughes wrote:


I am honestly curious about the motivation not to choose more secure modes that 
are already in the suites?


Something I wrote a bunch of years ago seems apropos, perhaps minimally 
as a thought experiment:




Hypothesis #1 -- The One True Cipher Suite


In cryptoplumbing, the gravest choices are apparently on the nature of 
the cipher suite. To include latest fad algo or not? Instead, I offer 
you a simple solution. Don't.


There is one cipher suite, and it is numbered Number 1.

Ciphersuite #1 is always negotiated as Number 1 in the very first 
message. It is your choice, your ultimate choice, and your destiny. Pick 
well.


If your users are nice to you, promise them Number 2 in two years. If 
they are not, don't. Either way, do not deliver any more cipher suites 
for at least 7 years, one for each hypothesis.


   And then it all went to pot...

We see this with PGP. Version 2 was quite simple and therefore stable -- 
there was RSA, IDEA, MD5, and some weird padding scheme. That was it. 
Compatibility arguments were few and far between. Grumbles were limited 
to the padding scheme and a few other quirks.


Then came Versions 3-8, and it could be said that the explosion of 
options and features and variants caused more incompatibility than any 
standards committee could have done on its own.


   Avoid the Champagne Hangover

Do your homework up front.

Pick a good suite of ciphers, ones that are Pareto-Secure, and do your 
best to make the combination strong [1]. Document the shortfalls and do 
not worry about them after that. Cut off any idle fingers that can't 
keep from tweaking. Do not permit people to sell you on the marginal 
merits of some crazy public key variant or some experimental MAC thing 
that a cryptographer knocked up over a weekend or some minor foible that 
allows an attacker to learn your aunty's birth date after asking a 
million times.


Resist the temptation. Stick with The One.





http://iang.org/ssl/h1_the_one_true_cipher_suite.html
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] very little is missing for working BTNS in Openswan

2013-09-09 Thread Eugen Leitl

Just got word from an Openswan developer:


To my knowledge, we never finished implementing the BTNS mode.

It wouldn't be hard to do --- it's mostly just conditionally commenting out
code.


There's obviously a large potential deployment base for
BTNS for home users, just think of Openswan/OpenWRT.


signature.asc
Description: Digital signature
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Why are some protocols hard to deploy? (was Re: Opening Discussion: Speculation on BULLRUN)

2013-09-09 Thread ianG

On 8/09/13 21:24 PM, Perry E. Metzger wrote:

On Sat, 07 Sep 2013 18:50:06 -0700 John Gilmore g...@toad.com wrote:

It was never clear to me why DNSSEC took so long to deploy,

[...]

PS:...


I believe you have answered your own question there, John. Even if we
assume subversion, deployment requires cooperation from too many
people to be fast.

One reason I think it would be good to have future key management
protocols based on very lightweight mechanisms that do not require
assistance from site administrators to deploy is that it makes it
ever so much easier for things to get off the ground. SSH deployed
fast because one didn't need anyone's cooperation to use it -- if you
had root on a server and wanted to log in to it securely, you could
be up and running in minutes.



It's also worth remembering that one reason the Internet succeeded was 
that it did not need the permission of the local telcos and the purchase 
of expensive ISO/OSI stuff from the IT companies in order to get up and 
going.


This lesson is repeated over and over again.  Eliminate permission, and 
win.  Insert multiple permission steps and lose.




We need to make more of our systems like that. The problem with
DNSSEC is it is so obviously architecturally correct but so
difficult to deploy without many parties cooperating that it has
acted as an enormous tar baby.




iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] IETF: Security and Pervasive Monitoring

2013-09-09 Thread Eugen Leitl

http://www.ietf.org/blog/2013/09/security-and-pervasive-monitoring/

Security and Pervasive Monitoring

The Internet community and the IETF care deeply about how much we can trust
commonly used Internet services and the protocols that these services use.
So the reports about large-scale monitoring of Internet traffic and users
disturbs us greatly.  We knew of interception of targeted individuals and
other monitoring activities, but the scale of recently reported monitoring is
surprising. Such scale was not envisaged during the design of many Internet
protocols, but we are considering the consequence of these kinds of attacks.

Of course, it is hard to know for sure from current reports what attack
techniques may be in use.  As such, it is not so easy to comment on the
specifics from an IETF perspective.  Still, the IETF has some long standing
general principles that we can talk about, and we can also talk about some of
the actions we are taking.

In 1996, RFC 1984 articulated the view that encryption is an important tool
to protect privacy of communications, and that as such it should be
encouraged and available to all.  In 2002, we decided that IETF standard
protocols must include appropriate strong security mechanisms, and
established this doctrine as a best current practice, documented in RFC 3365.
Earlier, in 2000 the IETF decided not to consider requirements for
wiretapping when creating and maintaining IETF standards, for reasons stated
in RFC 2804. Note that IETF participants exist with positions at all points
of the privacy/surveillance continuum, as seen in the discussions that led
to RFC 2804.

As privacy has become increasingly important, the Internet Architecture Board
(IAB) developed guidance for handling privacy considerations in protocol
specifications, and documented that in RFC 6973. And there are ongoing
developments in security and privacy happening within the IETF all the time,
for example work has just started on version 1.3 of the Transport Layer
Security (TLS, RFC 5246) protocol which aims to provide better
confidentiality during the early phases of the cryptographic handshake that
underlies much secure Internet traffic.

Recent days have also seen an extended and welcome discussion triggered by
calls for the IETF to build better protections against wide-spread
monitoring.

As that discussion makes clear, IETF participants want to build secure and
deployable systems for all Internet users.  Indeed, addressing security and
new vulnerabilities has been a topic in the IETF for as long as the
organisation has existed.  Technology alone is, however, not the only factor.
Operational practices, laws, and other similar factors also matter. First of
all, existing IETF security technologies, if used more widely, can definitely
help.  But technical issues outside the IETF’s control, for example endpoint
security, or the properties of specific products or implementations also
affect the end result in major ways. So at the end of the day, no amount of
communication security helps you if you do not trust the party you are
communicating with or the devices you are using. Nonetheless, we’re confident
the IETF can and will do more to make our protocols work more securely and
offer better privacy features that can be used by implementations of all
kinds.

So with the understanding of limitations of technology-only solutions, the
IETF is continuing its mission to improve security in the Internet.  The
recent revelations provide additional motivation for doing this, as well as
highlighting the need to consider new threat models.

We should seize this opportunity to take a hard look at what we can do
better.  Again, it is important to understand the limitations of technology
alone. But here are some examples of things that are already ongoing:

We’re having a discussion as part of the development of HTTP/2.0 as to how to
make more and better use of TLS, for example to perhaps enable clients to
require the use of security and not just have to react to the HTTP or HTTPS
URLs chosen by servers.

We’re having discussions as to how to handle the potentially new threat model
demonstrated by the recent revelations so that future protocol designs can
take into account potential pervasive monitoring as a known threat model.

We’re considering ways in which better use can be made of existing protocol
features, for example, better guidance as to how to deploy TLS with Perfect
Forward Secrecy, which makes applications running over TLS more robust if
server private keys later leak out.

We’re constantly updating specifications to deprecate older, weaker
cryptographic algorithms and allocate code points for currently strong
algorithm choices so those can be used with Internet protocols.

And we are confident that discussions on this topic will motivate IETF
participants to do more work on these and further related topics.

But don’t think about all this just in terms of the recent revelations.  The
security and 

Re: [Cryptography] Points of compromise

2013-09-09 Thread Jerry Leichter
On Sep 8, 2013, at 1:53 PM, Phillip Hallam-Baker wrote:

 I was asked to provide a list of potential points of compromise by a 
 concerned party. I list the following so far as possible/likely:
It's not clear to me what kinds of compromises you're considering.  You've 
produced a list of a number of possibilities, but not even mentioned whole 
classes of them - e.g., back doors in ECC.

I've expanded, however, on one element of your list.
 2) Covert channel in Cryptographic accelerator hardware.
 
 It is possible that cryptographic accelerators have covert channels leaking 
 the private key through TLS (packet alignment, field ordering, timing, etc.) 
 or in key generation (kleptography of the RSA modulus a la Moti Yung). 
There are two sides to a compromise in accelerator hardware:  Grabbing the 
information, and exfiltrating it.  The examples you give - and much discussion, 
because it's fun to consider such stuff - look at clever ways to exfiltrate 
stolen information along with the data it refers to.

However, to a patient attacker with large resources, a different approach is 
easier:  Have the planted hardware gather up keys and exfiltrate them when it 
can.  The attacker builds up a large database of possible keys - many millions, 
even billions, of keys - but still even an exhaustive search against that 
database is many orders of magnitude easier than an exhaustive search on an 
entire keyspace, and quite plausible - consider Venona.  In addition, the 
database can be searched intelligently based on spatial/temporal/organizational 
closeness to the message being attacked.

An attack of this sort means you need local memory in the device - pretty cheap 
these days, though of course it depends on the device - and some way of 
exfiltrating that data later.  There are many ways one might do that, from the 
high tech (when asked to encrypt a message with a particular key, or bound to a 
particular target, instead encrypt - with some other key - and send - to some 
other target - the data to be exfiltrated) to low (pay someone with physical 
access to plug a USB stick into the device periodically).

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] [cryptography] SSH uses secp256/384r1 which has the same parameters as what's in SEC2 which are the same the parameters as specified in SP800-90 for Dual EC DRBG!

2013-09-09 Thread Kristian Gjøsteen

9. sep. 2013 kl. 10:45 skrev Eugen Leitl eu...@leitl.org:
 Forwarded without permission, hence anonymized:
 
 Hey, I had a look at SEC2 and the TLS/SSH RFCs. SSH uses secp256/384r1
 which has the same parameters as what's in SEC2 which are the same the
 parameters as specified in SP800-90 for Dual EC DRBG!
 TLS specifies you can use those two curves as well...
 Surely that's not coincidence..
 

The curves are standard NIST curves. They were the curves you used until about 
now. That they are the same everywhere is no surprise.

The problem with Dual-EC-DRBG was that a point that should have been 
generated verifiably at random was not generated verifiably at random. There's 
no reason to believe it wasn't, but it was a stupid mistake that should not 
have been made, and that has now been blown out of all proportion. Users, if 
there are any, should generate their own points verifiably at random.
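For illustration, "generate their own points verifiably at random" can be as simple as the usual try-and-increment construction sketched below: hash a published seed (plus a counter) to candidate x-coordinates until one lies on the curve, so anyone can re-derive the point from the seed. The constants are the published P-256 parameters; the seed string is just an example.

```python
# Sketch: derive a point Q on NIST P-256 verifiably at random from a public
# seed, by hashing seed||counter to candidate x-coordinates until one is on
# the curve. (P-256 has cofactor 1, so any curve point is in the main group.)
import hashlib

p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b
a = p - 3                                    # P-256 uses a = -3 mod p

def point_from_seed(seed: bytes):
    ctr = 0
    while True:
        h = hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        x = int.from_bytes(h, "big") % p
        rhs = (x * x * x + a * x + b) % p    # y^2 = x^3 + ax + b
        if pow(rhs, (p - 1) // 2, p) == 1:   # Euler criterion: rhs is a square
            y = pow(rhs, (p + 1) // 4, p)    # valid because p % 4 == 3
            return (x, y), ctr               # publish seed and ctr alongside Q
        ctr += 1

Q, counter = point_from_seed(b"example public seed, 2013-09-09")
```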

If you reuse one or more points from Dual-EC-DRBG as generators in other 
standards, it is of no matter. Even if the points are carefully chosen, they 
cannot compromise those other standards. (DLOG is essentially independent of 
the generator.)

There's no reason to be paranoid, just because the NSA is out to get you.

-- 
Kristian Gjøsteen



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Usage models (was Re: In the face of cooperative end-points, PFS doesn't help)

2013-09-09 Thread Jerry Leichter
On Sep 8, 2013, at 11:41 PM, james hughes wrote:
 In summary, it would appear that the most viable solution is to make
 I don't see how it's possible to make any real progress within the existing 
 cloud model, so I'm with you 100% here.  (I've said the same earlier.)
 Could cloud computing be a red herring? Banks and phone companies all give up 
 personal information to governments (Verizon?) and have been doing this long 
 before and long after cloud computing was a fad
It's a matter of context.  For data I'm deliberately sharing with some company 
- sure, cloud is fine.  As I mentioned elsewhere, if the NSA wants to spend 
huge resources to break in to my purchasing transactions with Amazon, I may 
care as a citizen that they are wasting their money - but as a personal matter, 
it's not all that much of a big deal, as that information is already being 
gathered, aggregated, bought, and sold on a mass basis.  If they want to know 
about my buying habits and financial transactions, Acxiom can sell them all 
they need for a couple of bucks.

On the other hand, I don't want them recording my chats or email or phone 
conversations.  It's *that* stuff that is out in the cloud these days, and as 
long as it remains out there in a form that someone other than I and those I'm 
communicating with can decrypt, it's subject to attacks - attacks so pervasive 
that I don't see how you could ever build a system (technical or legal) to 
protect against them.  The only way to win is not to play.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] [cryptography] SSH uses secp256/384r1 which has the same parameters as what's in SEC2 which are the same the parameters as specified in SP800-90 for Dual EC DRBG!

2013-09-09 Thread Alexander Klimov
On Mon, 9 Sep 2013, Daniel wrote:
 Is there anyone on the lists qualified in ECC mathematics that can
 confirm that? 

NIST SP 800-90A, Rev 1 says:

 The Dual_EC_DRBG requires the specifications of an elliptic curve and 
 two points on the elliptic curve. One of the following NIST approved 
 curves with associated points shall be used in applications requiring 
 certification under [FIPS 140]. More details about these curves may 
 be found in [FIPS 186], the Digital Signature Standard.

 And what ramifications it has, if any..

No. They are widely used curves and thus a good way to reduce 
conspiracy theories that they were chosen in some malicious way to 
subvert DRBG.

-- 
Regards,
ASK
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Impossible trapdoor systems (was Re: Opening Discussion: Speculation on BULLRUN)

2013-09-09 Thread Jerry Leichter
On Sep 8, 2013, at 8:37 PM, James A. Donald wrote:
 Your magic key must then take any block of N bits and magically
 produce the corresponding plaintext when any given ciphertext
 might correspond to many, many different plaintexts depending
 on the key
 Suppose that the mappings from 2^N plaintexts to 2^N ciphertexts are not 
 random, but rather orderly, so that given one element of the map, one can 
 predict all the other elements of the map.
 
 Suppose, for example the effect of encryption was to map a 128 bit block to a 
 group, map the key to the group, add the key to the block, and map back
Before our current level of understanding of block ciphers, people actually 
raised - and investigated - the question of whether the DES operations formed a 
group.  (You can do this computationally with reasonable resources.  The answer 
is that it isn't.)  I don't think anyone has repeated the particular experiment 
with the current crop of block ciphers; but then I expect the details of their 
construction, and the attacks they are already explicitly built to avoid, would 
rule out the possibility.  But I don't know.
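A toy version of that kind of experiment, just to make the question concrete (this is not the actual DES cycling work): model a "cipher" as a keyed family of permutations on a tiny domain and check whether composing two keyed encryptions lands back inside the family. A structureless family essentially never closes under composition; a family with group structure, such as plain key-XOR, always does.

```python
# Toy experiment: do a cipher's per-key permutations form a group under
# composition? Compare a random keyed family with key-XOR (which is a group).
import random

DOMAIN, KEYS = 256, range(64)               # 8-bit "blocks", 6-bit "keys"
random.seed(1)

random_family = {k: random.sample(range(DOMAIN), DOMAIN) for k in KEYS}
xor_family    = {k: [x ^ k for x in range(DOMAIN)] for k in KEYS}

def closed_under_composition(family):
    """Compose two keyed permutations; is the result some single-key one?"""
    k1, k2 = random.sample(list(KEYS), 2)
    composed = [family[k2][family[k1][x]] for x in range(DOMAIN)]
    return any(family[k3] == composed for k3 in KEYS)

print("random family:", closed_under_composition(random_family))  # ~always False
print("XOR family:   ", closed_under_composition(xor_family))     # always True
```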

Stepping back, what you are considering is the possibility that there's a 
structure in the block cipher such that if you have some internal information, 
and you have some collection of plaintext/ciphertext pairs with respect to a 
given key, you can predict other (perhaps all) such pairs.  This is just 
another way of saying there's a ciphertext/known plaintext/chosen plaintext/ 
chosen ciphertext attack, depending on your assumptions about how that 
collection of pairs must be created.  That it's conveniently expressible as 
some kind of mathematical structure on the mappings generated by the cipher for 
a given key is neither here nor there.

Such a thing would contradict everything we think we know about block ciphers. 
Sure, it *could* happen - but I'd put it way, way down the list of possibles.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] AES state of the art...

2013-09-09 Thread Alexander Klimov
On Sun, 8 Sep 2013, Perry E. Metzger wrote:
 What's the current state of the art of attacks against AES? Is the
 advice that AES-128 is (slightly) more secure than AES-256, at least
 in theory, still current?

I am not sure what is the exact attack you are talking about, but I 
guess you misunderstood the result that says "the attack works 
against AES-256, but not against AES-128" as meaning that AES-128 is 
more secure. It can be the case that to break AES-128 the attack needs 
2^240 time, while to break AES-256 it needs 2^250 time. Here AES-128 
is not technically broken, since 2^240 > 2^128, but AES-256 is broken, 
since 2^250 < 2^256; OTOH, AES-256 is still more secure against the 
attack.

-- 
Regards,
ASK
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Market demands for security (was Re: Opening Discussion: Speculation on BULLRUN)

2013-09-09 Thread Jerry Leichter
On Sep 8, 2013, at 6:49 PM, Phillip Hallam-Baker wrote:
 ...The moral is that we have to find other market reasons to use security. 
 For example simplifying administration of endpoints. I do not argue like some 
 do that there is no market for security so we should give up, I argue that 
 there is little market for something that only provides security and so to 
 sell security we have to attach it to something they want
Quote from the chairman of a Fortune 50 company to a company I used to work 
for, made in the context of a talk to the top people at that company*:  "I 
don't want to buy security products.  I want to buy secure products."

This really captures the situation in a nutshell.  And it's a conundrum for all 
the techies with cool security technologies they want to sell.  Security isn't 
a product; it's a feature.  If there is a place in the world for companies 
selling security solutions, it's as suppliers to those producing something that 
fills some other need - not as suppliers to end users.

-- Jerry

*It's obvious from public facts about me that the company receiving this word 
of wisdom was EMC; but I'll leave the other company anonymous.


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Thoughts about keys

2013-09-09 Thread Guido Witmond
Hi Perry,

I just came across your message [0] on retrieving the correct key for a
name. I believe that's called Squaring Zooko's Triangle.

I've come up with my ideas and protocol to address this need.
I call it eccentric-authentication. [1,2]

With Regards, Guido.



0: http://www.metzdowd.com/pipermail/cryptography/2013-August/016870.html

1:
http://eccentric-authentication.org/blog/2013/08/31/the-holy-grail-of-cryptography.html

2:
http://eccentric-authentication.org/eccentric-authentication/global_unique_secure.html



signature.asc
Description: OpenPGP digital signature
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] A Likely Story!

2013-09-09 Thread Alexander Klimov
On Sun, 8 Sep 2013, Peter Fairbrother wrote:
 On the one hand, if they continued to recommend that government people use
 1024-bit RSA they could be accused of failing their mission to protect
 government communications.
 
 On the other hand, if they told ordinary people not to use 1024-bit RSA, they
 could be accused of failing their mission to spy on people.
 
 What to do?

NIST has recommended at least RSA-2048 for a long time; for example, NIST 
Special Publication 800-57, back in August 2005, said:

 [...] for Federal Government unclassified applications. A minimum of 
 eighty bits of security shall be provided until 2010. Between 2011 
 and 2030, a minimum of 112 bits of security shall be provided. 
 Thereafter, at least 128 bits of security shall be provided.

Note that

 RSA-1024 ~ 80 bits of security; 
 RSA-2048 ~ 112 bits; 
 RSA-3072 ~ 128 bits 

So if anyone is to blame for using 1024-bit RSA, it is not NIST.

BTW, once you realize that 256 bits of security requires RSA with 
15360 bits, you will believe conspiracy theories about ECC much less. 
Exponentiation with 15360 bits takes roughly 15^3 = 3375 times more CPU 
time than a 1024-bit exponentiation, thus using RSA for 256-bit 
security is impractical.
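A quick, rough way to check that scaling claim empirically (timings are ballpark; the measured ratio will come out somewhat below 15^3 depending on the bignum library's multiplication algorithm):

```python
# Rough check: modular exponentiation cost grows roughly cubically with the
# modulus size, so a 15360-bit operation ~ 15^3 = 3375x a 1024-bit one.
import secrets, time

def time_modexp(bits):
    n = secrets.randbits(bits) | (1 << (bits - 1)) | 1    # random odd modulus
    base, exp = secrets.randbits(bits) % n, secrets.randbits(bits)
    t0 = time.perf_counter()
    pow(base, exp, n)
    return time.perf_counter() - t0

t1024, t15360 = time_modexp(1024), time_modexp(15360)     # latter takes seconds
print("15360-bit / 1024-bit time ratio: %.0f (theory ~3375)" % (t15360 / t1024))
```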

 You can use any one of trillions of different elliptic curves, which should be
 chosen partly at random and partly so they are the right size and so on; but
 you can also start with some randomly-chosen numbers, then work out a curve
 from those numbers, and you can use those random numbers to break the session
 key setup.

Can you elaborate on how knowing the seed for curve generation can be 
used to break the encryption? (BTW, the seeds for randomly generated 
curves are actually published.)

-- 
Regards,
ASK
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] SSH uses secp256/384r1 which has the same parameters as what's in SEC2 which are the same the parameters as specified in SP800-90 for Dual EC DRBG!

2013-09-09 Thread Eugen Leitl

Forwarded without permission, hence anonymized:


Hey, I had a look at SEC2 and the TLS/SSH RFCs. SSH uses secp256/384r1
which has the same parameters as what's in SEC2 which are the same the
parameters as specified in SP800-90 for Dual EC DRBG!
TLS specifies you can use those two curves as well...
 Surely that's not coincidence..




signature.asc
Description: Digital signature
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Der Spiegel: NSA Can Spy on Smart Phone Data

2013-09-09 Thread Nap van Zuuren
The Der Spiegel article in English can be found at:

http://www.spiegel.de/international/world/privacy-scandal-nsa-can-spy-on-smart-phone-data-a-920971.html

 

and an update (in English) will be added today.

 

-Original Message-
From: cryptography-bounces+nap.van.zuuren=pandora...@metzdowd.com
[mailto:cryptography-bounces+nap.van.zuuren=pandora...@metzdowd.com] On behalf of
Christian Huitema
Sent: Monday, 9 September 2013 6:22
To: 'Jerry Leichter'; 'Perry E. Metzger'
CC: cryptography@metzdowd.com
Subject: Re: [Cryptography] Der Spiegel: NSA Can Spy on Smart Phone Data

 

-BEGIN PGP SIGNED MESSAGE-

Hash: SHA1

 

 Apparently this was just a teaser article.  The following is apparently
the full story:  http://cryptome.org/2013/09/nsa-smartphones.pdf  I can't
tell  for sure - it's the German original, and my German is non-existent.

 

The high-level summary is that phones contain a great deal of interesting
information, that they can target iPhone and Android phones, and that after
some pretty long efforts they can hack the BlackBerry too. Bottom line, get
a Windows Phone...

 

- -- Christian Huitema

-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.20 (MingW32)
Comment: Using gpg4o v3.1.107.3564 - http://www.gpg4o.de/
Charset: utf-8

iQEcBAEBAgAGBQJSLUz0AAoJELba05IUOHVQTvUH/2XXo92DcMKpWUQ/8q4dg8BY
4B+/ytLy8tpBH33lT+u1yTpnLH/OV0h6mQdIusMun94JugGlJiePe0yC6zcsEE+s
OgU1SNdvqRoc5whTiV6ZIMfoOakyzeLPonS+gZ6hOWBLjQf52JNVHE4ERWTOK5un
iymLK36wTFqHceF6+iVrJEwaYEvLURpUB2U3dghC5OJyQzf5yqCvdYP18iStz2WT
woSJikGps2dS7eV6vPtkqhar5EWXHpPPAYwZbDskuMx10Y8Z8ET+HTFAw5rV3d3L
925adBWQLjR73wpANRyH85LtsK6nJlJzW0D1IMBmFyOqKZsOxjZQ75dAyi4oE+o=
=/S/b
-END PGP SIGNATURE-

 

___

The cryptography mailing list

cryptography@metzdowd.com

http://www.metzdowd.com/mailman/listinfo/cryptography

 

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

[Cryptography] Scott Aaronson: NSA: Possibly breaking US laws, but still bound by laws of computational complexity

2013-09-09 Thread Eugen Leitl

http://www.scottaaronson.com/blog/?p=1517

NSA: Possibly breaking US laws, but still bound by laws of computational
complexity

Last week, I got an email from a journalist with the following inquiry.  The
recent Snowden revelations, which made public for the first time the US
government’s “black budget,” contained the following enigmatic line from the
Director of National Intelligence: “We are investing in groundbreaking
cryptanalytic capabilities to defeat adversarial cryptography and exploit
internet traffic.”  So, the journalist wanted to know, what could these
“groundbreaking” capabilities be?  And in particular, was it possible that
the NSA was buying quantum computers from D-Wave, and using them to run
Shor’s algorithm to break the RSA cryptosystem?

I replied that, yes, that’s “possible,” but only in the same sense that it’s
“possible” that the NSA is using the Easter Bunny for the same purpose.  (For
one thing, D-Wave themselves have said repeatedly that they have no interest
in Shor’s algorithm or factoring.  Admittedly, I guess that’s what D-Wave
would say, were they making deals with NSA on the sly!  But it’s also what
the Easter Bunny would say.)  More generally, I said that if the open
scientific world’s understanding is anywhere close to correct, then quantum
computing might someday become a practical threat to cryptographic security,
but it isn’t one yet.

That, of course, raised the extremely interesting question of what
“groundbreaking capabilities” the Director of National Intelligence was
referring to.  I said my personal guess was that, with ~99% probability, he
meant various implementation vulnerabilities and side-channel attacks—the
sort of thing that we know has compromised deployed cryptosystems many times
in the past, but where it’s very easy to believe that the NSA is ahead of the
open world.  With ~1% probability, I guessed, the NSA made some sort of big
improvement in classical algorithms for factoring, discrete log, or other
number-theoretic problems.  (I would’ve guessed even less than 1% probability
for the latter, before the recent breakthrough by Joux solving discrete log
in fields of small characteristic in quasipolynomial time.)

Then, on Thursday, a big New York Times article appeared, based on 50,000 or
so documents that Snowden leaked to the Guardian and that still aren't
public.  (See also an important Guardian piece by security expert Bruce
Schneier, and an accompanying Q&A.)  While a lot remains vague, there might be
more public information right now about current NSA cryptanalytic
capabilities than there’s ever been.

So, how did my uninformed, armchair guesses fare?  It’s only halfway into the
NYT article that we start getting some hints:

The files show that the agency is still stymied by some encryption, as Mr.
Snowden suggested in a question-and-answer session on The Guardian’s Web site
in June.

“Properly implemented strong crypto systems are one of the few things that
you can rely on,” he said, though cautioning that the N.S.A. often bypasses
the encryption altogether by targeting the computers at one end or the other
and grabbing text before it is encrypted or after it is decrypted…

Because strong encryption can be so effective, classified N.S.A. documents
make clear, the agency’s success depends on working with Internet companies —
by getting their voluntary collaboration, forcing their cooperation with
court orders or surreptitiously stealing their encryption keys or altering
their software or hardware…

Simultaneously, the N.S.A. has been deliberately weakening the international
encryption standards adopted by developers. One goal in the agency’s 2013
budget request was to “influence policies, standards and specifications for
commercial public key technologies,” the most common encryption method.

Cryptographers have long suspected that the agency planted vulnerabilities in
a standard adopted in 2006 by the National Institute of Standards and
Technology and later by the International Organization for Standardization,
which has 163 countries as members.

Classified N.S.A. memos appear to confirm that the fatal weakness, discovered
by two Microsoft cryptographers in 2007, was engineered by the agency. The
N.S.A. wrote the standard and aggressively pushed it on the international
group, privately calling the effort “a challenge in finesse.”

So, in pointing to implementation vulnerabilities as the most likely
possibility for an NSA “breakthrough,” I might have actually erred a bit too
far on the side of technological interestingness.  It seems that a large part
of what the NSA has been doing has simply been strong-arming Internet
companies and standards bodies into giving it backdoors.  To put it bluntly:
sure, if it wants to, the NSA can probably read your email.  But that isn’t
mathematical cryptography’s fault—any more than it would be mathematical
crypto’s fault if goons broke into your house and carted away your laptop.
On the contrary, 

Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-09 Thread Kent Borg

On 09/08/2013 11:56 PM, Jerry Leichter wrote:

Which brings into the light the question:  Just *why* have so many random 
number generators proved to be so weak?


Your three cases left off an important one: Not bothering to seed the 
PRNG at all.  I think the Java/Android cryptographic (!) library bug 
that just came up was an instance of that.


I think the root of the problem is that programs are written, and bugs 
squashed, until the program works. Maybe throw some additional testing 
at it if we are being thorough, but then business pressures and boredom 
say "ship it".


That won't catch a PRNG that wasn't seeded, nor a hashed password that 
wasn't salted, the unprotected URL, the SQL injection path, buffer 
overflow, etc.
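One cheap test that does catch the unseeded-PRNG case (and little else): run the generator twice in separate processes and require the outputs to differ. A toy sketch; the snippet under test here is just a stand-in for the real generator.

```python
# Toy regression test: an unseeded (or constant-seeded) PRNG produces the
# same stream on every run, so two fresh processes must disagree.
import subprocess, sys

SNIPPET = "import random; print(random.Random().random())"   # stand-in generator

def one_run():
    return subprocess.check_output([sys.executable, "-c", SNIPPET])

assert one_run() != one_run(), "output repeats across runs: not seeded?"
```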


Computer security is design, implementation, and skepticism.  But unless 
you can sell it with a buzzword...



-kb

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] The One True Cipher Suite

2013-09-09 Thread Phillip Hallam-Baker
On Mon, Sep 9, 2013 at 3:58 AM, ianG i...@iang.org wrote:

 On 9/09/13 02:16 AM, james hughes wrote:

  I am honestly curious about the motivation not to choose more secure
 modes that are already in the suites?


 Something I wrote a bunch of years ago seems apropos, perhaps minimally as
 a thought experiment:



 Hypothesis #1 -- The One True Cipher Suite


 In cryptoplumbing, the gravest choices are apparently on the nature of the
 cipher suite. To include latest fad algo or not? Instead, I offer you a
 simple solution. Don't.

 There is one cipher suite, and it is numbered Number 1.

 Ciphersuite #1 is always negotiated as Number 1 in the very first message.
 It is your choice, your ultimate choice, and your destiny. Pick well.

 If your users are nice to you, promise them Number 2 in two years. If they
 are not, don't. Either way, do not deliver any more cipher suites for at
 least 7 years, one for each hypothesis.

And then it all went to pot...

 We see this with PGP. Version 2 was quite simple and therefore stable --
 there was RSA, IDEA, MD5, and some weird padding scheme. That was it.
 Compatibility arguments were few and far between. Grumbles were limited to
 the padding scheme and a few other quirks.

 Then came Versions 3-8, and it could be said that the explosion of options
 and features and variants caused more incompatibility than any standards
 committee could have done on its own.

Avoid the Champagne Hangover

 Do your homework up front.

 Pick a good suite of ciphers, ones that are Pareto-Secure, and do your
 best to make the combination strong [1]. Document the shortfalls and do
 not worry about them after that. Cut off any idle fingers that can't keep
 from tweaking. Do not permit people to sell you on the marginal merits of
 some crazy public key variant or some experimental MAC thing that a
 cryptographer knocked up over a weekend or some minor foible that allows an
 attacker to learn your aunty's birth date after asking a million times.

 Resist the temptation. Stick with The One.



Steve Bellovin has made the same argument and I agree with it.
Proliferation of cipher suites is not helpful.

The point I make is that adding a strong cipher does not make you more
secure. Only removing the option of using weak ciphers makes you more
secure.

There are good reasons to avoid MD5 and IDEA but at this point we are very
confident of AES and SHA3 and reasonably confident of RSA.

We will need to move away from RSA at some point in the future. But ECC is
a mess right now. We can't trust the NIST curves any more and the IPR
status is prohibitively expensive to clarify.

-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

[Cryptography] What TLS ciphersuites are still OK?

2013-09-09 Thread Ben Laurie
Perry asked me to summarise the status of TLS a while back ... luckily I
don't have to because someone else has:

http://tools.ietf.org/html/draft-sheffer-tls-bcp-00

In short, I agree with that draft. And the brief summary is: there's only
one ciphersuite left that's good, and unfortunately it's only available in
TLS 1.2:

TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
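For illustration, restricting a Python client to exactly that suite looks roughly like this; DHE-RSA-AES128-GCM-SHA256 is the OpenSSL-style spelling of the IANA name above, and the host is a placeholder (the handshake will of course fail against servers that don't offer DHE):

```python
# Sketch: a TLS 1.2 client limited to TLS_DHE_RSA_WITH_AES_128_GCM_SHA256.
import socket, ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)        # TLS 1.2 only
ctx.set_ciphers("DHE-RSA-AES128-GCM-SHA256")      # OpenSSL name of the suite
ctx.verify_mode = ssl.CERT_REQUIRED               # still verify the server cert
ctx.check_hostname = True
ctx.load_default_certs()

host = "example.org"                              # placeholder server
with socket.create_connection((host, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=host) as tls:
        print("negotiated:", tls.cipher())        # (name, protocol, secret bits)
```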
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-09 Thread Jeffrey I. Schiller
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Just to throw in my two cents...

In the early 1990’s I wanted to roll out an encrypted e-mail solution
for the MIT Community (I was the Network Manager and responsible for
the mail system). We already had our Kerberos Authentication system
(of which I am one of the authors, so I have a special fondness for
it). It would do a fine job of helping people exchange session keys
for mail, and everyone at MIT has a Kerberos ID (which would therefore
permit communication between everyone in the community).

However, as Network Manager, I was also the person who would see legal
requests for access to email and other related data. Whoever ran the
Kerberos KDC would be in a position to retrieve any necessary keys to
decrypt any encrypted message. Which meant that whoever ran the KDC
could be compelled to turn over the necessary keys. In fact my fear
was that a clueless law enforcement organization would just take the
whole KDC with a search warrant, thus compromising everyone’s
security. Today they may well also use a search warrant to take the
whole KDC, but not because they are clueless...

The desire to offer privacy protection that I, as the administrator,
could not defeat is what motivated me to look into public key systems
and eventually participate in the Internet’s Privacy Enhanced Mail
(PEM) efforts. By using public key algorithms, correspondents are
protected from the prying eyes of even the folks who run the system.

I don’t believe you can do this without using some form of public key
system.

-Jeff
–
___
Jeffrey I. Schiller
Information Services and Technology
Massachusetts Institute of Technology
77 Massachusetts Avenue  Room E17-110A, 32-392
Cambridge, MA 02139-4307
617.910.0259 - Voice
j...@mit.edu
http://jis.qyv.name
___



-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)

iD8DBQFSLhgY8CBzV/QUlSsRAoQ8AKDBC/y/qph+HpE11a+5d7p6a6DqyQCgiN/f
3Dcsr8wLR1H+J9gzz31n4ys=
=84A0
-END PGP SIGNATURE-
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] AES state of the art...

2013-09-09 Thread Tony Arcieri
On Sun, Sep 8, 2013 at 3:33 PM, Perry E. Metzger pe...@piermont.com wrote:

 What's the current state of the art of attacks against AES? Is the
 advice that AES-128 is (slightly) more secure than AES-256, at least
 in theory, still current?


No. I assume that advice comes from related key attacks on AES, and Bruce
Schneier's blog posts about them:

https://www.schneier.com/blog/archives/2009/07/new_attack_on_a.html
https://www.schneier.com/blog/archives/2009/07/another_new_aes.html

For some reason people read these blog posts and thought that Schneier
recommends AES-128 over AES-256. However, that is not
the case. Here's a relevant page from Schneier's book Cryptography
Engineering in which he recommends AES-256 (or switching to an algorithm
without known attacks):

https://pbs.twimg.com/media/BEvLoglCcAAqg4E.jpg

-- 
Tony Arcieri
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] AES state of the art...

2013-09-09 Thread Perry E. Metzger
On Mon, 9 Sep 2013 14:18:41 +0300 Alexander Klimov
alser...@inbox.ru wrote:
 On Sun, 8 Sep 2013, Perry E. Metzger wrote:
  What's the current state of the art of attacks against AES? Is the
  advice that AES-128 is (slightly) more secure than AES-256, at
  least in theory, still current?
 
 I am not sure what is the exact attack you are talking about, but I 
 guess you misunderstood the result that says "the attack works 
 against AES-256, but not against AES-128" as meaning that AES-128
 is more secure. It can be the case that to break AES-128 the attack
 needs 2^240 time, while to break AES-256 it needs 2^250 time. Here
 AES-128 is not technically broken, since 2^240 > 2^128, but AES-256
 is broken, since 2^250 < 2^256; OTOH, AES-256 is still more secure
 against the attack.
 

There is a related key attack against AES-256 that breaks it in order
2^99.5, far worse than 2^250!

However, several people seem to have assured me (in private email)
that they think such related key attacks are not important in
practice.

Perry
-- 
Perry E. Metzger  pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] ADMIN: traffic levels

2013-09-09 Thread Perry E. Metzger
List traffic levels are very high right now.

Although the current situation is worrisome to many of us, the list
becomes less useful to all when it becomes so clogged with posts that
it becomes impossible for any reasonable person to read it.

I and the co-moderators are probably going to start being much more
strict about content until things settle down. Do not be surprised or
offended when you get a rejection -- it is nothing personal.

Some rules of thumb:

SHORT BEATS LONG: Don't ramble, get to the point, avoid unnecessary
asides, trim back what you're quoting to the minimum. This is
especially important in replies on long threads.

IMPORTANT BEATS TRIVIAL: The more real, interesting, and new content,
the more likely we are to forward it.

DON'T BE REDUNDANT: If you already said something a couple of times in
a thread, don't repeat it endlessly. Fixing a clear misunderstanding
is okay, of course.

TECHNICAL BEATS POLITICAL: The list explicitly permits political
postings, but especially when the load is this high, they should
be informative and insightful. I'll almost always forward technical
cryptography and protocol discussion.

BARE LINKS ARE IRRITANTS: If you post a link to something, it should
explain clearly why someone might want to click. Even a sentence will
do.

TOP POSTING IS AN ABOMINATION BEFORE THE MODERATOR: ...and I am an
angry, old-testament sort of moderator, too.

Lastly:

I've had to ban two people in the last week for getting incredibly
insulting after I would not forward their postings. It should go
without saying that calling me a fascist, an NSA plant, etc. is
unlikely to alter my opinion of your posting in a positive way. There
are other, unmoderated forums (with even more volume) -- you know how
to find them if you like.

Perry
-- 
Perry E. Metzger    pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Random number generation influenced, HW RNG

2013-09-09 Thread Perry E. Metzger
First, David, thank you for participating in this discussion.

To orient people, we're talking about whether Intel's on-chip
hardware RNGs should allow programmers access to the raw HRNG output,
both for validation purposes to make sure the whole system is working
correctly, and if they would prefer to do their own whitening and
stretching of the output.

On Sun, 08 Sep 2013 21:40:34 -0700 David Johnston d...@deadhat.com
wrote:
  Well, since you personally did this, would you care to explain the
  very strange design decision to whiten the numbers on chip, and
  not provide direct access to the raw unwhitened output.

 #1 So that that state remains secret from things trying to discern
 that state for purposes of predicting past or future outputs of the
 DRBG.

That seems like a misguided rationale. In particular, given that
virtually all crypto software and existing kernels already have to
cope with hardware that does not provide this capability, it is
probably better that a hardware RNG not be a cryptographic
PRNG. It should be a source of actual hard-random bits that feed into
the commonly used software mechanisms.

If you can't generate enough of them to satisfy all possible demand,
then I think it is architecturally far safer to allow software to
make the decision about how to stretch the scarcity, and in any case,
the software needs to exist anyway because other hardware does not
have the capability.

As it stands, the major consumers of your RNG, like the Linux kernel,
already end up mixing it in to a software RNG rather than implicitly
trusting it. It would be better to go further than this, I think.
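
To sketch the kind of split I have in mind -- this is only a toy in
Python, with a made-up read_raw_hrng() standing in for whatever raw
interface the hardware might expose, not a description of what any
real kernel does:

import hashlib
import hmac

def read_raw_hrng(nbytes):
    # Hypothetical raw hardware interface; a placeholder for the
    # unwhitened samples we are asking Intel to expose.
    raise NotImplementedError

def mix_into_pool(pool, raw):
    # Fold raw (possibly biased, possibly sabotaged) samples into a
    # software pool with a hash, so a weak source cannot reduce the
    # entropy already collected from other sources.
    # e.g.: pool = mix_into_pool(pool, read_raw_hrng(32))
    return hashlib.sha256(pool + raw).digest()

def extract(pool, nbytes, label=b"output"):
    # Stretch the pool into output bytes without exposing pool state.
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hmac.new(pool, label + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:nbytes]

The point is only that the conditioning and stretching live in
auditable software, while the raw samples remain available for
testing.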

A far greater concern than non-Intel engineers being bad at building
a random number generator in software is that a fabrication flaw, a
post-manufacturing failure, or an intentional fabrication failure
induced by a paid agent would reduce the security of the system. It
is difficult to test such things as the system is constructed.

 #2 So that one thread cannot undermine a second thread by putting
 the DRNG into a broken mode. There is only one DRNG, not one per
 core or one per thread. Having one DRNG per thread would be one of
 the many preconditions necessary before this could be contemplated.

I think the same counterarguments hold. In any case, making it
impossible even for a privileged process like the kernel to test the
thing before returning it to its normal state seems like an
unfortunate choice.

 #3 Any method of access is going have to be documented and
 supported and maintained as a constant interface across many
 generations of chip. We don't throw that sort of thing into the PC
 architecture without a good reason.

There is, however, excellent reason here.

   #4 Obviously there are debug modes to access raw entropy source 
 output. The privilege required to access those modes is the same
 debug access necessary to undermine the security of the system.
 This only happens in very controlled circumstances.

Could you be more explicit about that?

Please note we are not asking this sort of thing out of malice. There
is now a document in wide circulation claiming multiple chip vendors
have had their crypto hardware compromised by intent.

Regardless of your own personal integrity, there are others inside
your organization that may very well be beneficiaries of the $250M a
year the NSA is now spending on undermining security. Indeed, were I
running that program, I would regard your group as a key target and
attempt to place someone inside it. Do you not agree that you're a
major vendor and that your hardware would be a very tempting target
for such a program, which we now know to exist?

 Access to the raw output would have been a massive can of worms.

And yet, you will note that many, many security types would prefer
raw output to a finished cryptographic random number source.

Intel could always provide a standard C routine to do the conversion
from the raw output into a suitable whitened and stretched output.
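
(Purely as a sketch of how small such a routine is -- in Python rather
than C, and not Intel's actual conditioning logic -- even the classic
von Neumann debiaser plus a hash fits in a few lines:

import hashlib

def von_neumann_debias(bits):
    # Look at non-overlapping pairs of raw bits: 01 -> 0, 10 -> 1,
    # discard 00 and 11.  Removes bias if the raw bits are independent.
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

def whiten_and_stretch(raw_bits, nbytes):
    # Debias, pack to bytes, then hash to condition further.
    bits = von_neumann_debias(raw_bits)
    packed = bytes(
        int("".join(str(b) for b in bits[i:i + 8]).ljust(8, "0"), 2)
        for i in range(0, len(bits), 8)
    )
    return hashlib.sha256(packed).digest()[:nbytes]

The details matter much less than who gets to inspect and choose them.)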

 The statistical properties of the entropy source are easy to model
 and easy to test online in hardware. They are described in the CRI
 paper if you want to read them.

But, forgive me for saying this, in an environment where the NSA
is spending $250M a year to undermine efforts like your own it is
impossible for third parties to trust black boxes any longer. I think
you may not have absorbed that what a week or two ago was a paranoid
fantasy turns out to be true.

Perry
-- 
Perry E. Metzger    pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] What TLS ciphersuites are still OK?

2013-09-09 Thread Hanno Böck
On Mon, 9 Sep 2013 17:29:24 +0100
Ben Laurie b...@links.org wrote:

 Perry asked me to summarise the status of TLS a while back ...
 luckily I don't have to because someone else has:
 
 http://tools.ietf.org/html/draft-sheffer-tls-bcp-00
 
 In short, I agree with that draft. And the brief summary is: there's
 only one ciphersuite left that's good, and unfortunately its only
 available in TLS 1.2:
 
 TLS_DHE_RSA_WITH_AES_128_GCM_SHA256

I don't really see from the document why the authors discourage
ECDHE-suites and AES-256. Both should be okay and we end up with four
suites:
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

Also, DHE should only be considered secure with a large enough modulus
(>=2048 bit). Apache hard-fixes this to 1024 bit and it's not
configurable. So an argument can even be made that ECDHE is more
secure - it doesn't have a widely deployed webserver using it in an
insecure way.


cu,
-- 
Hanno Böck
http://hboeck.de/

mail/jabber: ha...@hboeck.de
GPG: BBB51E42


signature.asc
Description: PGP signature
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] What TLS ciphersuites are still OK?

2013-09-09 Thread james hughes

On Sep 9, 2013, at 9:29 AM, Ben Laurie b...@links.org wrote:

 Perry asked me to summarise the status of TLS a while back ... luckily I 
 don't have to because someone else has:
 
 http://tools.ietf.org/html/draft-sheffer-tls-bcp-00
 
 In short, I agree with that draft. And the brief summary is: there's only one 
 ciphersuite left that's good, and unfortunately its only available in TLS 1.2:
 
 TLS_DHE_RSA_WITH_AES_128_GCM_SHA256

+1 

I have read the document and it does not mention key lengths. I would suggest 
that 2048 bit is large enough for the next ~5? years or so. 2048 bit for both 
D-H and RSA. How are the key lengths specified? 


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

[Cryptography] auditing a hardware RNG

2013-09-09 Thread John Denker
On 09/05/2013 05:11 PM, Perry E. Metzger wrote:

  A hardware generator can have
 horrible flaws that are hard to detect without a lot of data from many
 devices. 

Can you be more specific?  What flaws?

On 09/08/2013 08:42 PM, James A. Donald wrote:

 It is hard, perhaps impossible, to have test suite that makes sure
 that your entropy collection works.

Yes, it's impossible, but that's the answer to the wrong
question.  See below.

On 09/08/2013 01:51 PM, Perry E. Metzger wrote:

 I'll repeat the same observation I've made a lot: Dorothy Denning's
 description of the Clipper chip key insertion ceremony described the
 keys as being generated deterministically using an iterated block
 cipher. I can't find the reference, but I'm pretty sure that when she
 was asked why, the rationale was that an iterated block cipher can be
 audited, and a hardware randomness source cannot.

Let's assume she actually said that.

-- The fact that she said it does not make it true.  That is,
 the fact that she didn't know how to do the audit does not 
 mean it cannot be done.
-- We agree that her claim has been repeated a lot.  However,
 repetition does not make it true.

So, if anybody still wants to claim a HRNG cannot be audited,
we have to ask:
 *) How do you know?
 *) How sure are you?
 *) Have you tried?
 *) The last time you tried, what went wrong?

=

Just to remind everybody where I'm coming from, I have been saying
for many many years that mere /testing/ is nowhere near sufficient 
to validate a RNG (hardware or otherwise).  You are welcome to do 
as much testing as you like, provided you keep in mind Dijkstra's 
dictum:
   Testing can show the presence of bugs;
   testing can never show the absence of bugs.

As applied to the RNG problem:
   Testing can provide an upper bound on the entropy.
   What we need is a lower bound, which testing cannot provide.

If you want to know how much entropy there is in a given source, we
agree it would be hard to measure the entropy /directly/.  So, as
Henny Youngman would say:  Don't do that.  Instead, measure three
physical properties that are easy to measure to high accuracy, and 
then calculate the entropy via the second law of thermodynamics.
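
To give the flavor of that calculation -- with made-up but plausible
component values, not a reference design -- a sketch in Python:

import math

k_B = 1.380649e-23      # Boltzmann constant, J/K

def johnson_noise_vrms(R_ohms, T_kelvin, bandwidth_hz):
    # Johnson-Nyquist noise of a resistor: v_rms = sqrt(4 k T R B).
    # R, T and B are the three easily-measured physical properties.
    return math.sqrt(4 * k_B * T_kelvin * R_ohms * bandwidth_hz)

# Illustrative values only: a 100 kOhm resistor at room temperature,
# measured over a 1 MHz bandwidth.
v_rms = johnson_noise_vrms(100e3, 300.0, 1e6)     # about 40 microvolts

# If the ADC's least-significant bit is well below v_rms, each sample
# spans many ADC codes.  (This is only the crudest possible bound; the
# real accounting has to model the sampling and digitization process.)
lsb = 1e-6                                        # 1 uV per ADC step
levels = v_rms / lsb
print("v_rms = %.1f uV, roughly %.0f ADC steps (~%.1f bits per sample)"
      % (v_rms * 1e6, levels, math.log2(levels)))

The physics gives you the noise amplitude from first principles; the
audit question then reduces to whether the sampling circuit and the
downstream hash are doing what they claim.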

You can build a fine hardware RNG using
 a) A physical source such as a resistor.
 b) A hash function.
 c) Some software to glue it all together.

I rank these components according to likelihood of failure
(under attack or otherwise) as follows:
  (c) > (b) > (a).
That is to say, the hardware part of the hardware RNG is the
/last/ thing I would expect to exhibit an undetectable failure.
If you want the next level of detail:
 a1) Electronic components can fail, but this is very unlikely
  and an undetectable failure is even more unlikely.  The
  computer has billions of components, only a handful of which
  are in the entropy-collecting circuit.  Failures can be
  detected.
 a2) The correctness of the second law of thermodynamics is 
  very much better established than the correctness of any
  cryptologic hash.
 b) The hash in a HRNG is less likely to fail than the hash
  in a PRNG, because we are placing milder demands on it.
 c) The glue in the HRNG can be audited in the same way as 
  in any other random number generator.

Furthermore, every PRNG will fail miserably if you fail to
seed it properly.  This is a verrry common failure mode.
You /need/ some sort of HRNG for seeding.  Anybody who uses 
deterministic means to obtain random numbers is, of course, 
living in sin.  (John von Neumann)

  Tangential remark:  As for the Clipper key ceremony,
  that doesn't increase the credibility of anybody 
  involved.  I can think of vastly better ways of
  generating trusted, bias-proof, tamper-proof keys.

Bottom line: As H.E. Fosdick would say: 
  Just because you find somebody who doesn't know 
  how to do the audit doesn't mean it cannot be done.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-09 Thread Salz, Rich
➢  then maybe it's not such a silly accusation to think that root CAs are 
routinely distributed to multinational secret
➢  services to perform MITM session decryption on any form of communication 
that derives its security from the CA PKI.

How would this work, in practice?  How would knowing a CA's private key give 
them knowledge of my key?  Or if they issued a fake certificate and keypair, 
how does that help?  They'd also have to suborn DNS and IP traffic such that it 
would, perhaps eventually or perhaps quickly, become obvious.

What am I missing?

/r$
--  
Principal Security Engineer
Akamai Technology
Cambridge, MA



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] [cryptography] Random number generation influenced, HW RNG

2013-09-09 Thread James A. Donald

 would you care to explain the very strange design decision
 to whiten the numbers on chip, and not provide direct
 access to the raw unwhitened output.

On 2013-09-09 2:40 PM, David Johnston wrote:
 #1 So that that state remains secret from things trying to
 discern that state for purposes of predicting past or
 future outputs of the DRBG.

This assumes the DRBG is on chip, which it should not be.  It
should be in software.  Your argument is circular.  You are
arguing that the DRBG should be on chip because it is on
chip, that it has some of its menacing characteristics
because it has other menacing characteristics.

 #2 So that one thread cannot undermine a second thread by
 putting the DRNG into a broken mode. There is only one
 DRNG, not one per core or one per thread. Having one DRNG
 per thread would be one of the many preconditions necessary
 before this could be contemplated.

You repeat yourself.  Same circular argument repeated.

 #3 Any method of access is going have to be documented and
 supported and maintained as a constant interface across
 many generations of chip.

Why then throw in RDSEED?

You are already adding RDSEED to RDRAND, which fails to
address any of the complaints.  Why provide a DRNG in the
first place?

Answer:  It is a NIST design, not an Intel design.  Your design
documents reference NIST specifications. And we
already know that NIST designs are done with hostile intent.


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] how could ECC params be subverted other evidence

2013-09-09 Thread Perry E. Metzger
On Tue, 10 Sep 2013 00:23:51 +0200 Adam Back a...@cypherspace.org
wrote:
 On Mon, Sep 09, 2013 at 06:03:14PM -0400, Perry E. Metzger wrote:
 On Mon, 9 Sep 2013 14:07:58 +0300 Alexander Klimov wrote:
  No. They are widely used curves and thus a good way to reduce
  conspiracy theories that they were chosen in some malicious way
  to subvert DRBG.
 
 Er, don't we currently have documents from the New York Times and
 the Guardian that say that in fact they *did* subvert them?
 
 From what I could see it was more like people are taking more
 seriously the criticism that they could have subverted the curves
 because the published parameter generation seeds are big hex
 strings (rather than the typical literaly quote or digits of pi),
 and therefore there is no way to verify the parameters were chosen
 fairly.

The Times reported that a standard from about the right time period
that had been criticized in a 2007 paper by some researchers at
Microsoft (who reported a backdoor) had been subverted, and there had
been much internal congratulation in a memorandum. The only such
standard was apparently the one in question.

This is no longer speculation, we now know that they seem to have
done this.

This was only an example, the context in the Guardian and the Times
made it clear others are probably lurking.

As I've said before, a week ago I would have called the entire idea
paranoia. Now, the evidence has changed. When the facts change, I
change my mind.

 Relatedly it seems to me that backdooring is a tricky business,
 especially if you care about plausible deniability, and about
 actual security in the face of blackhats or other state actors who
 may rediscover the sabotaged parameters, design, code, master keys
 in the binary etc and exploit it rather than publish it and have it
 fixed by the vendor.

I think you're hardly the only person to note that this is a very
dangerous game they've played, in some cases literally endangering
people's lives.

 Presumably the reverse engineering deities are warming up their
 softICE to pore over the windows and other OS crypto code.

And, I would imagine, people are probably ripping apart popular
hardware crypto implementations, decapping the chips, and
photographing them as we speak. The memoranda spoke of hardware
crypto systems being subverted.

Perry
-- 
Perry E. Metzger    pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Thoughts about keys

2013-09-09 Thread Peter Fairbrother

On 09/09/13 13:08, Guido Witmond wrote:

Hi Perry,

I just came across your message [0] on retrieving the correct key for a
name. I believe that's called Squaring Zooko's Triangle.

I've come up with my ideas and protocol to address this need.
I call it eccentric-authentication. [1,2]

With Regards, Guido.



0: http://www.metzdowd.com/pipermail/cryptography/2013-August/016870.html

1:
http://eccentric-authentication.org/blog/2013/08/31/the-holy-grail-of-cryptography.html

2:
http://eccentric-authentication.org/eccentric-authentication/global_unique_secure.html


I like to look at it the other way round, retrieving the correct name 
for a key.


You don't give someone your name, you give them an 80-bit key 
fingerprint. It looks something like m-NN4H-JS7Y-OTRH-GIRN. The m- is 
common to all, it just says this is one of that sort of hash.


There is only one to remember, your own.

Then somebody uses the fingerprint in a semi-trusted (eg trusted not to 
give your email to spammers, but not trusted as far as giving the 
correct key goes) reverse lookup table, which is published and shared, 
and for which you write the entry and calculate the fingerprint by a 
deliberately long process, to add say 20 bits more work.
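
A sketch of what I mean, in Python -- the hash, the alphabet and the
work factor here are placeholders for illustration, not the actual
m-o-o-t design:

import base64
import hashlib

WORK_BITS = 20   # extra work factor: ~2^20 hash trials to build an entry

def fingerprint(entry_bytes):
    # Iterate until the trailing WORK_BITS of the digest are zero, so
    # creating a valid entry costs about 2^WORK_BITS hashes.  The
    # counter is published with the entry, so verification is one hash.
    counter = 0
    while True:
        digest = hashlib.sha256(entry_bytes +
                                counter.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") % (1 << WORK_BITS) == 0:
            break
        counter += 1
    # Take 80 bits of the digest and render it in base32 groups,
    # giving something shaped like m-NN4H-JS7Y-OTRH-GIRN.
    tag = base64.b32encode(digest[:10]).decode().rstrip("=")
    return counter, "m-" + "-".join(tag[i:i + 4]
                                    for i in range(0, len(tag), 4))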


Your entry would have your name, key, address, company, email address, 
twitter tag, facebook page, telephone number, photo, religious 
affiliation, claimed penis size, today's signed ephemeral DH or ECDHE 
keypart, and so on - whatever you want to put in it.


He then checks that you are someone he thinks you are, eg from the 
photo, checks the fingerprint, and if he wants to contact you he has 
already got your public key.


He cannot contact you without also getting your public key first - 
because you haven't given him your email address, just the hash.



[ That's what's planned for m-o-o-t (a CD-based live OS plus for 
secure-ish comms) anyway. As well, in m-o-o-t you can't contact anyone 
without checking the fingerprint, and you can't contact him in 
unencrypted form at all. Also the lookup uses a PIR system to avoid 
traffic analysis by lookup. It isn't available just now, so don't ask. ]



-- Peter Fairbrother
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-09 Thread Andreas Davour

 From: Eugen Leitl eu...@leitl.org

Forwarded with permission.
[snip]
 http://hack.org/mc/projects/btns/


So there *is* a BTNS implementation, after all. Albeit
only for OpenBSD -- but this means FreeBSD is next, and
Linux to follow.

I might add that as far as I know, this work has not been picked up yet by 
either FreeBSD or Linux, so if you feel like giving the project a hand 
pushing it into the mainstream, I'm pretty sure mc would be very happy. I.e. I 
don't think anything is following on from this work unless someone reading this 
helps make that happen. Personally I have neither the skills nor the contacts 
needed.


/andreas
--
My son has spoken the truth, 
and he has sacrificed more than either the president of the United 
States or Peter King have ever in their political careers or their 
American lives. So how they choose to characterize him really doesn't 
carry that much weight with me. -- Edward Snowden's Father 

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-09 Thread Salz, Rich
   *  NSA employees participated throughout, and occupied leadership roles
  in the committee and among the editors of the documents

 Slam dunk.  If the NSA had wanted it, they would have designed it themselves. 
  The only
 conclusion for their presence that is rational is to sabotage it [3].

No.  One mission of the NSA is to protect US government secrets. Since the 
government can no longer afford to specify its own security products all the 
time (or rather, now that the computer market has become commoditized), the NSA has 
an interest in making standard COTS products secure.

I do not know if the NSA worked to subvert IETF specifications, but 
participation isn't proof of it.

/r$

   Flaming Carrot!...  Do
you see Communists behind
every bush?
 No... but SOMETIMES they
  hide there.


--  
Principal Security Engineer
Akamai Technology
Cambridge, MA
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] A Likely Story!

2013-09-09 Thread Peter Fairbrother

On 09/09/13 12:53, Alexander Klimov wrote:

On Sun, 8 Sep 2013, Peter Fairbrother wrote:


You can use any one of trillions of different elliptic curves, which should be
chosen partly at random and partly so they are the right size and so on; but
you can also start with some randomly-chosen numbers then work out a curve
from those numbers. And you can use those random numbers to break the session
key setup.


Can you elaborate on how knowing the seed for curve generation can be
used to break the encryption? (BTW, the seeds for randomly generated
curves are actually published.)




Move along please, there is nothing to see here.

This is just a wild and disturbing story. It may upset you to read it, 
so please stop reading now.


You may have read a bit about the story in the papers or internet or 
elsewhere, but isn't actually true. Government Agencies do not try to 
break the internet's encryption, as used by Banks and Doctors and 
Commerce and Government Departments and even Government Agencies 
themselves - that wouldn't be sensible.


Besides which, there is no such agency as the NSA.


But ..

Take FIPS P-256 as an example. The only seed which has been published is 
s=  c49d3608 86e70493 6a6678e1 139d26b7 819f7e90 (the string they hashed 
and mashed in the process of deriving c).


I don't think they could reverse the perhaps rather overly-complicated 
hashing/mashing process, but they could certainly cherry-pick the s 
until they found one which gave a c which they could use.


c not being one of the usual parameters for an elliptic curve, I should 
explain that it was then used as c = a^3/b^2 mod p.


However the choice of p, r, a and G was not seeded, and the methods by 
which those were chosen are opaque.



I don't really know enough about ECC to say whether a perhaps 
cherry-picked c = a^3/b^2 mod p is enough to ensure that the resulting curve is 
secure against chosen curve attacks - but it does seem to me that there 
is a whole lot of legroom between a cherry-picked c and the final curve.




And as I said, it's only a story. We don't know much about what the NSA 
knows about chosen curve attacks, although we do know that they are 
possible. Don't go believing it, it will just upset you.


They wouldn't do that.


-- Peter Fairbrother

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Seed values for NIST curves

2013-09-09 Thread Tony Arcieri
On Mon, Sep 9, 2013 at 10:37 AM, Nemo n...@self-evident.org wrote:

 The approach appears to be an attempt at a nothing up my sleeve
 construction. Appendix A says how to start with a seed value and use SHA-1
 as a pseudo-random generator to produce candidate curves until a suitable
 one is found.


The question is... suitable for what? djb argues it could be used to find a
particularly weak curve, depending on what your goals are:

http://i.imgur.com/o6Y19uL.png

(originally from http://www.hyperelliptic.org/tanja/vortraege/20130531.pdf)

-- 
Tony Arcieri
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] [cryptography] Random number generation influenced, HW RNG

2013-09-09 Thread Owen Shepherd
 -Original Message-
 From: cryptography-bounces+owen.shepherd=e43...@metzdowd.com
 [mailto:cryptography-bounces+owen.shepherd=e43...@metzdowd.com]
 On Behalf Of David Johnston
 Sent: 09 September 2013 05:41
 To: cryptography@metzdowd.com
 Subject: Re: [Cryptography] [cryptography] Random number generation
 influenced, HW RNG
 
 #1 So that that state remains secret from things trying to discern that
state
 for purposes of predicting past or future outputs of the DRBG.
 
 #2 So that one thread cannot undermine a second thread by putting the
 DRNG into a broken mode. There is only one DRNG, not one per core or one
 per thread. Having one DRNG per thread would be one of the many
 preconditions necessary before this could be contemplated.
 
 #3 Any method of access is going have to be documented and supported and
 maintained as a constant interface across many generations of chip. We
don't
 throw that sort of thing into the PC architecture without a good reason.
 
   #4 Obviously there are debug modes to access raw entropy source output.
 The privilege required to access those modes is the same debug access
 necessary to undermine the security of the system. This only happens in
very
 controlled circumstances.

There are lots of aspects of IA-32/AMD64 which aren't consistent across
generations. The power management interface, for example, tends to get
somewhat infrequent backwards incompatible tweaks.

Fundamentally, I don't think anybody would have complained if you provided
some potentially non-stable method of /reading/ the RNG state; for example,
a bunch of MSRs (Hell, the potential instability is there in the name:
_Model_specific_, as much of a misnomer as that is for the majority of stuff
dumped into an MSR) which could read the state wouldn't be out of the
question.

Plus, there it is: the required security protections. The only pieces of
software which can read MSRs are the kernel and SMM. If either of those is
compromised, well, you're boned anyway.

Some way of reading the raw RNG output, and establishing that things are
working as they should? That would give a lot of confidence.

Also, you could have made rdrand set CF (or some other flag) if its state
could be predictable due to a recent read of the DRNG's state.


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Seed values for NIST curves

2013-09-09 Thread Nemo
I have been reading FIPS 186-3 (
http://csrc.nist.gov/publications/fips/fips186-3/fips_186-3.pdf) and 186-4 (
http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf), particularly
Appendix A describing the procedure for generating elliptic curves and
Appendix D specifying NIST's recommended curves.

The approach appears to be an attempt at a nothing up my sleeve
construction. Appendix A says how to start with a seed value and use SHA-1
as a pseudo-random generator to produce candidate curves until a suitable
one is found. Appendix D includes the seed value for each curve so that
anyone can verify they were generated according to the pseudo-random
process described in Appendix A.

Unless NSA can invert SHA-1, the argument goes, they cannot control the
final curves.
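
For concreteness, the shape of that argument -- heavily simplified,
with the real Appendix A arithmetic (expansion to the field size,
curve-validity checks, etc.) elided -- looks like this in Python:

import hashlib

def candidate_from_seed(seed: bytes) -> int:
    # Hash the seed to get a candidate curve parameter.  The verifier
    # can recompute this, so the generator cannot pick the parameter
    # directly.
    return int.from_bytes(hashlib.sha1(seed).digest(), "big")

def generate_curve(seed: bytes, is_suitable):
    # Iterate until a candidate satisfies the published criteria.
    counter = 0
    while True:
        cand = candidate_from_seed(seed + counter.to_bytes(4, "big"))
        if is_suitable(cand):
            return cand, counter
        counter += 1

The loop pins the final curve to the seed; the question below is what
pins the seed.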

However...

To my knowledge, most nothing up my sleeve constructions use clearly
non-random seed values. For example, MD5 uses the sines of consecutive
integers. SHA-1 uses sqrt(2), sqrt(3), and similar.

Using random seeds just makes it look like you wanted to try a few -- or
possibly a great many -- until the result had some undisclosed property you
wanted.

Question: Who chose the seeds for the NIST curves, and how do they claim
those seeds were chosen, exactly?

 - Nemo
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] [cryptography] SSH uses secp256/384r1 which has the same parameters as what's in SEC2 which are the same the parameters as specified in SP800-90 for Dual EC DRBG!

2013-09-09 Thread Perry E. Metzger
On Mon, 9 Sep 2013 14:07:58 +0300 Alexander Klimov
alser...@inbox.ru wrote:
 On Mon, 9 Sep 2013, Daniel wrote:
  Is there anyone on the lists qualified in ECC mathematics that can
  confirm that? 
 
 NIST SP 800-90A, Rev 1 says:
 
  The Dual_EC_DRBG requires the specifications of an elliptic curve
 and two points on the elliptic curve. One of the following NIST
 approved curves with associated points shall be used in
 applications requiring certification under [FIPS 140]. More details
 about these curves may be found in [FIPS 186], the Digital
 Signature Standard.
 
  And what ramifications it has, if any..
 
 No. They are widely used curves and thus a good way to reduce 
 conspiracy theories that they were chosen in some malicious way to 
 subvert DRBG.
 

Er, don't we currently have documents from the New York Times and the
Guardian that say that in fact they *did* subvert them?

Yes, a week ago this was paranoia, but now we have confirmation, so
it is no longer paranoia.

-- 
Perry E. Metzger    pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Some protection against replay attacks

2013-09-09 Thread Faré
Reading about several attacks based on partial message replay, I was
wondering if the following idea had any worth, or maybe was already
widely used (sorry, I'm way behind in the literature):

the actual symmetric key to be used to encrypt the payload is the
hash of the shared secret, the time, and other public data.

Optionally, other public data can include information identifying
the two parties, to make active attacks harder, as well as nonces sent
by either or both parties, and sequential numbers preventing reuse
within the window, etc.
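
Concretely, the derivation I have in mind is just this (a sketch; a
real design would use a proper KDF and an unambiguous encoding of the
fields rather than naive concatenation):

import hashlib
import time

def session_key(shared_secret: bytes, window_seconds: int = 300,
                party_a: bytes = b"", party_b: bytes = b"",
                nonce: bytes = b"") -> bytes:
    # Coarse time bucket: peers that agree on the current window derive
    # the same key; a message replayed from an old window derives a
    # different key and simply fails to decrypt/authenticate.
    bucket = int(time.time()) // window_seconds
    material = b"|".join([shared_secret,
                          bucket.to_bytes(8, "big"),
                          party_a, party_b, nonce])
    return hashlib.sha256(material).digest()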

This means that protocol attacks are now restricted to a smaller
window (say, TCP timeout of 5 minute), in either the time range that
active attacks can be conducted, or that the passive data can be
decrypted. i.e. that's automated rekeying, in a way that almost
guarantees the same key is never used twice.

Depending on the protocol, the server can be trusted to broadcast and
communicate its time with some coarse grain, and the client just uses
its NTP time as a guess. The server can accept the client's proposed
time if within an acceptable window, or override it with its own time,
which the client can reject if in paranoid mode — in which case there is
a DoS attack possible if NTP is subverted.

—♯ƒ • François-René ÐVB Rideau •ReflectionCybernethics• http://fare.tunes.org
Reason isn't about not having prejudices,
it's about having (appropriate) postjudices. — Faré
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] [cryptography] SSH uses secp256/384r1 which has the same parameters as what's in SEC2 which are the same the parameters as specified in SP800-90 for Dual EC DRBG!

2013-09-09 Thread Peter Fairbrother

On 09/09/13 23:03, Perry E. Metzger wrote:


On Mon, 9 Sep 2013, Daniel wrote:
[...] They are widely used curves and thus a good way to reduce
conspiracy theories that they were chosen in some malicious way to
subvert DRBG.



Er, don't we currently have documents from the New York Times and the
Guardian that say that in fact they *did* subvert them?

Yes, a week ago this was paranoia, but now we have confirmation, so
it is no longer paranoia.


I did not see that, and as far as I can tell there is no actual 
confirmation.



Also, the known possible subversion of DRBG did not involve curve 
selection, but selection of a point to be used in DRBG. I think Kristian 
G has posted about that.





As to elliptic curves, there are only two of significance, in terms of 
being widely used:  they are NIST P-256 and NIST P-384.


NIST P-224 is also occasionally used.

These are the same curves as the secp256/384r1 curves, and the same 
curves as almost any other 256-bit or 384-bit curves you might want to 
mention - eg the FIPS 186-3 curves, and so on.


These are all the same curves.

They all began in 1999 as the curves in the (NIST) RECOMMENDED ELLIPTIC 
CURVES FOR FEDERAL GOVERNMENT USE


csrc.nist.gov/groups/ST/toolkit/documents/dss/NISTReCur.pdf


The way they were selected is supposed to be pseudo-random based on 
SHA-1, though it's actually not quite like that (or not even close).


Full details, or at least all of the publicly available details about 
the curve selection process, are in the link, but as I wrote earlier:



Take FIPS P-256 as an example. The only seed which has been published 
is s=  c49d3608 86e70493 6a6678e1 139d26b7 819f7e90 (the string they 
hashed and mashed in the process of deriving c).


I don't think they could reverse the perhaps rather overly-complicated 
hashing/mashing process, but they could certainly cherry-pick the s 
until they found one which gave a c which they could use.


c not being one of the usual parameters for an elliptic curve, I should 
explain that it was then used as c = a^3/b^2 mod p.


However the choice of p, r, a and G was not seeded, and the methods by 
which those were chosen are opaque.


I don't really know enough about ECC to say whether a perhaps 
cherry-picked c = a^3/b^2 mod p is enough to ensure that the resulting 
curve is secure against chosen curve attacks - but it does seem to me 
that there is a whole lot of wiggle room between a cherry-picked c and 
the final curve.



-- Peter Fairbrother
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] [cryptography] SSH uses secp256/384r1 which has the same parameters as what's in SEC2 which are the same the parameters as specified in SP800-90 for Dual EC DRBG!

2013-09-09 Thread Perry E. Metzger
On Tue, 10 Sep 2013 00:25:20 +0100 Peter Fairbrother
zenadsl6...@zen.co.uk wrote:
 On 09/09/13 23:03, Perry E. Metzger wrote:
 
  On Mon, 9 Sep 2013, Daniel wrote:
  [...] They are widely used curves and thus a good way to reduce
  conspiracy theories that they were chosen in some malicious way
  to subvert DRBG.
 
  Er, don't we currently have documents from the New York Times and
  the Guardian that say that in fact they *did* subvert them?
 
  Yes, a week ago this was paranoia, but now we have confirmation,
  so it is no longer paranoia.
 
 I did not see that, and as far as I can tell there is no actual 
 confirmation.

Quoting:

   Cryptographers have long suspected that the agency planted
   vulnerabilities in a standard adopted in 2006 by the National
   Institute of Standards and Technology and later by the
   International Organization for Standardization, which has 163
   countries as members.

   Classified N.S.A. memos appear to confirm that the fatal weakness,
   discovered by two Microsoft cryptographers in 2007, was engineered
   by the agency. The N.S.A. wrote the standard and aggressively
   pushed it on the international group, privately calling the effort
   “a challenge in finesse.”

http://www.nytimes.com/2013/09/06/us/nsa-foils-much-internet-encryption.html?pagewanted=all

This has generally been accepted to only match the NIST ECC RNG
standard, i.e. Dual_EC_DRBG, with the critique in question being
On the Possibility of a Back Door in the NIST SP800-90 Dual Ec Prng
which may be found here: http://rump2007.cr.yp.to/15-shumow.pdf

Do you have an alternative theory?

Perry
-- 
Perry E. Metzger    pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] What TLS ciphersuites are still OK?

2013-09-09 Thread Stephen Farrell

Hi Ben,

On 09/09/2013 05:29 PM, Ben Laurie wrote:
 Perry asked me to summarise the status of TLS a while back ... luckily I
 don't have to because someone else has:
 
 http://tools.ietf.org/html/draft-sheffer-tls-bcp-00
 
 In short, I agree with that draft. And the brief summary is: there's only
 one ciphersuite left that's good, and unfortunately its only available in
 TLS 1.2:
 
 TLS_DHE_RSA_WITH_AES_128_GCM_SHA256

I don't agree the draft says that at all. It recommends using
the above ciphersuite. (Which seems like a good recommendation
to me.) It does not say anything much, good or bad, about any
other ciphersuite.

Claiming that all the rest are no good also seems overblown, if
that's what you meant.

S.


 
 
 
 ___
 The cryptography mailing list
 cryptography@metzdowd.com
 http://www.metzdowd.com/mailman/listinfo/cryptography
 
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Points of compromise

2013-09-09 Thread John Gilmore
Phillip Hallam-Baker hal...@gmail.com wrote:
 5) Protocol vulnerability that IETF might have fixed but was discouraged
 from fixing.

By the way, it was a very interesting exercise to actually write out
on graph paper the bytes that would be sent in a TLS exchange.  I did
this with Paul Wouters while working on how to embed raw keys in TLS
(that would be authenticated from outside TLS, such as via DNSSEC).

Or, print out a captured TLS packet exchange, and try to sketch around
it what each bit/byte is for.  The TLS RFCs, unlike most Jon Postel
style RFCs, never show you the bytes -- they use a high level
description with separate rules for encoding those descriptions on
the wire.

There is a LOT of known plaintext in every exchange!
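
To make that concrete: before any application data flows, an
eavesdropper already knows the outer framing byte for byte.  A sketch
(Python; TLS 1.0 record version shown, only the length fields vary):

import struct

def client_hello_prefix(handshake_len: int) -> bytes:
    # TLS record header: content type 0x16 (handshake), version 3.1,
    # 2-byte record length -- followed by handshake type 0x01
    # (ClientHello) and a 3-byte handshake length.  Every one of these
    # bytes is predictable before the connection is even opened.
    record = struct.pack("!BHH", 0x16, 0x0301, handshake_len + 4)
    handshake = bytes([0x01]) + handshake_len.to_bytes(3, "big")
    return record + handshake

print(client_hello_prefix(200).hex())   # 16030100cc010000c8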

Known plaintext isn't the end of the world.  But it makes a great crib
for cryptanalysts who have some other angle to attack the system with.
Systems with more known plaintext are easier to exploit than those
with less.  Is that why TLS has more known plaintext than average?
Only the NSA knows for sure.

John


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Random number generation influenced, HW RNG

2013-09-09 Thread John Kelsey
On Sep 9, 2013, at 6:32 PM, Perry E. Metzger pe...@piermont.com wrote:

 First, David, thank you for participating in this discussion.
 
 To orient people, we're talking about whether Intel's on-chip
 hardware RNGs should allow programmers access to the raw HRNG output,
 both for validation purposes to make sure the whole system is working
 correctly, and if they would prefer to do their own whitening and
 stretching of the output.

Giving raw access to the noise source outputs lets you test the source from the 
outside, and there is a lot to be said for it.  But I am not sure how much it 
helps against tampered chips.  If I can tamper with the noise source in 
hardware to make it predictable, it seems like I should also be able to make it 
simulate the expected behavior.  I expect this is more complicated than, say, 
breaking the noise source and the internal testing mechanisms so that the RNG 
outputs a predictable output stream, but I am not sure it is all that much more 
complicated.  How expensive is a lightweight stream cipher keyed off the time 
and the CPU serial number or some such thing to generate pseudorandom bits?  
How much more to go from that to a simulation of the expected behavior, perhaps 
based on the same circuitry used in the unhacked version to test the noise 
source outputs?  
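
To put a rough number on "how expensive": the stand-in is tiny.  A
sketch in Python, with SHA-256 in counter mode playing the role of the
lightweight cipher and the key material obviously invented:

import hashlib

def fake_noise_source(serial_number: bytes, boot_time: int):
    # A sabotaged "noise source": output is fully determined by the
    # chip serial and a timestamp, yet it passes black-box statistical
    # tests exactly as real noise would.
    key = hashlib.sha256(serial_number +
                         boot_time.to_bytes(8, "big")).digest()
    counter = 0
    while True:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
        for byte in block:
            yield byte

gen = fake_noise_source(b"CPU-0123456789", 1378700000)
sample = bytes(next(gen) for _ in range(32))

In hardware the analogous circuit would presumably be small next to
the rest of the RNG block.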

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] What TLS ciphersuites are still OK?

2013-09-09 Thread james hughes

On Sep 9, 2013, at 2:49 PM, Stephen Farrell stephen.farr...@cs.tcd.ie wrote:

 On 09/09/2013 05:29 PM, Ben Laurie wrote:
 Perry asked me to summarise the status of TLS a while back ... luckily I
 don't have to because someone else has:
 
 http://tools.ietf.org/html/draft-sheffer-tls-bcp-00
 
 In short, I agree with that draft. And the brief summary is: there's only
 one ciphersuite left that's good, and unfortunately its only available in
 TLS 1.2:
 
 TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
 
 I don't agree the draft says that at all. It recommends using
 the above ciphersuite. (Which seems like a good recommendation
 to me.) It does not say anything much, good or bad, about any
 other ciphersuite.
 
 Claiming that all the rest are no good also seems overblown, if
 that's what you meant.


I retract my previous +1 for this ciphersuite. In practice this means hard-coded 
1024-bit DHE and 1024-bit RSA. 

From 
http://en.wikipedia.org/wiki/Key_size
 As of 2003 RSA Security claims that 1024-bit RSA keys are equivalent in 
 strength to 80-bit symmetric keys

80 bit strength. Hard coded key sizes. Nice. 

AES 128 with a key exchange of 80 bits. What's a factor of 2^48 among friends…. 
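
(Those symmetric-equivalent figures come from the cost of the general
number field sieve.  A back-of-the-envelope version in Python, using
the bare asymptotic L-function with the o(1) term dropped, lands a few
bits above the usual quoted 80/112/128:

import math

def gnfs_bits(modulus_bits):
    # GNFS work factor L_n[1/3, (64/9)^(1/3)], expressed in bits.
    ln_n = modulus_bits * math.log(2)
    c = (64.0 / 9.0) ** (1.0 / 3.0)
    work = c * ln_n ** (1.0 / 3.0) * math.log(ln_n) ** (2.0 / 3.0)
    return work / math.log(2)

for k in (1024, 2048, 3072):
    print(k, round(gnfs_bits(k)))   # roughly 87, 117, 139

Either way, a 1024-bit group is the weakest link under an AES-128
suite.)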

additionally, as predicted in 2003… 
 1024-bit keys are likely to become crackable some time between 2006 and 2010 
 and that
 2048-bit keys are sufficient until 2030.
 3072 bits should be used if security is required beyond 2030

They were off by 3 years.

What now? 
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography