Re: [Cryptography] Der Spiegel: "NSA Can Spy on Smart Phone Data"

2013-09-08 Thread Christian Huitema

> Apparently this was just a "teaser" article.  The following is apparently the
> full story:  http://cryptome.org/2013/09/nsa-smartphones.pdf  I can't tell
> for sure - it's the German original, and my German is non-existent.

The high-level summary is that phones contain a great deal of interesting
information, that they can target iPhone and Android phones, and that after
some pretty long efforts they can hack the BlackBerry too. Bottom line: get a
Windows Phone...

-- Christian Huitema

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread Jerry Leichter
On Sep 8, 2013, at 9:15 PM, Perry E. Metzger wrote:
>> I don't see the big worry about how hard it is to generate random 
>> numbers unless:
> 
> Lenstra, Heninger and others have both shown mass breaks of keys based
> on random number generator flaws in the field. Random number
> generators have been the source of a huge number of breaks over time.
> 
> Perhaps you don't see the big worry, but real world experience says
> it is something everyone else should worry about anyway.
Which brings to light the question: just *why* have so many random number
generators proved to be so weak?  If we knew the past trouble spots, we
could try to avoid them, or at least pay special attention to them during
reviews, in the future.

I'm going entirely off memory here (a better, more data-driven approach
might be worth doing), but I can think of three broad classes of root causes
of past breaks:

1.  The designers just plain didn't understand the problem and used some 
obvious - and, in retrospect, obviously wrong - technique.  (For example, they 
didn't understand the concept of entropy and simply fed a low-entropy source 
into a whitener of some kind - often MD5 or SHA-1.  The result can *look* 
impressively random, but is cryptographically worthless.)

2.  The entropy available from the sources used was much less, at least in some
circumstances (e.g., at startup), than the designers assumed.

3.  The code used in good random sources can look "strange" to programmers not 
familiar with it, and may even look buggy.  Sometimes good generators get 
ruined by later programmers who "clean up the code".
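Class 1 above can be made concrete with a small sketch (the 16-bit seed and function name are hypothetical): SHA-256 "whitening" makes the output look impressively random, yet an attacker enumerates the entire seed space almost instantly.

```python
import hashlib

def weak_keygen(seed: int) -> bytes:
    # "Whitening" a low-entropy seed: the output looks impressively
    # random, but carries no more entropy than the seed did.
    return hashlib.sha256(seed.to_bytes(4, "big")).digest()

# Hypothetical flawed design: the only entropy source is a 16-bit value.
key = weak_keygen(12345)

# The attacker simply enumerates all 65,536 possible seeds.
recovered = next(s for s in range(2**16) if weak_keygen(s) == key)
assert recovered == 12345
```

The 32 bytes of output pass any glance test; only the search space matters.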

-- Jerry




Re: [Cryptography] Usage models (was Re: In the face of "cooperative" end-points, PFS doesn't help)

2013-09-08 Thread Peter Saint-Andre

On 9/8/13 1:51 PM, Perry E. Metzger wrote:
> On Sun, 8 Sep 2013 14:50:07 -0400 Jerry Leichter
>  wrote:
>> Even for one-to-one discussions, these days, people want 
>> transparent movement across their hardware.  If I'm in a chat 
>> session on my laptop and leave the house, I'd like to be able to 
>> continue on my phone.  How do I hand off the conversation - and
>> the keys?
> 
> I wrote about this a couple of weeks ago, see:
> 
> http://www.metzdowd.com/pipermail/cryptography/2013-August/016872.html
>
>  In summary, it would appear that the most viable solution is to
> make the end-to-end encryption endpoint a piece of hardware the
> user owns (say the oft mentioned $50 Raspberry Pi class machine on
> their home net) and let the user interact with it over an encrypted
> connection (say running a normal protocol like Jabber client to
> server protocol over TLS, or IMAP over TLS, or https: and a web
> client.)

Yes, that is a possibility. Personally I'm still mulling over whether
we'd want your little home device to be a Jabber server (typically
requiring a stable IP address or an FQDN), a standard Jabber client
connected to some other server (which might be a personal server at
your VPS or a small-scale server for friends and family), or something
outside of XMPP entirely that merely advertises its reachability via
some other protocol over Jabber (in its vCard or presence information).

> It is a compromise, but one that fits with the usage pattern
> almost everyone has gotten used to. It cannot be done with the
> existing cloud model, though -- the user needs to own the box or we
> can't simultaneously maintain current protocols (and thus current
> clients) and current usage patterns.

I very much agree.

Peter

-- 
Peter Saint-Andre
https://stpeter.im/




Re: [Cryptography] Suite B after today's news

2013-09-08 Thread Peter Gutmann
Ralph Holz  writes:

>I've followed that list for a while. What I find weird is that there should
>be much dissent at all. This is about increasing security based on adding
>quite well-understood mechanisms. What's to be so opposed to there?

There wasn't really much dissent (there was some discussion, both on and off-
list, which I've tried to address in updates of the draft), it's just that the
WG chairs don't seem to want to move on it.

>Does adding some ciphersuites really require an extension, maybe even on the
>Standards Track? I shouldn't think so, looking at the RFCs that already do
>this, e.g. RFC 5289 for AES-GCM. Just go for an Informational. FWIW, even
>HTTPS is Informational.

I've heard from implementers at Large Organisations that having it
non-standards-track makes it hard to get it adopted there.  I guess I could
go for Informational if all else fails.

>I don't think it hurts to let users and operators vote with their feet here.

That's what's already happened/happening; the problem is that without an RFC
to nail down at least the extension ID, it's a bit hard for commercial
vendors to commit to it.

Peter.


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Peter Saint-Andre

On 9/7/13 9:06 PM, Christian Huitema wrote:
>> Pairwise shared secrets are just about the only thing that
>> scales worse than public key distribution by way of PGP key
>> fingerprints on business cards.  The equivalent of CAs in an
>> all-symmetric world is KDCs.  Instead of having the power to
>> enable an active attack on you today, KDCs have the power to
>> enable a passive attack on you forever.  If we want secure crypto
>> that can be used by everyone, with minimal trust, public key is
>> the only way to do it.
> 
> I am certainly not going to advocate Internet-scale KDC. But what
> if the application does not need to scale more than a "network of 
> friends?"

A thousand times yes.

One doesn't need to communicate with several billion people, and we
don't need systems that scale up that high. Most folks just want to
interact (chat, share photos, voice/video conference, etc.) with their
friends, family, and colleagues -- maybe 50-500 people. IMHO that is the
only scale we need to support for secure communication. (I'm talking
about individual communication, not enterprise stuff.)

What about talking with someone new? Well, we can design separate
protocols that enable you to be introduced to someone you haven't
communicated with before (we already do that with things like FOAF,
LinkedIn, Facebook). Part of that introduction might involve learning
the new person's public key from someone you already trust (no need
for Internet-scale certificate authorities). You could use that public
key for bootstrapping the pairwise shared secrets.
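A toy sketch of that bootstrapping step, under loud assumptions: the 32-bit prime is hopelessly small and purely illustrative (a real system would use X25519 or another vetted group), and the names are hypothetical. It shows how a public key learned through a trusted introduction yields a pairwise secret with no certificate authority involved.

```python
import hashlib
import secrets

# Toy finite-field Diffie-Hellman; insecure demo parameters only.
P = 4294967291  # 2**32 - 5, a prime (far too small for real use)
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# A mutual friend you already trust introduces Bob's public key to
# Alice (and vice versa); no certificate authority involved.
alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Each side derives the same pairwise secret from the other's public key.
alice_secret = hashlib.sha256(str(pow(bob_pub, alice_priv, P)).encode()).digest()
bob_secret = hashlib.sha256(str(pow(alice_pub, bob_priv, P)).encode()).digest()
assert alice_secret == bob_secret
```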

Another attractive aspect of a network of friends is that it can be
used for mix networking (route messages through your friends) and for
things like less-than-completely-public media relays and data proxies
for voice, video, file transfer, etc. And such relays might just live
on those little home devices that Perry is talking about, separate
from the cloud.

Peter

-- 
Peter Saint-Andre
https://stpeter.im/




Re: [Cryptography] Der Spiegel: "NSA Can Spy on Smart Phone Data"

2013-09-08 Thread Jerry Leichter
Apparently this was just a "teaser" article.  The following is apparently the 
full story:  http://cryptome.org/2013/09/nsa-smartphones.pdf  I can't tell for 
sure - it's the German original, and my German is non-existent.

-- Jerry



Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread Kent Borg

On 09/08/2013 09:15 PM, Perry E. Metzger wrote:
> Perhaps you don't see the big worry, but real world experience says it
> is something everyone else should worry about anyway.


I overstated it.

Good random numbers are crucial, and as with any cryptography, exact
details matter.  Programmers are constantly making embarrassing
mistakes.  (The recent Android RNG bug - was that Sun, Oracle, or Google?)


But there is no special reason to worry about corrupted HW RNGs, because
one should not be using them as-is; there are better ways to get good
random data, ways not obvious to a naive civilian but still well known.


Snowden reassured us when he said that good cryptography is still good
cryptography.  If that includes both hashes and ciphers, then the
fundamental components of sensible hybrid RNGs are sound.


Much more worrisome is whether Manchurian Circuits have been added to 
any hardware, no matter its admitted purpose, just waiting to be activated.


-kb



Re: [Cryptography] In the face of "cooperative" end-points, PFS doesn't help

2013-09-08 Thread Max Kington
This space is of particular interest to me.  I implemented just one of
these and published the protocol (rather than pimp my blog: if anyone wants
to read the protocol description, feel free to email me and I'll send you a
link).

The system itself was built around a fairly simple PKI which then allowed
people to build end-to-end channels.  You hit the nail on the head though:
control of the keys.  If you can game the PKI you can replace someone's
public key and execute a MITM attack.  The approach I took was that the PKI
publishes people's public keys but then allows other users to verify your
public key.  A MITM attack is possible, but as soon as your public key is
rotated this is detected and the client itself asks if you'd like to verify
it out of band (this was for mobile devices, so it lends itself to having
other channels to check keys via, like phoning your friend and asking them).

The much more likely scenario is someone trying to mount a MITM attack
against just a particular user, but as the channels are tunnelled end to
end, the attacker essentially needs to ask the PKI to publish two duff keys,
one in each direction: Alice's key as far as Bob is concerned and Bob's key
as far as Alice is concerned.  In turn, the two people whose traffic the
attacker is trying to obtain can ask someone else to double-check their
keys.  It means that you need to publish an entirely fake PKI directory to
just two users.  The idea was that alarm bells go off when it transpires
that every person you want to get a proxy verification of a public key from
has 'all of a sudden' changed their public key too.  It's a hybrid model: a
PKI to make life easy for users to bootstrap, but one that uses a web of
trust to detect when the PKI (or your local directory) has been attacked.

Relationships become 'public' knowledge, at least insofar as you ask others
in your address book to verify people's public keys (albeit via UUIDs; you
could still find out that your mate Bill had 'John's' public key in his
address book because he asked you to verify it for him).  So for those who
want to protect the conversational metadata, it's already orthogonal to
that.
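A minimal sketch of the detection step described above, assuming a trust-on-first-use cache of key fingerprints (the user names and function names are hypothetical):

```python
import hashlib

def fingerprint(pubkey: bytes) -> str:
    # Short hash of the published key, cheap to compare out of band.
    return hashlib.sha256(pubkey).hexdigest()[:16]

# Client-side cache of fingerprints already seen (illustrative data).
cache = {"bob": fingerprint(b"bob-key-v1")}

def check_key(user: str, pubkey_from_pki: bytes) -> bool:
    """True if the PKI's answer matches what we pinned earlier; a
    mismatch should trigger out-of-band verification, not a silent
    update."""
    fp = fingerprint(pubkey_from_pki)
    if user not in cache:
        cache[user] = fp  # first contact: pin it
        return True
    return cache[user] == fp

assert check_key("bob", b"bob-key-v1")        # unchanged key: fine
assert not check_key("bob", b"bob-key-MITM")  # rotated key: verify it
```

The cross-verification via friends then amounts to running the same comparison against fingerprints relayed by third parties.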

Group chat semantics are quite feasible in that all users are peers, but you
run into difficulty when it comes to signing your own messages - not that
you can't sign them, but that's computationally expensive and it eats
battery life.  Again, you are right though: what do you want to achieve?

I certainly built a protocol that answered the main questions I was asking!

As for multiple devices, the trick was always usability.  How do you
securely move an identity token of some description from one node to
another?  I settled on every device having its own key pair, but you still
need an 'owning' identity and a way to 'enrol' a new key pair, because if
that got broken the attacker just enrols their own 'device'
surreptitiously.  You then get into the realms of passwords through salted
hashing algorithms, but then you're back to the security of a password being
brute-forced.  If you were really paranoid, I proposed a smart card
mechanism, but I've yet to implement that (how closed a world are smart
cards with decent protection specifications?! but that's another
conversation), the idea being that you decrypt your device key pair using
the smart card and ditch the smart card if needs be, through a typical
office shredder.

Silent Circle was one of the most analogous systems, but I'm an amateur
compared to those chaps.  As interesting as it was to build, it kept
boiling down to one thing: assuming I'd done a good job, all I had done was
shift the target from the protocol to the device.

If I really wanted to get the data I'd attack the onscreen software
keyboard and leave everything else alone.

Max


On Sun, Sep 8, 2013 at 7:50 PM, Jerry Leichter  wrote:

> On Sep 7, 2013, at 11:16 PM, Marcus D. Leech wrote:
>> Jeff Schiller pointed out a little while ago that the crypto-engineering
>> community have largely failed to make end-to-end encryption easy to use.
>> There are reasons for that, some technical, some political, but it is
>> absolutely true that end-to-end encryption, for those cases where "end to
>> end" is the obvious and natural model, has not significantly materialized
>> on the Internet.  Relatively speaking, a handful of crypto-nerds use
>> end-to-end schemes for e-mail and chat clients, and so on, but the vast
>> majority of the Internet user-space?  Not so much.
> I agree, but the situation is complicated.  Consider chat.  If it's
> one-to-one, end-to-end encryption is pretty simple and could be made simple
> to use; but people also want chat rooms, which are a much more
> complicated key management problem - unless you let the server do the
> encryption.  Do you enable it only for one-to-one conversations?  Provide
> different interfaces for one-to-one and chat room discussions?
>
> Even for one-to-one discussions, these days, people want transparent
> movement across their hardware.

Re: [Cryptography] In the face of "cooperative" end-points, PFS doesn't help

2013-09-08 Thread Jerry Leichter
On Sep 8, 2013, at 7:16 PM, james hughes wrote:
> Let me suggest the following. 
> 
> With RSA, a single quiet "donation" by the site and it's done. The situation 
> becomes totally passive and there is no possibility of knowing what has been
> read.  The system administrator could even do this without the executives 
> knowing.
An additional helper:  Re-keying.  Suppose you send out a new public key, 
signed with your old one, once a week.  Keep the chain of replacements posted 
publicly so that someone who hasn't connected to you in a while can confirm the 
entire sequence from the last public key he knew to the current one.  If 
someone sends you a message with an invalid key (whether it was ever actually 
valid or not - it makes no difference), you just send them an update.

An attacker *could* send out a fake update with your signature, but that would
be detected almost immediately.  So a one-time "donation" is now good for a
week.  Sure, the leaker can keep leaking - but the cost is now considerably
greater, and ongoing.
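The chain-confirmation step can be sketched as follows; HMAC stands in for a real public-key signature scheme purely to keep the example self-contained (a deployment would sign with e.g. Ed25519), and the key values are illustrative. The walk from the last known key to the current one is the same either way.

```python
import hashlib
import hmac

# HMAC as a stand-in "signature" under the signer's current key.
def sign(signing_key: bytes, msg: bytes) -> bytes:
    return hmac.new(signing_key, msg, hashlib.sha256).digest()

# Weekly re-keying: each new key is signed with its predecessor, and
# the whole chain of replacements stays posted publicly.
keys = [b"key-week-%d" % i for i in range(4)]
chain = [(keys[i + 1], sign(keys[i], keys[i + 1])) for i in range(3)]

def verify_chain(last_known, chain):
    """Walk from the last key a correspondent saw to the current one,
    checking each replacement against the key that signed it."""
    for new_key, sig in chain:
        if not hmac.compare_digest(sign(last_known, new_key), sig):
            return None  # broken link: refuse the update
        last_known = new_key
    return last_known

assert verify_chain(keys[0], chain) == keys[3]
```

A correspondent who last saw week 0's key validates weeks 1 through 3 in one pass; any forged link in the chain fails the check.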
-- Jerry



Re: [Cryptography] In the face of "cooperative" end-points, PFS doesn't help

2013-09-08 Thread Anne & Lynn Wheeler

Note that when the router Hughes references was first introduced in an IETF
gateway committee meeting as a VPN, it caused lots of turmoil in the IPSEC
camp as well as with the other router vendors. The other router vendors went
into standards stall mode ... their problem was that none of them had a
product with processors capable of handling the crypto processing. A month
after the IETF meeting, one of the vendors announced what was supposedly an
equivalent product ... but it was actually their standard product (w/o
crypto) packaged with hardware link encryptors (which needed dedicated links
instead of being able to tunnel through the internet).

The IPSEC camp whined a lot but eventually settled for referring to it as
"lightweight" IPSEC (possibly trying to imply it didn't have equivalent
crypto).

As to DNSSEC ... the simple scenario is requiring domain owners to register a
public key, after which all future communication is digitally signed and
authenticated with the on-file, registered public key (as a countermeasure to
domain name take-over, which affects the integrity of the domain name
infrastructure and propagates to SSL CA vendors if they can't trust who the
true owner is). Then the SSL CA vendors can also start requiring that SSL
certificate requests be digitally signed ... which can also be
authenticated by retrieving the on-file public key (turning an expensive,
error-prone and time-consuming identification process into a reliable and
simple authentication process). The catch-22 is that once public keys can be
retrieved in realtime ... others can start doing it also ... going a long way
towards eliminating the need for SSL certificates. Have an option to
piggy-back the public key in the same response with the ip-address. Then do
SSL-lite ... XTP had reliable communication with a minimum 3-packet
exchange ... compared to TCP requiring a minimum 7-packet exchange.

In the key escrow meetings, I lobbied hard that divulging/sharing
authentication keys was a violation of fundamental security principles. Other
parties at the key escrow meetings whined that people could cheat and use
authentication keys for encryption. However, there was a commercial "no
single point of failure" business case for replicating keys used in
encrypting data-at-rest corporate assets.

One might hypothesize that some of the current DNSSEC complexity is FUD ...
unable to kill it, make it as unusable as possible.

disclaimer: the person responsible for the original DNS worked at the science
center in the early 70s when he was at MIT.

--
virtualization experience starting Jan1968, online at home since Mar1970


Re: [Cryptography] Market demands for security (was Re: Opening Discussion: Speculation on "BULLRUN")

2013-09-08 Thread James A. Donald

On 2013-09-09 6:08 AM, John Kelsey wrote:

> a.  Things that just barely work, like standards groups, must in general be
> easier to sabotage in subtle ways than things that click along with great
> efficiency.  But they are also things that often fail with no help at all
> from anyone, so it's hard to tell.
>
> b.  There really are tradeoffs between security and almost everything else.
> If you start suspecting conspiracy every time someone is reluctant to make
> that tradeoff in the direction you prefer, you are going to spend your
> career suspecting everyone everywhere of being anti-security.  This is
> likely to be about as productive as going around suspecting everyone of
> being a secret communist or racist or something.

Poor analogy.

Everyone is a racist, and most people lie about it.

Everyone is a communist in the sense of being unduly influenced by 
Marxist ideas, and those few of us that know it have to make a conscious 
effort to see the world straight, to recollect that some of our supposed 
knowledge of the world has been contaminated by widespread falsehood.


The Climategate files revealed that official science /is/ in large part 
a big conspiracy against the truth.


And Snowden's files seem to indicate that all relevant groups are 
infiltrated by people hostile to security.




Re: [Cryptography] Impossible trapdoor systems (was Re: Opening Discussion: Speculation on "BULLRUN")

2013-09-08 Thread James A. Donald

On 2013-09-09 4:49 AM, Perry E. Metzger wrote:

> Your magic key must then take any block of N bits and magically
> produce the corresponding plaintext when any given ciphertext
> might correspond to many, many different plaintexts depending
> on the key. That's clearly not something you can do.


Suppose that the mappings from 2^N plaintexts to 2^N ciphertexts are not
random, but rather orderly, so that given one element of the map, one
can predict all the other elements of the map.


Suppose, for example, the effect of encryption was to map a 128-bit block
to a group, map the key to the group, add the key to the block, and map
back.  To someone who knows the group and the mapping, it is merely a
heavily obfuscated 128-bit Caesar cipher.


No magic key.
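A toy 8-bit version of that construction, with a hypothetical affine map playing the role of the obfuscating group mapping: to anyone who knows the structure, a single known plaintext/ciphertext pair reveals the key, exactly as with a Caesar cipher.

```python
# Toy 8-bit "cipher" whose mapping is orderly: encryption is addition
# in a hidden group, wrapped in a fixed obfuscating permutation.
M = 256
PERM = [(7 * i + 3) % M for i in range(M)]  # gcd(7, 256) = 1, so a bijection
PERM_INV = [0] * M
for i, v in enumerate(PERM):
    PERM_INV[v] = i

def enc(x: int, k: int) -> int:
    return PERM[(PERM_INV[x] + k) % M]

def dec(y: int, k: int) -> int:
    return PERM[(PERM_INV[y] - k) % M]

# One known plaintext/ciphertext pair recovers the key.
k = 117
pt, ct = 42, enc(42, k)
recovered_k = (PERM_INV[ct] - PERM_INV[pt]) % M
assert recovered_k == k
assert dec(ct, recovered_k) == pt
```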




Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread James A. Donald

On 2013-09-09 11:15 AM, Perry E. Metzger wrote:

> Lenstra, Heninger and others have both shown mass breaks of keys based
> on random number generator flaws in the field. Random number
> generators have been the source of a huge number of breaks over time.
>
> Perhaps you don't see the big worry, but real world experience says
> it is something everyone else should worry about anyway.


Real world experience is that there is nothing to worry about /if you do 
it right/.  And that it is frequently not done right.


When you screw up AES or such, your test vectors fail, your unit test
fails, so you fix it; whereas if you screw up entropy, everything
appears to work fine.
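The Lenstra/Heninger mass breaks quoted above show how that invisible failure surfaces later: correlated entropy at key-generation time yields RSA moduli sharing a prime factor, which anyone can recover with a single gcd. A toy illustration with deliberately tiny primes:

```python
import math

# Tiny primes stand in for 512-bit ones; the arithmetic is identical.
p, q1, q2 = 1000003, 1000033, 1000037
n1, n2 = p * q1, p * q2  # two "independent" moduli sharing a factor

# Anyone can recover the common prime in polynomial time.
shared = math.gcd(n1, n2)
assert shared == p
# Both private keys now follow from q1 = n1 // p and q2 = n2 // p.
```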


It is hard, perhaps impossible, to have a test suite that makes sure that
your entropy collection works.


One can, however, have a test suite that ascertains that on any two runs 
of the program, most items collected for entropy are different except 
for those that are expected to be the same, and that on any run, any 
item collected for entropy does make a difference.


Does your unit test check your entropy collection?
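The cross-run check described above can be sketched as follows; the collector and its two items are hypothetical, but the shape of the test (items expected to vary must actually differ between runs) is the point.

```python
import os
import time

def collect_entropy_items():
    # Hypothetical collector: each item is a candidate entropy source.
    return {
        "clock": time.perf_counter_ns(),
        "osrandom": os.urandom(16).hex(),
    }

# Cross-run check: every item expected to vary must actually differ
# between two runs, so a stuck source fails loudly instead of silently.
run1, run2 = collect_entropy_items(), collect_entropy_items()
for name in run1:
    assert run1[name] != run2[name], f"entropy source {name!r} is stuck"
```

A fuller version would also run the items that are expected to stay the same and assert that they did, and spawn separate processes rather than calling the collector twice in one run.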


Re: [Cryptography] In the face of "cooperative" end-points, PFS doesn't help

2013-09-08 Thread james hughes


On Sep 8, 2013, at 1:47 PM, Jerry Leichter  wrote:

> On Sep 8, 2013, at 3:51 PM, Perry E. Metzger wrote:
>> 
>> In summary, it would appear that the most viable solution is to make
>> the end-to-end encryption endpoint a piece of hardware the user owns
>> (say the oft mentioned $50 Raspberry Pi class machine on their home
>> net) and let the user interact with it over an encrypted connection
>> (say running a normal protocol like Jabber client to server
>> protocol over TLS, or IMAP over TLS, or https: and a web client.)
>> 
>> It is a compromise, but one that fits with the usage pattern almost
>> everyone has gotten used to. It cannot be done with the existing
>> cloud model, though -- the user needs to own the box or we can't
>> simultaneously maintain current protocols (and thus current clients)
>> and current usage patterns.

> I don't see how it's possible to make any real progress within the existing 
> cloud model, so I'm with you 100% here.  (I've said the same earlier.)

Could cloud computing be a red herring? Banks and phone companies all give up
personal information to governments (Verizon?) and were doing so long before
cloud computing was a fad, and will long after. Transport encryption
(regardless of its security) is no solution either.

The fact is, to do business, education, or health care, you need to share
sensitive information. There is no technical solution to this problem. Shared
data is shared data. This is arguably the same as the analogue gap between
content-protected media and your eyes and ears. Encryption is not a solution
when the data needs to be shared with the other party in the clear.

I knew a guy once who quipped "link encryptors are iron pipes rats run
through".

If compromised end points are your threat model, cloud computing is not your 
problem. 

The only solution is the Ted Kaczynski technology rejection principle (as long
as you also kill your brother).




Re: [Cryptography] Market demands for security (was Re: Opening Discussion: Speculation on "BULLRUN")

2013-09-08 Thread Phillip Hallam-Baker
On Sun, Sep 8, 2013 at 3:08 PM, Perry E. Metzger  wrote:

> On Sun, 8 Sep 2013 08:40:38 -0400 Phillip Hallam-Baker
>  wrote:
> > The Registrars are pure marketing operations. Other than GoDaddy
> > which implemented DNSSEC because they are trying to sell the
> > business and more tech looks kewl during due diligence, there is
> > not a market demand for DNSSEC.
>
> Not to discuss this particular case, but I often see claims to the
> effect that "there is no market demand for security".
>
> I'd like to note two things about such claims.
>
> 1) Although I don't think P H-B is an NSA plant here, I do
> wonder about how often we've heard that in the last decade from
> someone trying to reduce security.
>

There is a market demand for security. But it is always item #3 on the list
of priorities, and the top two get done.

I have sold seven-figure crypto installations that have remained shelfware.

The moral is that we have to find other market reasons to use security, for
example simplifying administration of endpoints. I do not argue, like some
do, that there is no market for security so we should give up; I argue that
there is little market for something that only provides security, and so to
sell security we have to attach it to something they want.




> 2) I doubt that safety is, per se, anything the market demands from
> cars, food, houses, etc. When people buy such products, they don't
> spend much time asking "so, this house, did you make sure it won't
> fall down while we're in it and kill my family?" or "this coffee mug,
> it doesn't leach arsenic into the coffee does it?"
>

People buy guns despite statistics showing that they are orders of
magnitude more likely to be shot with the gun themselves than by an
attacker.


> However, if you told consumers "did you know that food manufacturer
> X does not test its food for deadly bacteria on the basis that ``there
> is no market demand for safety''", they would form a lynch mob.
> Consumers *presume* their smart phones will not leak their bank
> account data and the like given that there is a banking app for it,
> just as they *presume* that their toaster will not electrocute them.
>

Yes, but in most cases the telco will only buy a fix after they have been
burned.

To sell DNSSEC we should provide a benefit to the people who need to do the
deployment. The problem is that the perceived benefit is to the people going
to the site, which is a different group...


It is fixable; people just need to understand that the stuff does not sell
itself.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-08 Thread Jeffrey I. Schiller

On Fri, Sep 06, 2013 at 05:22:26PM -0700, John Gilmore wrote:
> Speaking as someone who followed the IPSEC IETF standards committee
> pretty closely, while leading a group that tried to implement it and
> make so usable that it would be used by default throughout the
> Internet, I noticed some things:
> ...

Speaking as one of the Security Area Directors at the time...

I have to disagree with your implication that the NSA intentionally
fouled the IPSEC working group. There were a lot of people working to
foul it up! I also don’t believe that the folks who participated,
including the folks from the NSA, were working to weaken the
standard. I suspect that the effort to interfere in standards started
later than the IPSEC work. If the NSA was attempting to thwart IETF
security standards, I would have expected to also see bad things in
the TLS working group and the PGP working group. There is no sign of
their interference there.

The real (or at least the first) problem with the IPSEC working group
was that we had a good and simple solution, Photuris. However the
document editor on the standard decided to claim it (Photuris) as his
intellectual property and that others couldn’t recommend changes
without his approval. This effectively made Photuris toxic in the
working group and we had to move on to other solutions. This is one of
the events that led to the IETF’s “Note Well” document and clear
policy on the IP associated with contributions. Then there was the
ISAKMP (yes, an NSA proposal) vs. SKIP. As Security AD, I eventually
had to choose between those two standards because the working group
could not generate consensus. I believed strongly enough that we
needed an IPSEC solution so I decided to choose (as I promised the
working group I would do if they failed to!). I chose ISAKMP. I posted
a message with my rationale to the IPSEC mailing list, I’m sure it is
still in the archives. I believe that was in 1996 (I still have a copy
somewhere in my personal archives).

At no point was I contacted by the NSA or any agent of any government
in an attempt to influence my decision. Folks can choose to believe
this statement, or not.

IPSEC in general did not have significant traction on the Internet in
general. It eventually gained traction in an important niche, namely
VPNs, but that evolved later.

IPSEC isn’t useful unless all of the end-points that need to
communicate implement it. Implementations need to be in the OS (for
all practical purposes).  OS vendors at the time were not particularly
interested in encryption of network traffic.

The folks who were interested were the browser folks. They were very
interested in enabling e-commerce, and that required
encryption. However they wanted the encryption layer someplace where
they could be sure it existed. An encryption solution was not useful
to them if it couldn’t be relied upon to be there. If the OS the user
had didn’t have an IPSEC layer, they were sunk. So they needed their
own layer. Thus the Netscape guys did SSL, and Microsoft did PCT and
in the IETF we were able to get them to work together to create
TLS. This was a *big deal*. We shortly had one deployed interoperable
encryption standard usable on the web.

If I was the NSA and I wanted to foul up encryption on the Internet,
the TLS group is where the action was. Yet from where I sit, I didn’t
see any such interference.

If we believe the Edward Snowden documents, the NSA at some point
started to interfere with international standards relating to
encryption. But I don’t believe they were in this business in the
1990’s at the IETF.

-Jeff


Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread Perry E. Metzger
On Sun, 08 Sep 2013 20:34:55 -0400 Kent Borg wrote:
> On 09/08/2013 06:16 PM, John Kelsey wrote:
> > I don't think you can do anything useful in crypto without some
> > good source of random bits.
> 
> I don't see the big worry about how hard it is to generate random 
> numbers unless:

Lenstra, Heninger and others have both shown mass breaks of keys based
on random number generator flaws in the field. Random number
generators have been the source of a huge number of breaks over time.

Perhaps you don't see the big worry, but real world experience says
it is something everyone else should worry about anyway.

Perry
-- 
Perry E. Metzger  pe...@piermont.com


Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread Kent Borg

On 09/08/2013 06:16 PM, John Kelsey wrote:
> I don't think you can do anything useful in crypto without some good
> source of random bits.


I don't see the big worry about how hard it is to generate random 
numbers unless:


 a) You need them super fast (because you are Google, trying to secure 
your very high-speed long lines), or


 b) You are some embedded device that is impoverished for both sources 
of entropy and non-volatile storage, and you need good random bits the 
moment you boot.


On everything in between, there are sources of entropy. Collect them, 
hash then together and use them to feed some good cryptography.  If you 
seem short of entropy, look for more in your hardware manual. Hash in 
any local unique information. Hash in everything you can find! (If the 
NSA knows every single bit you are hashing in, no harm, hash them in 
anyway, but...if the NSA has misunderestimated  any one of your 
bits...then you scored a bit! Repeat as necessary.)
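Kent's "hash everything in" recipe can be sketched in a few lines of Python.
This is an illustrative sketch only, not a vetted RNG: the particular source
list and the choice of SHA-256 are my assumptions, and a real system should
keep feeding such a pool continuously rather than seeding once.

```python
import hashlib
import os
import platform
import time

def gather_entropy_pool() -> bytes:
    """Hash together every cheap source available. A source the attacker
    fully knows adds nothing -- but it also costs nothing to include."""
    h = hashlib.sha256()
    h.update(os.urandom(32))                             # OS entropy pool
    h.update(time.time_ns().to_bytes(8, "big"))          # wall clock, ns resolution
    h.update(time.perf_counter_ns().to_bytes(8, "big"))  # high-speed counter (LSBs jitter)
    h.update(os.getpid().to_bytes(8, "big"))             # local per-boot info
    h.update(platform.node().encode())                   # hostname: unique, not secret
    return h.digest()
```

The point of the construction is that the hash output is at least as hard to
predict as the hardest-to-predict input, so piling in "known" bits can never
hurt.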


I am thinking pure HW RNGs are more sinful than pure SW RNGs, because 
real world entropy is colored and hardware is the wrong place to fix 
that. So don't buy HW RNGs, buy HW entropy sources (or find them in your 
current HW) and feed them into a good hybrid RNG.


On a modern multi-GHz CPU the exact LSB of your highspeed system 
counters, when the interrupt hits your service routine, has uncertainty 
that is quite real once you push the NSA a few centimeters from your
CPU or SoC.  Just sit around until you have a few network packets and 
you can have some real entropy. Wait longer for more entropy.


In case you didn't notice, I am a fan of hybrid HW/SW RNGs.

-kb


P.S.  Entropy pools that are only saved on orderly shutdowns are risking 
crash-and-playback attacks. Save regularly, or something like that.


P.P.S. Don't try to estimate entropy, it is a fool's errand, get as much 
as you can (within reason) and feed it into some good cryptography.


P.P.P.S. Have an independent RNG? If it *is* independent, no harm in 
XORing it in.
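The P.P.P.S. combiner is worth making precise, since it shows why XORing in
an independent RNG can't hurt: if either input is uniform and independent of
the other, the output is uniform. A minimal sketch (the helper name is mine):

```python
def xor_combine(a: bytes, b: bytes) -> bytes:
    """XOR equal-length outputs of two RNGs. If either input is uniform
    and independent of the other, the result is uniform -- the weaker
    source cannot degrade the stronger one."""
    if len(a) != len(b):
        raise ValueError("inputs must be the same length")
    return bytes(x ^ y for x, y in zip(a, b))
```

The independence caveat is the whole game: XORing a stream with a correlated
copy of itself can cancel entropy instead of adding it.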



[Cryptography] Paper on Tor deanonymization: "Users Get Routed"

2013-09-08 Thread Perry E. Metzger
A new paper on the Tor network, entitled "Users Get Routed:
Traffic Correlation on Tor by Realistic Adversaries".

  https://security.cs.georgetown.edu/~msherr/papers/users-get-routed.pdf

Quote to whet your appetite:

We present the first analysis of the popular Tor anonymity network
that indicates the security of typical users against reasonably
realistic adversaries in the Tor network or in the underlying
Internet. Our results show that Tor users are far more susceptible
to compromise than indicated by prior work.
[...]
Our analysis shows that 80% of all types of users may be
deanonymized by a relatively moderate Tor-relay adversary within six
months. Our results also show that against a single AS adversary
roughly 100% of users in some common locations are deanonymized
within three months (95% in three months for a single IXP). Further,
we find that an adversary controlling two ASes instead of
one reduces the median time to the first client de-anonymization
by an order of magnitude: from over three months to only 1 day
for a typical web user; and from over three months to roughly
one month for a BitTorrent user. This clearly shows the dramatic
effect an adversary that controls multiple ASes can have on
security.

Disclaimer: one of the authors (Micah Sherr) is a doctoral brother.

Perry
-- 
Perry E. Metzger  pe...@piermont.com


Re: [Cryptography] Usage models (was Re: In the face of "cooperative" end-points, PFS doesn't help)

2013-09-08 Thread Jerry Leichter
On Sep 8, 2013, at 3:51 PM, Perry E. Metzger wrote:
> 
>> Even for one-to-one discussions, these days, people want
>> transparent movement across their hardware.  If I'm in a chat
>> session on my laptop and leave the house, I'd like to be able to
>> continue on my phone.  How do I hand off the conversation - and the
>> keys?
> 
> I wrote about this a couple of weeks ago, see:
> 
> http://www.metzdowd.com/pipermail/cryptography/2013-August/016872.html
> 
> In summary, it would appear that the most viable solution is to make
> the end-to-end encryption endpoint a piece of hardware the user owns
> (say the oft mentioned $50 Raspberry Pi class machine on their home
> net) and let the user interact with it over an encrypted connection
> (say running a normal protocol like Jabber client to server
> protocol over TLS, or IMAP over TLS, or https: and a web client.)
> 
> It is a compromise, but one that fits with the usage pattern almost
> everyone has gotten used to. It cannot be done with the existing
> cloud model, though -- the user needs to own the box or we can't
> simultaneously maintain current protocols (and thus current clients)
> and current usage patterns.
I don't see how it's possible to make any real progress within the existing 
cloud model, so I'm with you 100% here.  (I've said the same earlier.)

What's hard is making this so simple and transparent that anyone can do it 
without thinking about it.  Again, think of the iMessage model:  If Apple 
hadn't larded it up with extra features (that, granted, most of its users 
probably want), we would today have tens of millions of people exchanging 
end-to-end, private messages without doing anything special or even thinking 
about it.  (Yes, Apple could have been forced to weaken it after the fact - but 
it would have had to be by sending an update that broke the software.)

Apple has built some surprisingly well protected stuff (along with some really 
broken stuff).  There's an analysis somewhere out there of how iOS device 
backups work.  Apple gives you a choice of an "encrypted" or an "unencrypted" 
backup.  Bizarrely, the "unencrypted" one actually has some of the most 
sensitive data encrypted using secret information *locked into the device 
itself* - where it would take significant hardware hacking (as far as anyone 
knows) to get at it; in an encrypted backup, this information is decrypted by 
the device, then encrypted with the backup key.  So in some ways, it's 
*stronger* - an "unencrypted" backup can only be restored to the iOS device 
that created it, while if you know the password, an "encrypted" backup can be 
restored to any device - which is the point.  (Actually, you can restore an 
"unencrypted" backup to a new device, too, but the most sensitive items - e.g., 
stored passwords - are lost as the information to access them is present only
in the old device.)  You'd never really know any of this from Apple's
extremely sparse documentation, mind you - it took someone hacking at the 
implementation to figure it out.

I don't agree, at all, with the claim that users are not interested in privacy 
or security.  But (a) they often don't know how exposed they are - something 
the Snowden papers are educating many about; (b) they don't know how to judge 
what's secure and what isn't (gee, can any of us, post-Snowden?); (c) 
especially given the previous two items, but even without them, there's a limit 
to how much crap they'll put up with.  The bar for perceived quality and 
simplicity of interface these days is - thanks mainly to Apple - incredibly 
high.  Techies may bitch and moan that this is all just surface glitz, that 
what's important is underneath - but if you want to reach beyond a small 
coterie of hackers, you have to get that stuff right.

-- Jerry



Re: [Cryptography] Der Spiegel: "NSA Can Spy on Smart Phone Data"

2013-09-08 Thread Jerry Leichter
On Sep 8, 2013, at 6:09 PM, Perry E. Metzger wrote:
> Not very surprising given everything else, but I thought I would
> forward the link. It more or less contends that the NSA has exploits
> for all major smartphones, which should not be surprising

> http://www.spiegel.de/international/world/privacy-scandal-nsa-can-spy-on-smart-phone-data-a-920971.html
A remarkably poor article.  Just what does "gain access to" mean?  There are 
boxes sold to law enforcement (but never, of course, to the bad guys) that 
claim they can get access to any phone out there.  If it's unlocked, everything 
is there for the taking; if it's locked, *some* of it is hard to get to, but 
most isn't.  Same goes for Android.

The article mentions that if they can get access to a machine the iPhone syncs 
with, they can get into the iPhone.  Well golly gee.  There was an attack 
reported just in the last couple of weeks in which someone built an attack into 
a fake charger!  Grab a charge at a public charger, get infected for  your 
trouble.  Apple's fixed that in the next release by prompting the user for 
permission whenever an unfamiliar device asks for connection.  But if you're in 
the machine the user normally connects to, that won't help.  Nothing, really, 
will help.

Really, for the common phones out there, the NSA could easily learn how to do 
this stuff with a quick Google search - and maybe paying a couple of thousand 
bucks to some of the companies that do it for a living.

The article then goes on to say the NSA can get SMS texts.  No kidding - so can 
the local cops.  It's all unencrypted, and the Telco's are only too happy to 
cooperate with govmint' agencies.

The only real news in the whole business is that they claim to have gotten into 
Blackberry's mail system.  It's implied that they bought an employee with the 
access needed to weaken things for them.
-- Jerry



Re: [Cryptography] Der Spiegel: "NSA Can Spy on Smart Phone Data"

2013-09-08 Thread Tony Naggs
The Spiegel article perhaps contains a key to this capability:
"In the internal documents, experts boast about successful access to
iPhone data in instances where the NSA is able to infiltrate the
computer a person uses to sync their iPhone."

I have not seen security measures such as a phone requiring a password
from the connected computer before granting access to data such as
contact lists or SMS history.

This is probably done simply in order to provide maximum convenience
to end users.


Re: [Cryptography] In the face of "cooperative" end-points, PFS doesn't help

2013-09-08 Thread james hughes


On Sep 7, 2013, at 8:16 PM, "Marcus D. Leech"  wrote:

> But it's not entirely clear to me that it will help enough in the scenarios 
> under discussion.  If we assume that mostly what NSA are doing is acquiring a 
> site
>RSA key (either through "donation" on the part of the site, or through 
> factoring or other means), then yes, absolutely, PFS will be a significant 
> roadblock.
>If, however, they're getting session-key material (perhaps through 
> back-doored software, rather than explicit cooperation by the target 
> website), the
>PFS does nothing to help us.  And indeed, that same class of compromised 
> site could just as well be leaking plaintext.  Although leaking session
>keys is lower-profile.

I think we are growing closer to agreement, PFS does help, the question is how 
much in the face of cooperation. 

Let me suggest the following. 

With RSA, a single quiet "donation" by the site and it's done. The situation 
becomes totally passive and there is no possibility of knowing what has been read. 
 The system administrator could even do this without the executives knowing. 

With PFS there is a significantly higher profile interaction with the site. 
Either the session keys need to be transmitted  in bulk, or the RNG cribbed. 
Both of these have a significantly higher profile,  higher possibility of 
detection and increased difficulty to execute properly. Certainly a riskier
thing for a cooperating site to do. 

PFS does improve the situation even if cooperation is suspect. IMHO it is just 
better cryptography. Why not? 

It's better. It's already in the suites. All we have to do is use it... 

I am honestly curious: what is the motivation not to choose more secure
modes that are already in the suites?





Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread Viktor Dukhovni
On Sun, Sep 08, 2013 at 06:16:45PM -0400, John Kelsey wrote:

> I don't think you can do anything useful in crypto without some
> good source of random bits.  If there is a private key somewhere
> (say, used for signing, or the public DH key used alongside the
> ephemeral one), you can combine the hash of that private key into
> your PRNG state.  The result is that if your entropy source is bad,
> you get security to someone who doesn't compromise your private
> key in the future, and if your entropy source is good, you get
> security even against someone who compromises your private key in
> the future (that is, you get perfect forward secrecy).

Nice in theory of course, but in practice applications don't write
their own PRNGs.  They use whatever the SSL library provides, OpenSSL,
GnuTLS, ...  If we assume weak PRNGs in the toolkit (or crypto chip,
...) then EDH could be weaker than RSA key exchange (provided the
server's key is strong enough).

The other concern is that in practice many EDH servers offer 1024-bit
primes, even after upgrading the certificate strength to 2048-bits.

Knee-jerk reactions to very murky information may be counter-productive.
Until there are more specific details,  it is far from clear which is 
better:

- RSA key exchange with a 2048-bit modulus.

- EDH with (typically) 1024-bit per-site strong prime modulus

- EDH with RFC-5114 2048-bit modulus and 256-bit "q" subgroup.

- EECDH using secp256r1

Until there is credible information one way or the other, it may
be best to focus on things we already know make sense:

- keep up with end-point software security patches

- avoid already known weak crypto (RC4?)

- Make sure VM provisioning includes initial PRNG seeding.

- Save entropy across reboots.

- ...

Yes PFS addresses after the fact server private key compromise,
but there is some risk that we don't know which if any of the PFS
mechanisms to trust, and implementations are not always well
engineered (see my post about GnuTLS and interoperability).

-- 
Viktor.


[Cryptography] AES state of the art...

2013-09-08 Thread Perry E. Metzger
What's the current state of the art of attacks against AES? Is the
advice that AES-128 is (slightly) more secure than AES-256, at least
in theory, still current?

(I'm also curious as to whether anyone has ever proposed fixes to the
weaknesses in the key schedule...)

Perry
-- 
Perry E. Metzger  pe...@piermont.com


Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread John Kelsey
On Sep 8, 2013, at 3:55 PM, Thor Lancelot Simon  wrote:
...
> I also wonder -- again, not entirely my own idea, my whiteboard partner
> can speak up for himself if he wants to -- about whether we're going
> to make ourselves better or worse off by rushing to the "safety" of
> PFS ciphersuites, which, with their reliance on DH, in the absence of
> good RNGs may make it *easier* for the adversary to recover our eventual
> symmetric-cipher keys, rather than harder!

I don't think you can do anything useful in crypto without some good source of 
random bits.  If there is a private key somewhere (say, used for signing, or 
the public DH key used alongside the ephemeral one), you can combine the hash 
of that private key into your PRNG state.  The result is that if your entropy 
source is bad, you get security to someone who doesn't compromise your private 
key in the future, and if your entropy source is good, you get security even 
against someone who compromises your private key in the future (that is, you 
get perfect forward secrecy).
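The construction described above can be sketched as follows. This is a
non-authoritative illustration: the use of SHA-256 and the domain-separation
label are my assumptions, not part of any specified design.

```python
import hashlib

def prng_seed(entropy: bytes, private_key: bytes) -> bytes:
    """Fold a hash of a long-term private key into the PRNG seed.
    If the entropy is weak, outputs remain unpredictable to anyone who
    never learns the private key; if the entropy is good, a later key
    compromise does not reveal past outputs (forward secrecy)."""
    h = hashlib.sha256()
    h.update(b"prng-seed-v1|")                      # hypothetical domain label
    h.update(hashlib.sha256(private_key).digest())  # never the raw key itself
    h.update(entropy)                               # whatever entropy is on hand
    return h.digest()
```

Note this is a hedge, not a fix: with bad entropy *and* a compromised key,
the seed is still guessable.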

> Thor

--John


[Cryptography] Der Spiegel: "NSA Can Spy on Smart Phone Data"

2013-09-08 Thread Perry E. Metzger
Not very surprising given everything else, but I thought I would
forward the link. It more or less contends that the NSA has exploits
for all major smartphones, which should not be surprising.

Quoting:

 The United States' National Security Agency
 intelligence-gathering operation is capable of accessing user
 data from smart phones from all leading manufacturers. Top
 secret NSA documents that SPIEGEL has seen explicitly note that
 the NSA can tap into such information on Apple iPhones,
 BlackBerry devices and Google's Android mobile operating system.

http://www.spiegel.de/international/world/privacy-scandal-nsa-can-spy-on-smart-phone-data-a-920971.html

Note that companies frequently give their users VPN access via
such devices, which means that they have something on them more
dangerous than the phone contacts etc. that the article mentions,
specifically access credentials. Such devices are also now frequently
used to provide second factors for authentication.

-- 
Perry E. Metzgerpe...@piermont.com


Re: [Cryptography] Market demands for security (was Re: Opening Discussion: Speculation on "BULLRUN")

2013-09-08 Thread Christian Huitema
> Not to discuss this particular case, but I often see claims to the
> effect that "there is no market demand for security".

Bill Gates's 2002 "trustworthy computing" memo is a direct proof of the
opposite. He perceived lack of security, shown by reports of worms and
viruses, as a direct threat against continued sales of Windows products. And
then he proceeded to direct the company to spend billions to improve the
matter. Say what you want about BillG, but he is pretty good at assessing
market demand.

-- Christian Huitema


 



[Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-08 Thread Eugen Leitl

Forwarded with permission.

So there *is* a BTNS implementation, after all. Albeit
only for OpenBSD -- but this means FreeBSD is next, and
Linux to follow.

- Forwarded message from Andreas Davour  -

Date: Sun, 8 Sep 2013 09:10:44 -0700 (PDT)
From: Andreas Davour 
To: Eugen Leitl 
Subject: [Cryptography] Opening Discussion: Speculation on "BULLRUN"
X-Mailer: YahooMailWebService/0.8.156.576
Reply-To: Andreas Davour 

> Apropos IPsec, I've tried searching for any BTNS (opportunistic encryption 
> mode for
> IPsec) implementations, and even the authors of the RFC are not aware of any. 
> Obviously, having a working OE BTNS implementation in Linux/*BSD would be a 
> very valuable thing, as an added, transparent protection layer against 
> passive attacks. There are many IPsec old hands here, it is probably just a 
> few man-days
> worth of work. It should be even possible to raise some funding for such a 
> project. Any takers?


Hi. I saw this message in the archive, and have not figured out how to reply to 
that one. But I felt this knowledge needed to be spread. Maybe you can post it 
to the list?

My friend "MC" has in fact implemented BTNS! Check this out: 
http://hack.org/mc/projects/btns/

I think I can speak for him and say that he would love to have that 
implementation be known to the others on the list, and would love others to add 
to his work, so we can get real network security without those spooks spoiling 
things.


/andreas
--
"My son has spoken the truth, and he has sacrificed more than either the 
president of the United States or Peter King have ever in their political 
careers or their American lives. So how they choose to characterize him really 
doesn't carry that much weight with me." -- Edward Snowden's Father

- End forwarded message -
-- 
Eugen* Leitl  http://leitl.org
__
ICBM: 48.07100, 11.36820 http://ativel.com http://postbiota.org
AC894EC5: 38A5 5F46 A4FF 59B8 336B  47EE F46E 3489 AC89 4EC5



Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread Perry E. Metzger
On Sun, 8 Sep 2013 15:55:52 -0400 Thor Lancelot Simon wrote:
> On Sun, Sep 08, 2013 at 03:22:32PM -0400, Perry E. Metzger wrote:
> > 
> > Ah, now *this* is potentially interesting. Imagine if you have a
> > crypto accelerator that generates its IVs by encrypting
> > information about keys in use using a key an observer might have
> > or could guess from a small search space.
> > 
> > Hadn't even occurred to me since it seems way more blatant than
> > the other sort of leaks I was thinking of, but of course the mere
> > fact that it is blatant doesn't mean that it would never be
> > tried...
> 
> Well, I guess it depends what your definition of "blatant" is.
> Treating the crypto hardware as a black box, it would be freaking
> hard to detect, no?

Ah, but it only needs to be found once to destroy the reputation of a
company.

Inserting bugs into chips (say, random number generators that won't
work well in the face of fabrication processes that alter analog
characteristics of circuits slightly) results in a "could be an
accident" sort of mistake. Altering a chip to insert an encrypted
form of a key into the initialization vectors in use cannot be
explained away that way.

You may say "but how would you find that?". However, I've worked
in recent years with people who decap chips, photograph the surface
and reconstruct the circuits on a pretty routine basis -- tearing
apart secure hardware for fun and profit is their specialty. Even
when this process destructively eliminates in-RAM programming,
usually weaknesses such as power glitching attacks are discovered by
the examination of the "dead" system on the autopsy table and can
then be used with live hardware.

Now that it has been revealed that the NSA has either found or
arranged for bugs in several chips, I would presume that some of
these people are gearing up for major teardowns. Not all
such teardowns will happen in the open community, of course -- I'd
expect that even now there are folks in government labs around the
world readying their samples, their probe stations and their etchant
baths. Hopefully the guys in the open community will let us know
what's bad before the other folks start exploiting our hardware
silently, as I suspect the NSA is not going to send out a warning.

> I also wonder -- again, not entirely my own idea, my whiteboard
> partner can speak up for himself if he wants to -- about whether
> we're going to make ourselves better or worse off by rushing to the
> "safety" of PFS ciphersuites, which, with their reliance on DH, in
> the absence of good RNGs may make it *easier* for the adversary to
> recover our eventual symmetric-cipher keys, rather than harder!

I'll repeat the same observation I've made a lot: Dorothy Denning's
description of the Clipper chip key insertion ceremony described the
keys as being generated deterministically using an iterated block
cipher. I can't find the reference, but I'm pretty sure that when she
was asked why, the rationale was that an iterated block cipher can be
audited, and a hardware randomness source cannot.

Perry
-- 
Perry E. Metzger  pe...@piermont.com


Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread Thor Lancelot Simon
On Sun, Sep 08, 2013 at 03:22:32PM -0400, Perry E. Metzger wrote:
> 
> Ah, now *this* is potentially interesting. Imagine if you have a
> crypto accelerator that generates its IVs by encrypting information
> about keys in use using a key an observer might have or could guess
> from a small search space.
> 
> Hadn't even occurred to me since it seems way more blatant than
> the other sort of leaks I was thinking of, but of course the mere
> fact that it is blatant doesn't mean that it would never be tried...

Well, I guess it depends what your definition of "blatant" is.  Treating
the crypto hardware as a black box, it would be freaking hard to detect,
no?  And not so easy even if you're willing to go at the thing at the
gate level.  You could end up forced to examine everything attached to
any of your crypto chip's I/Os, too, and it goes rapidly downhill from
there...

When we build protocols that have data elements we *expect* to be random,
and rely on cryptographic primitives whose outputs we expect to be
indistinguishable from random, we kind of set ourselves up for this
type of attack.

Not that I see an easy way not to.
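For concreteness, here is a toy sketch of the covert channel being discussed:
an "IV" field that is really the session key masked with a pad only the
escrow-key holder can regenerate. Everything here is hypothetical -- the
names are mine, and SHA-256 masking merely stands in for whatever a real
malicious device would use -- but it shows why a black-box tester sees
nothing: without the escrow key the field is indistinguishable from random.

```python
import hashlib

def covert_iv(session_key: bytes, escrow_key: bytes, seq: int) -> bytes:
    """Emit a 16-byte 'IV' that actually leaks session_key[:16], masked
    with a per-record pad derived from the escrow key and a sequence
    number the observer can also see (e.g. the record counter)."""
    pad = hashlib.sha256(escrow_key + seq.to_bytes(8, "big")).digest()[:16]
    return bytes(k ^ p for k, p in zip(session_key[:16], pad))

def recover_bits(iv: bytes, escrow_key: bytes, seq: int) -> bytes:
    """What the escrow-key holder does with each observed IV."""
    pad = hashlib.sha256(escrow_key + seq.to_bytes(8, "big")).digest()[:16]
    return bytes(c ^ p for c, p in zip(iv, pad))
```

Since every record's IV is a fresh-looking 16-byte value, statistical tests
on the output pass; only knowledge of the escrow key reveals the leak.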

I also wonder -- again, not entirely my own idea, my whiteboard partner
can speak up for himself if he wants to -- about whether we're going
to make ourselves better or worse off by rushing to the "safety" of
PFS ciphersuites, which, with their reliance on DH, in the absence of
good RNGs may make it *easier* for the adversary to recover our eventual
symmetric-cipher keys, rather than harder!

Thor


Re: [Cryptography] Market demands for security (was Re: Opening Discussion: Speculation on "BULLRUN")

2013-09-08 Thread John Denker
On 09/08/2013 12:08 PM, Perry E. Metzger wrote:
> I doubt that safety is, per se, anything the market demands from
> cars, food, houses, etc.

I wouldn't have said that.  It's a lot more complicated than
that.  For one thing, there are lots of different "people".
However, as a fairly-general rule, people definitely do 
consider safety as part of their purchasing decisions.
 -- Why do you think there are layers of tamper-evident
  packaging on Tylenol (and lots of other things)?  Note that
  I was not kidding when I suggested tamper-evident data
  security measures.  Not only do responsible vendors want
  the product to be safe when it leaves the factory, they want 
  to make sure it /stays/ safe.
 -- Any purchaser with an ounce of sense will hire an inspector
  to check over a house before putting down a deposit.  Sales
  contracts require the seller to disclose any known defects,
  and generally provide some sort of warranty.
 ++ Forsooth, if people bought crypto as carefully as they buy
   houses, we'd all be a lot better off.
 -- In many cases, consumers do not -- and cannot -- /directly/
  evaluate safety and quality, so they rely on third parties.
  One familiar example is the airline industry.  The airlines
  generally /like/ being regulated by the FAA because by and 
  large the good guys already exceed FAA safety standards, and 
  they don't want some bad guy coming in and giving the whole
  industry a bad name.
 -- I imagine food and drug safety is similar, although the
  medical industry complains about over-regulation more than
  I would have expected.
 -- There are also non-governmental evaluation agencies, such
  as Underwriters' Laboratories and Earth Island Institute.

 ** There are of course /some/ people who court disaster.  For
  example, there are folks who consider seatbelt laws and motorcycle
  helmet laws to be oppressive government regulation.  These are
  exceptions to the trends discussed above, but they do not 
  invalidate the overall trends.

 !! Note that even if you are doing everything you know how to do,
  you can still get sued on the grounds of negligence and deception
  if something goes wrong ... especially (but not only) if you said
  it was safer than it was.  Example:  Almost every plane crash ever.

  Let's be clear:  A lot of consumer "demands" for safety are made
  retroactively.  "Caveat emptor" has been replaced by /caveat vendor/.



Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread John Kelsey
In principle, the malevolent crypto accelerator could flip into weak mode 
(however that happens) only upon receiving a message for decryption with some 
specific value or property.  That would defeat any testing other than constant 
observation.  This is more or less the attack that keeps parallel testing of 
electronic voting machines from being a good answer to the security concerns 
about them.
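A toy sketch of the latch-on-trigger idea (the all-zero trigger block and the XOR "cipher" are stand-ins of my own invention, not any real accelerator's behavior): the device behaves flawlessly under any amount of pre-deployment testing, then silently degrades forever after one magic message.

```python
import hashlib

TRIGGER = bytes(8)  # hypothetical magic value; here, eight zero bytes

def accel_decrypt(ct: bytes, key: bytes, state={"weak": False}) -> bytes:
    """Toy 'accelerator'; the mutable default models persistent device state."""
    if ct.startswith(TRIGGER):
        state["weak"] = True                      # latch into weak mode
    if state["weak"]:
        key = key[:2] + b"\x00" * (len(key) - 2)  # key space collapses to 2^16
    pad = hashlib.sha256(key).digest()            # stand-in keystream
    return bytes(a ^ b for a, b in zip(ct, pad))

key = b"0123456789abcdef"
ct = b"some ciphertext!"
before = accel_decrypt(ct, key)          # passes any spot test
accel_decrypt(TRIGGER + b"x" * 8, key)   # attacker sends the magic message
after = accel_decrypt(ct, key)           # same call, now silently weak
assert before != after
```

Only continuous observation of real traffic, not parallel testing, would catch the switch.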

--John


Re: [Cryptography] Market demands for security (was Re: Opening Discussion: Speculation on "BULLRUN")

2013-09-08 Thread John Kelsey
As an aside:

a.  Things that just barely work, like standards groups, must in general be 
easier to sabotage in subtle ways than things that click along with great 
efficiency.  But they are also things that often fail with no help at all from 
anyone, so it's hard to tell.

b.  There really are tradeoffs between security and almost everything else.  If 
you start suspecting conspiracy every time someone is reluctant to make that 
tradeoff in the direction you prefer, you are going to spend your career 
suspecting everyone everywhere of being anti-security.  This is likely to be 
about as productive as going around suspecting everyone of being a secret 
communist or racist or something.  

--John


[Cryptography] Usage models (was Re: In the face of "cooperative" end-points, PFS doesn't help)

2013-09-08 Thread Perry E. Metzger
On Sun, 8 Sep 2013 14:50:07 -0400 Jerry Leichter 
wrote:
> Even for one-to-one discussions, these days, people want
> transparent movement across their hardware.  If I'm in a chat
> session on my laptop and leave the house, I'd like to be able to
> continue on my phone.  How do I hand off the conversation - and the
> keys?

I wrote about this a couple of weeks ago, see:

http://www.metzdowd.com/pipermail/cryptography/2013-August/016872.html

In summary, it would appear that the most viable solution is to make
the end-to-end encryption endpoint a piece of hardware the user owns
(say the oft mentioned $50 Raspberry Pi class machine on their home
net) and let the user interact with it over an encrypted connection
(say running a normal protocol like Jabber client to server
protocol over TLS, or IMAP over TLS, or https: and a web client.)

It is a compromise, but one that fits with the usage pattern almost
everyone has gotten used to. It cannot be done with the existing
cloud model, though -- the user needs to own the box or we can't
simultaneously maintain current protocols (and thus current clients)
and current usage patterns.

Perry
-- 
Perry E. Metzger  pe...@piermont.com


[Cryptography] A Likely Story!

2013-09-08 Thread Peter Fairbrother
This is just a wild story; it isn't true. If we cryptographers found it 
was true we would all be totally gobsmacked.


The Beginning:

Sometime in 2008 the NSA - the United States National Security Agency, 
who employ many times more mathematicians than anyone else does - 
discovered a new mathematical way to factorise big numbers better.


It wasn't a huge advance, but it would be good enough for them to 
factorise several hundred 1024-bit-long numbers per month using some big 
computers they wanted to build.


In the form of RSA public keys, these 1024-bit numbers were (and 
sometimes still are) used to generate the session keys which encrypt and 
protect internet traffic.


A session key is the key which is used to encrypt the traffic between 
you and a website, using a normal cipher - it is a shared secret between 
you and the website.


Setting up a shared secret session key, when the communications used to 
set it up may also be intercepted, is quite difficult and involves 
considerable tricky math. That's where RSA and factorising comes in.


In 2008, when you saw a little padlock in your browser, the connection 
was almost always encrypted using a session key whose secrecy depends on 
the inability of anybody to factorise those 1024-bit RSA numbers.


They change every few years, but usually each big website only uses one 
RSA key per country  - so when the NSA factorised just one of those RSA 
keys it could easily find the session keys for all the internet sessions 
that website had made in that country for a couple of years.


Now the NSA had been collecting internet traffic for years, and when the 
big computers were built they would be able to see your past and present 
online banking, your secret medical history, the fur-lined handcuffs you 
bought online ...



The Dilemma:

So, did the NSA then go "Hooray, full steam ahead"? Not quite. The NSA 
has two somewhat conflicting missions: to be able to spy on people's 
communications, and to keep government communications secure.


On the one hand, if they continued to recommend that government people 
use 1024-bit RSA they could be accused of failing their mission to 
protect government communications.


On the other hand, if they told ordinary people not to use 1024-bit RSA, 
they could be accused of failing their mission to spy on people.


What to do?



Some Background:

Instead of using 1024-bit RSA to set up session keys, people could use a 
different way, called ECDHE. That stands for elliptic curve Diffie 
Hellman (ephemeral), the relevant bit here being "elliptic curve".


You can use any one of trillions of different elliptic curves, which 
should be chosen partly at random and partly so they are the right size 
and so on; but you can also start with some randomly-chosen numbers and 
then work out a curve from those numbers - and you can use those random 
numbers to break the session key setup.


The other part is: starting from the curve, you can't in practice find 
the numbers; it's beyond the capabilities of the computers we have. So 
if you keep the random numbers you started with secret, only you can 
break the ECDHE mechanism. Nobody else can.


And the last part - it is convenient for everybody to use the same 
elliptic curve, or perhaps one or two curves for different purposes. So 
if you know the secret numbers for the curve, you can break everybody's 
key setup and get the secret session keys for all the traffic which uses 
those curves.



The Solution:

Make government people use ECDHE instead of RSA, but with the NSA's 
special backdoored elliptic curves. Ordinary people will follow suit.


This solves both problems - when people change to the new system the NSA 
can still break their internet sessions, and government communications 
are safe from other people (although the NSA can break US government 
communications easily - but hey, that's the price of doing business, and 
we're the NSA, right?).


Someone else might find the factoring improvement, but it is thought 
infeasible that someone else would be able to find the secret backdoor.



"Hooray, full steam ahead!"


That's the story.

The rest is just details - maybe the NSA somehow got NIST to put their 
special backdoored curves into NIST FIPS 186-3 recommendations in 2009, 
so people would use them rather than make up curves of their own - it is 
usual and convenient, but not strictly necessary, for ECDHE software to 
only be able to use a small selection of curves.


Maybe they asked the US Congress for several billion in extra funding in 
the 2010 budget to run the RSA-breakers.


Maybe they are building a new "data center" in Utah to use the session 
keys to decrypt the communications they have intercepted over the years.


Maybe they put those special backdoored curves into Suite B, their 
official requirements for US Government secret and top secret 
communications.



Or maybe they didn't. It's just a story, after all. The cryptography, 
while incomplete,

[Cryptography] Points of compromise

2013-09-08 Thread Phillip Hallam-Baker
I was asked to provide a list of potential points of compromise by a
concerned party. I list the following so far as possible/likely:


1) Certificate Authorities

Traditionally the major concern (perhaps to the point of distraction from
other more serious ones). Main caveat, CA compromises leave permanent
visible traces as recent experience shows and there are many eyes looking.
Even if Google was compromised I can't believe Ben Laurie and Adam Langley
are proposing CT in bad faith.


2) Covert channel in Cryptographic accelerator hardware.

It is possible that cryptographic accelerators have covert channels leaking
the private key through TLS (packet alignment, field ordering, timing,
etc.) or in key generation (kleptography of the RSA modulus a la Moti
Yung).


3) Cryptanalytic attack on one or more symmetric algorithms.

I can well believe that RC4 is bust and that there is enough RC4 activity
going on to make cryptanalysis worthwhile. The idea that AES is
compromised seems far less likely to me.


4) Protocol vulnerability introduced intentionally through IETF

I find this rather unlikely to be a direct action since there are few
places where the spec could be changed to advantage an attacker and only
the editors would have the control necessary to introduce text and there
are many eyes.


5) Protocol vulnerability that IETF might have fixed but was discouraged
from fixing.

Oh more times than I can count. And I would not discount the possibility
that there would be strategies based on exploiting the natural suspicion 
surrounding security matters. It would have been easy for a faction to
derail DNSSEC by feeding the WG chair's existing hostility to CAs telling
him to stand firm.


One concern here is that this will fuel the attempt to bring IETF under
control of the ITU and Russia, China, etc.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Jerry Leichter
On Sep 8, 2013, at 1:08 PM, Jerry Leichter wrote:

> On Sep 8, 2013, at 1:06 PM, Jerry Leichter wrote:
>> There was a proposal out there based on something very much like this to 
>> create tamper-evident signatures
Jonathan Katz found the paper I was thinking of - 
http://eprint.iacr.org/2003/031
-- Jerry



Re: [Cryptography] In the face of "cooperative" end-points, PFS doesn't help

2013-09-08 Thread Jerry Leichter
On Sep 7, 2013, at 11:16 PM, Marcus D. Leech wrote:
> Jeff Schiller pointed out a little while ago that the crypto-engineering 
> community have largely failed to make end-to-end encryption easy to use.  
> There are reasons for that, some technical, some political, but it is 
> absolutely true that end-to-end encryption, for those cases where "end to 
> end" is the obvious and natural model, has not significantly materialized on 
> the Internet.  Relatively speaking, a handful of crypto-nerds use end-to-end 
> schemes for e-mail and chat clients, and so on, but the vast majority of the 
> Internet user-space?  Not so much.
I agree, but the situation is complicated.  Consider chat.  If it's one-to-one, 
end-to-end encryption is pretty simple and could be made simple to use; but 
people also want chat rooms, which are a much more complicated key 
management problem - unless you let the server do the encryption.  Do you 
enable it only for one-to-one conversations?  Provide different interfaces for 
one-to-one and chat room discussions?

Even for one-to-one discussions, these days, people want transparent movement 
across their hardware.  If I'm in a chat session on my laptop and leave the 
house, I'd like to be able to continue on my phone.  How do I hand off the 
conversation - and the keys?  (What this actually shows is the complexity of 
defining "the endpoint".  From the protocol's point of view, the endpoint is 
first my laptop, then my phone.  From the user's point of view, the endpoint is 
 the user!  How do we reconcile these points of view?  Or does the difference 
go away if we assume the endpoint is always the phone, since it's always with 
me anyway?)

The same kinds of questions arise for other communications modalities, but are 
often more complex.  One-to-one voice?  Sure, we could easily end-to-end 
encrypt that.  But these days everyone expects to do conference calls.  
Handling those is quite a bit more complex.

There does appear to be some consumer interest here.  Apple found it worthwhile 
to advertise that iMessage - which is used in a completely transparent way, you 
don't even have to opt in for it to replace SMS for iOS to iOS messages - is 
end-to-end encrypted.  (And, it appears that it *is* end-to-end encrypted - but 
unfortunately key establishment protocols leave Apple with the keys - which 
allows them to provide useful services, like making your chat logs visible on 
brand new hardware, but also leaves holes of course.)  Silent Circle, among 
others, makes their living off of selling end-to-end encrypted chat sessions, 
but they've got a tiny, tiny fraction of the customer base Apple has.

I think you first need to decide *exactly* what services you're going to 
provide in a secure fashion, and then what customers are willing to do without 
(multi-party support, easy movement to new devices, backwards compatibility 
perhaps) before you can begin to design something new with any chance of 
success.
-- Jerry



Re: [Cryptography] MITM source patching [was Schneier got spooked]

2013-09-08 Thread Tim Newsham
On Sun, Sep 8, 2013 at 2:28 AM, Phillip Hallam-Baker  wrote:
> This would be 'Code Transparency'.
>
> Problem is we would need to modify GIT to implement.

Git already supports signed commits. See the "-S" option to "git commit".
If you're paranoid, though, that still leaves someone getting on your
dev box and slipping in a small patch into code you're about to commit, or
just using your pgp keys themselves...

Next problems -- getting the right key to verify against.  Knowing what sets
of keys are allowed to sign for a particular project.

> Website: http://hallambaker.com/

-- 
Tim Newsham | www.thenewsh.com/~newsham | @newshtwit | thenewsh.blogspot.com


Re: [Cryptography] Suite B after today's news

2013-09-08 Thread Ben Laurie
On 8 September 2013 11:45, Peter Gutmann  wrote:

> Ralph Holz  writes:
>
> >BTW, I do not really agree with your argument it should be done via TLS
> >extension.
>
> It's done that way based on discussions on (and mostly off) the TLS list by
> various implementers, that was the one that caused the least dissent.
>

BTW, Steve Henson just pushed an implementation to the master branch of
OpenSSL.

We need to get an extension number allocated, since the one it uses clashes
with ALPN.


>
> Peter.
>

Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread Perry E. Metzger
On Sun, 8 Sep 2013 15:10:45 -0400 Thor Lancelot Simon 
wrote:
> On Sun, Sep 08, 2013 at 02:34:26PM -0400, Perry E. Metzger wrote:
> > 
> > Any other thoughts on how one could sabotage hardware? An
> > exhaustive list is interesting, if only because it gives us
> > information on what to look for in hardware that may have been
> > tweaked at NSA request.
> 
> I'd go for leaking symmetric cipher key bits into exposed RNG
> output: nonces, explicit IVs, and the like.  Crypto hardware with
> "macro" or "record" operations (ESP or TLS record/packet handling
> as a single operation; TLS or IKE handshake, etc.) offers ample
> opportunities for this, but surely it could be arranged even with
> simpler hardware that just happens to accelerate both, let's say,
> AES and random number generation.

Ah, now *this* is potentially interesting. Imagine if you have a
crypto accelerator that generates its IVs by encrypting information
about keys in use using a key an observer might have or could guess
from a small search space.

Hadn't even occurred to me since it seems way more blatant than
the other sort of leaks I was thinking of, but of course the mere
fact that it is blatant doesn't mean that it would never be tried...
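A minimal sketch of that kind of leak (hashlib XOR pad standing in for the real cipher; `ATTACKER_SECRET` is a hypothetical constant baked into the silicon): to everyone else the IV is indistinguishable from random, but whoever holds the embedded secret strips the pad and reads the session key straight off the wire.

```python
import hashlib, secrets

ATTACKER_SECRET = b"baked-into-silicon"   # hypothetical embedded constant

def pad_for(counter: int) -> bytes:
    # Keystream only someone who knows ATTACKER_SECRET can regenerate
    return hashlib.sha256(ATTACKER_SECRET + counter.to_bytes(8, "big")).digest()[:16]

def malicious_iv(session_key: bytes, counter: int) -> bytes:
    """Emit the session key XORed with the attacker's pad as the 'random' IV."""
    return bytes(a ^ b for a, b in zip(session_key, pad_for(counter)))

def attacker_recover(iv: bytes, counter: int) -> bytes:
    return bytes(a ^ b for a, b in zip(iv, pad_for(counter)))

key = secrets.token_bytes(16)
iv = malicious_iv(key, counter=0)   # goes on the wire looking uniformly random
assert attacker_recover(iv, 0) == key
```

No statistical test on the IVs alone can flag this; the output really is the image of a secret under a good hash.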

Perry
-- 
Perry E. Metzger  pe...@piermont.com


Re: [Cryptography] Techniques for malevolent crypto hardware (Re: Suite B after today's news)

2013-09-08 Thread Thor Lancelot Simon
On Sun, Sep 08, 2013 at 02:34:26PM -0400, Perry E. Metzger wrote:
> 
> Any other thoughts on how one could sabotage hardware? An exhaustive
> list is interesting, if only because it gives us information on what
> to look for in hardware that may have been tweaked at NSA request.

I'd go for leaking symmetric cipher key bits into exposed RNG output:
nonces, explicit IVs, and the like.  Crypto hardware with "macro" or
"record" operations (ESP or TLS record/packet handling as a single
operation; TLS or IKE handshake, etc.) offers ample opportunities for
this, but surely it could be arranged even with simpler hardware that
just happens to accelerate both, let's say, AES and random number
generation.

Thor


[Cryptography] Market demands for security (was Re: Opening Discussion: Speculation on "BULLRUN")

2013-09-08 Thread Perry E. Metzger
On Sun, 8 Sep 2013 08:40:38 -0400 Phillip Hallam-Baker
 wrote:
> The Registrars are pure marketing operations. Other than GoDaddy
> which implemented DNSSEC because they are trying to sell the
> business and more tech looks kewl during due diligence, there is
> not a market demand for DNSSEC.

Not to discuss this particular case, but I often see claims to the
effect that "there is no market demand for security".

I'd like to note two things about such claims.

1) Although I don't think P H-B is an NSA plant here, I do
wonder about how often we've heard that in the last decade from
someone trying to reduce security.

2) I doubt that safety is, per se, anything the market demands from
cars, food, houses, etc. When people buy such products, they don't
spend much time asking "so, this house, did you make sure it won't
fall down while we're in it and kill my family?" or "this coffee mug,
it doesn't leach arsenic into the coffee does it?"

Consumers, rightfully, presume that reasonable vendors *naturally*
did not design products that would kill them and they focus instead
on the other desirable characteristics, like comfort or usability or
what have you.

However, if you told consumers "did you know that food manufacturer
X does not test its food for deadly bacteria on the basis that ``there
is no market demand for safety''", they would form a lynch mob.
Consumers *presume* their smart phones will not leak their bank
account data and the like given that there is a banking app for it,
just as they *presume* that their toaster will not electrocute them.

If you ever say "we're not worrying about security in our systems
because there's no market demand for it", you had better make sure
not to say it in public from now on, because the peasants with
pitchforks and torches will eventually find you if they catch wind of
it.

Perry
-- 
Perry E. Metzger  pe...@piermont.com


[Cryptography] Impossible trapdoor systems (was Re: Opening Discussion: Speculation on "BULLRUN")

2013-09-08 Thread Perry E. Metzger
On Sat, 07 Sep 2013 20:14:10 -0700 Ray Dillinger 
wrote:
> On 09/06/2013 05:58 PM, Jon Callas wrote:
> 
> > We know as a mathematical theorem that a block cipher with a back
> > door *is* a public-key system. It is a very, very, very valuable
> > thing, and suggests other mathematical secrets about hitherto
> > unknown ways to make fast, secure public key systems.
> 
> 
> I've seen this assertion several times in this thread, but I cannot
> help thinking that it depends on what *kind* of backdoor you're
> talking about, because there are some cases in which as a crypto
> amateur I simply cannot see how the construction of an asymmetric
> cipher could be accomplished.
> 
> As an example of a backdoor that doesn't obviously permit an
> asymmetric-cipher construction, consider a broken cipher that
> has 128-bit symmetric keys; but one of these keys (which one
> depends on an IV in some non-obvious way that's known to the
> attacker) can be used to decrypt any message regardless of the
> key used to encrypt it.

That key would then be known as the "private key". The "public key"
is the set of magic values used in the symmetric cipher (say in the
one way functions of the Feistel network if it were a Feistel cipher)
such that such a magic decryption key exists.

> However, it is not a valid encryption key; no matter what you
> encrypt with it you get the same ciphertext.

So? If you have an algorithm that creates such ciphers in such a way
that the magic key is hard to find, then you produce all that you want
and you have a very powerful primitive for constructing public key
systems. You don't have an obvious signature algorithm yet, but I'm
sure we can think of one with a touch of cleverness.

That said, your hypothetical seems much like "imagine that you can
float by the power of your mind alone". The construction of such a
cipher with a single master key that operates just like any other key
seems nearly impossible, and that should be obvious.

A symmetric cipher encryption function is necessarily one-to-one and
onto from the set of N-bit blocks to itself. After all, if two blocks
encrypt to the same block, you can't decrypt them, and one block
can't encrypt to two blocks. If every key produced the same mapping of
the 2^N blocks onto themselves, that would be rapidly obvious, so keys
have to produce quite different mappings.
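The one-to-one-and-onto point can be checked mechanically on a toy cipher (an 8-bit Feistel sketch of my own, not a real design): a Feistel network is a permutation of the block space for every key, whatever the round function is.

```python
def toy_feistel_encrypt(block: int, key: int, rounds: int = 4) -> int:
    """8-bit toy block cipher: two 4-bit halves, arbitrary round function."""
    left, right = block >> 4, block & 0xF
    for r in range(rounds):
        f = (right * 7 + key + r) & 0xF   # any function of (right, key) works
        left, right = right, left ^ f     # each round is invertible by design
    return (left << 4) | right

# For every key, encryption hits all 2^8 blocks exactly once:
for key in range(16):
    images = {toy_feistel_encrypt(b, key) for b in range(256)}
    assert len(images) == 256   # one-to-one and onto
```

Which is exactly why a single "magic key" that decrypts everything cannot coexist with many keys producing genuinely different permutations.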

Your magic key must then take any block of N bits and magically
produce the corresponding plaintext when any given ciphertext
might correspond to many, many different plaintexts depending
on the key. That's clearly not something you can do.

Perry
-- 
Perry E. Metzger  pe...@piermont.com


[Cryptography] Techniques for malevolent crypto hardware (Re: Suite B after today's news)

2013-09-08 Thread Perry E. Metzger
On Sat, 07 Sep 2013 19:19:09 -0700 Ray Dillinger 
wrote:
> Given some of the things in the Snowden files, I think it has
> become the case that one ought not trust any mass-produced crypto
> hardware.

Yes and no. There are limits to what such hardware can do. If such
hardware fails to implement a symmetric algorithm correctly, that
failure will be entirely obvious since interoperation will fail
immediately. If it uses bad random numbers, that failure will be
subtle.

The most obvious implementation defects are bad RNGs and bad
protection against timing analysis.

One might also add side channels to leak information. Obvious side
channels for malevolent hardware are radio frequency interference (if
you can deploy listening equipment in the same colo this might be
quite a practical way to extract information) and timing channels
(not only in the sense of failure to protect against timing analysis
but also in the sense of using inter-event delays to encode
information like keys).

I think that in most applications power consumption side channels are
probably not that interesting (smart cards etc. being an exception)
but I'm prepared to be proven wrong.

Any other thoughts on how one could sabotage hardware? An exhaustive
list is interesting, if only because it gives us information on what
to look for in hardware that may have been tweaked at NSA request.

> Given good open-source software, an FPGA implementation would
> provide greater assurance of security.

I wonder, though, if one could add secret layers to FPGAs to leak
interesting information in some manner. It seems unlikely, but I
might simply not be creative enough in thinking about it.

Perry
-- 
Perry E. Metzger  pe...@piermont.com


[Cryptography] Why are some protocols hard to deploy? (was Re: Opening Discussion: Speculation on "BULLRUN")

2013-09-08 Thread Perry E. Metzger
On Sat, 07 Sep 2013 18:50:06 -0700 John Gilmore  wrote:
> It was never clear to me why DNSSEC took so long to deploy,
[...]
> PS: My long-standing domain registrar (enom.com) STILL doesn't
> support DNSSEC records -- which is why toad.com doesn't have DNSSEC
> protection.  Can anybody recommend a good, cheap, reliable domain
> registrar who DOES update their software to support standards from
> ten years ago?

I believe you have answered your own question there, John. Even if we
assume subversion, deployment requires cooperation from too many
people to be fast.

One reason I think it would be good to have future key management
protocols based on very lightweight mechanisms that do not require
assistance from site administrators to deploy is that it makes it
ever so much easier for things to get off the ground. SSH deployed
fast because one didn't need anyone's cooperation to use it -- if you
had root on a server and wanted to log in to it securely, you could
be up and running in minutes.

We need to make more of our systems like that. The problem with
DNSSEC is that it is so obviously architecturally "correct" but so
difficult to deploy without many parties cooperating that it has
acted as an enormous tar baby.

Perry
-- 
Perry E. Metzger  pe...@piermont.com


Re: [Cryptography] [cryptography] Random number generation influenced, HW RNG

2013-09-08 Thread Ray Dillinger

On 09/08/2013 04:27 AM, Eugen Leitl wrote:


> On 2013-09-08 3:48 AM, David Johnston wrote:
>> Claiming the NSA colluded with intel to backdoor RdRand is also to
>> accuse me personally of having colluded with the NSA in producing a
>> subverted design. I did not.
>
> Well, since you personally did this, would you care to explain the
> very strange design decision to whiten the numbers on chip, and not
> provide direct access to the raw unwhitened output.


Y'know what?  Nobody has to accuse anyone of anything.  The result,
no matter how it came about, is that we have a chip whose output
cannot be checked.  That isn't as good as a chip whose output can
be checked.

A well-described physical process does in fact usually have some
off-white characteristics (bias, normal distribution, etc). Being
able to see those characteristics means being able to verify that
the process is as described.  Being able to see also the whitened
output means being able to verify that the whitening is working
correctly.
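The point can be made concrete with a simulated biased source (von Neumann debiasing is used here purely as an illustration of whitening, not as RdRand's actual conditioner): the raw stream exposes its bias for inspection, while the whitened stream hides it.

```python
import random

def biased_source(n, p=0.7, seed=1):
    """Simulated raw hardware bits with a visible 70/30 bias."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

def von_neumann_whiten(bits):
    # Keep the first bit of each unequal pair; discard equal pairs.
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

raw = biased_source(100_000)
white = von_neumann_whiten(raw)
raw_bias = sum(raw) / len(raw)        # close to 0.7: the defect is visible
white_bias = sum(white) / len(white)  # close to 0.5: the defect is hidden
assert abs(raw_bias - 0.7) < 0.02 and abs(white_bias - 0.5) < 0.02
```

With access only to the whitened stream, you can no longer tell a healthy 70/30 source from a broken or backdoored one.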

OTOH, it's going to be more expensive due to the additional pins of
output required, or not as good because whitening will have to be
provided in separate hardware.

Ray


Re: [Cryptography] [cryptography] Random number generation influenced, HW RNG

2013-09-08 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Sep 7, 2013, at 8:06 PM, John Kelsey  wrote:

> There are basically two ways your RNG can be cooked:
> 
> a.  It generates predictable values.  Any good cryptographic PRNG will do 
> this if seeded by an attacker.  Any crypto PRNG seeded with too little 
> entropy can also do this.  
> 
> b.  It leaks its internal state in its output in some encrypted way.  
> Basically any cryptographic processing of the PRNG output is likely to 
> clobber this. 

There's also another way -- that it's a constant PRNG.

For example, take a good crypto PRNG, seed it in manufacturing, and then in its 
life, it just outputs from that fixed state. That fixed state might be secret 
or known to outsiders, but either way, it's a cooked PRNG.

Sadly, there were (are?) some hardware PRNGs on TPMs that were precisely this.
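A sketch of such a "cooked" generator (a hypothetical hash-counter construction, not any specific TPM): each call looks cryptographically fresh, yet every unit seeded identically at the factory emits the identical stream.

```python
import hashlib

class CookedPRNG:
    """Hash-counter PRNG whose state was fixed once, at 'manufacture'."""
    def __init__(self, factory_seed: bytes = b"factory-seed-0001"):
        self.seed = factory_seed
        self.counter = 0

    def read16(self) -> bytes:
        out = hashlib.sha256(self.seed + self.counter.to_bytes(8, "big")).digest()[:16]
        self.counter += 1
        return out

unit_a, unit_b = CookedPRNG(), CookedPRNG()   # two "independent" devices
assert unit_a.read16() == unit_b.read16()     # same keys everywhere
assert unit_a.read16() != unit_a.read16()     # yet each call looks fresh
```

Statistical tests on one device's output pass with flying colors; only comparing two devices (or knowing the seed) reveals the cook.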

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFSLLbjsTedWZOD3gYRAhMzAJ93/YEF8mTwdJ/ktl5SiR5IPp4DtwCeIrZh
KHVy+CIpN69GpJNlX0LiKiM=
=i4b8
-END PGP SIGNATURE-


Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-08 Thread Jon Callas
> 3) Shortly after the token indictment of Zimmerman (thus prompting widespread 
> use and promotion of the RSA public key encryption algorithm), the Clinton 
> administration's FBI then advocated a relaxation of encryption export 
> regulations in addition to dropping all plans for the Clipper chip

I need to correct some facts, especially since I'm seeing this continue to get 
repeated.

Phil was never charged, indicted, sued, or anything else. He was 
*investigated*. He was investigated for export violations, not for anything 
else. Being investigated is bad enough, but that's what happened. The 
government dropped the investigation in early 1996.

The government started the investigation because they were responding to a 
complaint from RSADSI that Phil and team violated export control. As Phill 
noted, there was the secondary issue of the dispute over the RSA patent 
license, but that was a separate issue. RSADSI filed the complaint with the 
government that started the investigation.

Jon





Re: [Cryptography] MITM source patching [was Schneier got spooked]

2013-09-08 Thread Ray Dillinger

On 09/08/2013 05:28 AM, Phillip Hallam-Baker wrote:


> every code update to the repository should be signed and
> recorded in an append only log and the log should be public and enable any
> party to audit the set of updates at any time.
>
> This would be 'Code Transparency'.
>
> Problem is we would need to modify GIT to implement.


Why is that a problem?  GIT is open-source.  I think even *I* might be
good enough to patch that.

Ray




Re: [Cryptography] Suite B after today's news

2013-09-08 Thread Ray Dillinger

On 09/08/2013 10:13 AM, Thor Lancelot Simon wrote:

> On Sat, Sep 07, 2013 at 07:19:09PM -0700, Ray Dillinger wrote:
>> Given good open-source software, an FPGA implementation would provide greater
>> assurance of security.
>
> How sure are you that an FPGA would actually be faster than you can already
> achieve in software?
>
> Thor


Depends on the operation.  If it's linear, somewhat certain.  If it's
parallelizable or streamable, then very certain indeed.

But that's not even the main point.  It's the 'assurance of security' part
that's important, not the speed.  After you've burned something into an
FPGA (by toggle board if necessary) you can trust that FPGA to run the same
algorithm unmodified unless someone has swapped out the physical device.

Given the insecurity of most net-attached operating systems, the same is
simply not true of most software.  Given the insecurity of chip fabs and
their management, the same is not true of special-purpose ASICs.

Ray





___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] [tor-talk] NIST approved crypto in Tor?

2013-09-08 Thread Ray Dillinger

On 09/08/2013 07:08 AM, Eugen Leitl wrote:


Okay, I need to eat my words here.



I went to review the deterministic procedure ...



The deterministic procedure basically computes SHA1 on some seed and
uses it to assign the parameters then checks the curve order, etc..
wash rinse repeat.



Then I looked at the random seed values for the P-xxxr curves. For
example, P-256r's seed is c49d360886e704936a6678e1139d26b7819f7e90.



... The stated purpose of the "verifiably random" procedure "ensures
that the parameters cannot be predetermined ... and no trapdoors can
have been placed in the parameters during their generation".



Considering the stated purpose I would have expected the seed to be
some small value like ... "6F" and for all smaller values to fail the
test. Anything else would have suggested that ... the parameters
could embody any undisclosed mathematical characteristic whose
rareness is only bounded by how many times they could run sha1 and
test.



Eugen has a very strong point. Clearly we need to replace deployed
instances of those curves.  And just doing that is going to be a
years-long project that takes hundreds of people and won't be fully
(re)deployed for decades. If then. Can we rerun the code starting
at a more reasonable place and see what curves develop?

Good god, we need to re-evaluate *EVERYTHING* that's deployed in the
last 20 years for safety and security, from the standards level down.
This is critical public infrastructure we're talking about here and
we don't even know how much of it has been sabotaged.  By people we
usually trusted, whose mission was to enhance communications security.

Ray





___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Trapdoor symmetric key

2013-09-08 Thread ianG

On 8/09/13 16:42 PM, Phillip Hallam-Baker wrote:

Two caveats on the commentary about a symmetric key algorithm with a
trapdoor being a public key algorithm.

1) The trapdoor need not be a good public key algorithm, it can be
flawed in ways that would make it unsuited for use as a public key
algorithm. For instance being able to compute the private key from the
public or deduce the private key from multiple messages.

2) The trapdoor need not be a perfect decrypt. A trapdoor that reduced
the search space for brute force search from 128 bits to 64 or only
worked on some messages would be enough leverage for intercept purposes
but make it useless as a public key system.



Thanks.  This far better explains the conundrum.  There is a big 
difference between a conceptual public key algorithm, and one that is 
actually good enough to compete with the ones we typically use.



iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-08 Thread Peter Bowen
On Sat, Sep 7, 2013 at 6:50 PM, John Gilmore  wrote:
> PS: My long-standing domain registrar (enom.com) STILL doesn't support
> DNSSEC records -- which is why toad.com doesn't have DNSSEC
> protection.  Can anybody recommend a good, cheap, reliable domain
> registrar who DOES update their software to support standards from ten
> years ago?

PIR (the .org registry) has a field in their registrar list indicating
if the registrar supports DNSSEC:
http://www.pir.org/get/registrars?order=field_dnssec_value&sort=desc

If you exclude all the name.com and Go Daddy shell registrars, you
still have more than 30 to choose from.  I would be shocked if they
didn't all offer .com in addition to .org.

Thanks,
Peter
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Jerry Leichter
On Sep 8, 2013, at 1:06 PM, Jerry Leichter wrote:
> There was a proposal out there based on something very much like this to 
> create tamper-evident signatures.  I forget the details - it was a couple of 
> years ago - but the idea was that every time you sign something, you modify 
> your key in some random way, resulting in signatures that are still 
> verifiably yours, but also contain the new random modification.  Beyond that, 
> I don't recall how it worked - it was quite clever... ah, here it is:  
> http://eprint.iacr.org/2005/147.pdf
Spoke too quickly - that paper is something else entirely.  I still can't 
locate the one I was thinking of.
-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Jerry Leichter
On Sep 8, 2013, at 10:45 AM, Ray Dillinger wrote:
>> Pairwise shared secrets are just about the only thing that scales
>> worse than public key distribution by way of PGP key fingerprints on
>> business cards.  
>> If we want secure crypto that can be used by everyone, with minimal
>> trust, public key is the only way to do it.
>> 
>> One pretty sensible thing to do is to remember keys established in
>> previous sessions, and use those combined with the next session.
> 
> You've answered your own conundrum!
> 
> Of course the idea of remembering keys established in previous
> sessions and using them combined with keys negotiated in the next
> session is a scalable way of establishing and updating pairwise
> shared secrets
It's even better than you make out.  If Eve does manage to get hold of
Alice's current keys, and uses them to communicate with Bob, *after the
communication, Bob will have updated his keys - but Alice will not have*.  The 
next time they communicate, they'll know they've been compromised.  That is, 
this is tamper-evident cryptography.
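A minimal sketch of this combine-with-previous-sessions idea (the ratchet function, the use of HMAC-SHA-256, and all names here are my own illustrative assumptions, not a description of any fielded protocol):

```python
import hmac
import hashlib

def ratchet(chain_key: bytes, session_secret: bytes) -> bytes:
    """Mix the freshly negotiated session secret into the long-term
    chain key, producing the key to remember for next time."""
    return hmac.new(chain_key, session_secret, hashlib.sha256).digest()

# Alice and Bob start from the same shared key and stay in sync
# as long as both of them see every session.
alice = bob = b"\x00" * 32
for secret in (b"session-1", b"session-2"):
    alice = ratchet(alice, secret)
    bob = ratchet(bob, secret)
assert alice == bob

# If Eve steals Alice's key and runs a session with Bob that Alice
# never sees, Bob's chain moves on while Alice's stands still; the
# mismatch is detectable the next time Alice and Bob talk.
bob = ratchet(bob, b"eve-session")
assert alice != bob
```

The design point is that the chain key depends on the entire session history, so any session one party misses leaves the two chains permanently divergent.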

There was a proposal out there based on something very much like this to create 
tamper-evident signatures.  I forget the details - it was a couple of years ago 
- but the idea was that every time you sign something, you modify your key in 
some random way, resulting in signatures that are still verifiably yours, but 
also contain the new random modification.  Beyond that, I don't recall how it 
worked - it was quite clever... ah, here it is:  
http://eprint.iacr.org/2005/147.pdf
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-08 Thread Daniel Cegiełka
Hi,

http://www.youtube.com/watch?v=K8EGA834Nok

Is DNSSEC is really the right solution?

Daniel
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-08 Thread Jerry Leichter
On Sep 7, 2013, at 11:45 PM, John Kelsey wrote:

> Let's suppose I design a block cipher such that, with a randomly generated 
> key and 10,000 known plaintexts, I can recover that key. ... At this point, 
> what I have is a trapdoor one-way function.  You generate a random key K and 
> then compute E(K,i) for i = 1 to 10,000.  The output of the one-way function 
> is the ciphertext.  The input is K.  If nobody can break the cipher, then 
> this is a one-way function.  If only I, who designed it, can break it, then 
> it's a trapdoor one-way function. ... At this point, I have a perfectly fine 
> public key encryption system.  To send me a message, choose a random K, use 
> it to encrypt 1 through 10,000, and then send me the actual message encrypted 
> after that in K.  If nobody but me can break the system, then this cipher 
> works as my public key.
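The construction in the quoted paragraph can be sketched concretely. Below, SHA-256 in a keyed mode merely stands in for the hypothetical backdoored block cipher (there is, of course, no actual trapdoor here); the point is only the shape of the one-way function:

```python
import hashlib

def E(key: bytes, i: int) -> bytes:
    # Stand-in for encrypting block number i under the hypothetical
    # backdoored cipher; here it is just keyed SHA-256.
    return hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def public_value(key: bytes, n: int = 10_000) -> bytes:
    """The 'public key': a digest of E(K, 1), ..., E(K, n).  Anyone can
    compute this forwards from K; going backwards from it to K is
    exactly a key-recovery attack on the cipher, easy only for whoever
    holds the (hypothetical) trapdoor."""
    h = hashlib.sha256()
    for i in range(1, n + 1):
        h.update(E(key, i))
    return h.digest()

k = b"\x01" * 16
assert public_value(k) == public_value(k)             # well-defined
assert public_value(k) != public_value(b"\x02" * 16)  # key-dependent
```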
OK, let's look at this another way.  The broader argument being made here 
breaks down into three propositions:

1.  If you have a way to "spike" a block cipher based on embedding a secret in 
it, you have a way to create something with the formal properties of a public 
key cryptosystem - i.e., there is a function E(P) which anyone can compute on 
any plaintext P, but given E(P), only you can invert to recover P.

2.  Something with the formal properties of a public key cryptosystem can be 
used as a *practical* public key cryptosystem.

3.  A practical public-key cryptosystem is much more valuable than a way to 
embed a secret in a block cipher, so if anyone came up with the latter, they 
would certainly use it to create the former, as it's been "the holy grail" of 
cryptography for many years to come up with a public key system that didn't 
depend on complex mathematics with uncertain properties.

If we assume these three propositions, and look around us and observe the lack 
of the appropriate kinds of public key systems, we can certainly conclude that 
no one knows how to embed a secret in a block cipher.

Proposition 1, which is all you specifically address, is certainly true.  I 
claim that Propositions 2 and 3 are clearly false.

In fact, Proposition 3 isn't even vaguely mathematical - it's some kind of 
statement about the values that cryptographers assign to different kinds of 
primitives and to publication.  It's quite true that if anyone in the academic 
world were to come up with a way to create a practical public key cryptosystem 
without a dependence on factoring or DLP, they would publish to much acclaim.  
(Of course, there *are* a couple of such systems known - they were published 
years ago - but no one uses them for various reasons.  So "acclaim" ... well, 
maybe.)  Then again, an academic cryptographer who discovered a way to hide a 
secret in a block cipher would certainly publish - it would be really 
significant work.  So we never needed this whole chain of propositions to begin 
with:  It's self-evidently true that no one in the public community knows how 
to embed a secret in a block cipher.

But ... since we're talking *values*, what are NSA's values?  Would *they* have 
any reason to publish if they found a way to embed a secret in a block cipher? 
Hell, no!  Why would they want to give away such valuable knowledge?  Would 
they produce a public-key system based on their breakthrough?  Maybe, for 
internal use.  How would we ever know?

But let's talk mathematics, not psychology and politics.  You've given a 
description of a kind of back door that *would* produce a practical public key 
system.  But I've elsewhere pointed out that there are all kinds of back doors. 
 Suppose that my back door reduces the effective key size of AES to 40 bits.  
Even 20+ years ago, NSA was willing to export 40-bit crypto; presumably they 
were willing to do the brute-force computation to break it.  Today, it would be 
a piece of cake.  But would a public-key system that requires around 2^40 
operations to encrypt be *practical*?  Even today, I doubt it.  And if you're 
willing to do 2^40 operations, are you willing to do 2^56?  With specialized 
hardware, that, too, has been easy for years.  NSA can certainly have that 
specialized hardware for code breaking - will you buy it for encryption?

> The assumption that matters here is that you know enough cryptanalysis that 
> it would be hard to hide a practical attack from you.  If you don't know 
> about differential cryptanalysis, I can do the master key cryptosystem, but 
> only until you learn about it, at which point you will break my cipher.
In fact, this is an example I was going to give:  In a world in which 
differential crypto isn't known, it *is* a secret that's a back door.  Before 
DC was published, people seriously proposed strengthening DES by using a 
448-bit (I think that's the number) key - just toss the round key computation 
mechanism and provide all the keying for all the rounds.  If that had been 
widely used, NSA would have been able to break it using DC.

Of course we know about DC.

Re: [Cryptography] Suite B after today's news

2013-09-08 Thread Peter Gutmann
Ralph Holz  writes:

>BTW, I do not really agree with your argument it should be done via TLS
>extension.

It's done that way based on discussions on (and mostly off) the TLS list by
various implementers, that was the one that caused the least dissent.

Peter.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Protecting Private Keys

2013-09-08 Thread Peter Gutmann
"Jeffrey I. Schiller"  writes:

>If I was the NSA, I would be scavenging broken hardware from “interesting”
>venues and purchasing computers for sale in interesting locations. I would be
>particularly interested in stolen computers, as they have likely not been
>wiped.

Just buy second-hand HSMs off eBay, they often haven't been wiped, and the
PINs are conveniently taped to the case.  I have a collection of interesting
keys (or at least keys from interesting places, including government
departments) obtained in this way.

Peter.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Suite B after today's news

2013-09-08 Thread Ralph Holz
Hi,

>> BTW, I do not really agree with your argument it should be done via TLS
>> extension.
> 
> It's done that way based on discussions on (and mostly off) the TLS list by
> various implementers, that was the one that caused the least dissent.

I've followed that list for a while. What I find weird is that there
should be much dissent at all. This is about increasing security based
on adding quite well-understood mechanisms. What's to be so opposed to
there?

Does adding some ciphersuites really require an extension, maybe even on
the Standards Track? I shouldn't think so, looking at the RFCs that
already do this, e.g. RFC 5289 for AES-GCM. Just go for an
Informational. FWIW, even HTTPS is Informational.

It really boils down to this: how fast do we want to have it? I spoke to
one of the TACK devs a little while ago, and he told me they'd go for
the IETF, too, but their focus was really on getting the code out and
see an effect before that. The same seems to be true for CT - judging by
their commit frequency in the past weeks, they have similar goals.

I don't think it hurts to let users and operators vote with their feet here.

Ralph
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] In the face of "cooperative" end-points, PFS doesn't help

2013-09-08 Thread Marcus D. Leech

On 09/07/2013 06:57 PM, james hughes wrote:


PFS may not be a panacea but does help.

There's no question in my mind that PFS helps.  I have, in the past, been
very much in favor of turning on PFS support in various protocols, when it
has been available.  And I fully understand what the *purpose* of PFS is.

But it's not entirely clear to me that it will help enough in the scenarios
under discussion.  If we assume that mostly what NSA are doing is acquiring
a site RSA key (either through "donation" on the part of the site, or
through factoring or other means), then yes, absolutely, PFS will be a
significant roadblock.  If, however, they're getting session-key material
(perhaps through back-doored software, rather than explicit cooperation by
the target website), then PFS does nothing to help us.  And indeed, that
same class of compromised site could just as well be leaking plaintext,
although leaking session keys is lower-profile.

I think all this amounts to a preamble for a call to think deeply, again,
about end-to-end encryption.  I used OTR on certain chat sessions, for
example, because the consequences of the "server in the middle" disclosing
the contents of those conversations protected by OTR could have dire
consequences for one of the parties involved.

Jeff Schiller pointed out a little while ago that the crypto-engineering
community have largely failed to make end-to-end encryption easy to use.
There are reasons for that, some technical, some political, but it is
absolutely true that end-to-end encryption, for those cases where "end to
end" is the obvious and natural model, has not significantly materialized
on the Internet.  Relatively speaking, a handful of crypto-nerds use
end-to-end schemes for e-mail and chat clients, and so on, but the vast
majority of the Internet user-space?  Not so much.



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Trapdoor symmetric key

2013-09-08 Thread Faré
On Sun, Sep 8, 2013 at 9:42 AM, Phillip Hallam-Baker  wrote:
> Two caveats on the commentary about a symmetric key algorithm with a
> trapdoor being a public key algorithm.
>
> 1) The trapdoor need not be a good public key algorithm, it can be flawed in
> ways that would make it unsuited for use as a public key algorithm. For
> instance being able to compute the private key from the public or deduce the
> private key from multiple messages.
>
Then it's not a symmetric key algorithm with a trapdoor, it's just a
broken algorithm.

> 2) The trapdoor need not be a perfect decrypt. A trapdoor that reduced the
> search space for brute force search from 128 bits to 64 or only worked on
> some messages would be enough leverage for intercept purposes but make it
> useless as a public key system.
>
I suppose the idea is that by using the same trapdoor algorithm or
algorithm family
and doubling the key size (e.g. 3DES style), you get a 256-bit
symmetric key system
that can be broken in 2^128 attempts by someone with the system's private key
but 2^256 by someone without. If in your message you then communicate 128 bits
of information about your symmetric key, the guy with the private key
can easily crack your symmetric key, whereas others just can't.
Therefore that's a great public key cryptography system.

—♯ƒ • François-René ÐVB Rideau •Reflection&Cybernethics• http://fare.tunes.org
Theists think all gods but theirs are false. Atheists simply don't make an
exception for the last one.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Jerry Leichter
On Sep 7, 2013, at 11:06 PM, Christian Huitema wrote:

>> Pairwise shared secrets are just about the only thing that scales worse than 
>> public key distribution by way of PGP key fingerprints on business cards.
>> The equivalent of CAs in an all-symmetric world is KDCs  If we want 
>> secure crypto that can be used by everyone, with minimal trust, public key 
>> is the only way to do it.
> 
> I am certainly not going to advocate Internet-scale KDC. But what if the 
> application does not need to scale more than a "network of friends?"
Indeed, that was exactly what I had in mind when I suggested we might want to 
do without private key cryptography on another stream.

Not every problem needs to be solved on Internet scale.  In designing and 
building cryptographic systems simplicity of design, limitation to purpose, and 
humility are usually more important than universality.  Most of the email 
conversations I have are with people I've corresponded with in the past, or 
somehow related to people I've corresponded with in the past.  In the first 
case, I already have their keys - the only really meaningful notion of "the 
right key" is key continuity (combined with implied verification if we also 
have other channels of communication - if someone manages to slip me a bogus 
key for someone who I talk to every day, I'm going to figure that out very 
quickly.)  In the second case - e.g., an email address from a From field in a 
message on this list - the best I can possibly hope for initially is that I can 
be certain I'm corresponding with whoever sent that message to the list.  
There's no way I can bind that to a particular person in the real world 
without something more.

Universal schemes, when (not if - there's not a single widely fielded system 
that hasn't been found to have serious bugs over its operational lifetime, and 
I don't expect to see one in *my* lifetime) they fail, lead to universal 
attacks.  I need some kind of universal scheme for setting up secure 
connections to buy something from a vendor I've never used before, but frankly 
the NSA doesn't need to break into anything to get that information - the 
vendor, my bank, my CC company, and the credit agencies are all collecting and 
selling it anyway.

The other thing to keep in mind - and I've come back to this point repeatedly - 
is that the world we are now designing for is very different from the world of 
the mid- to late-1990's when the current schemes were designed.  Disk is so 
large and so cheap that any constraint in the old designs that was based on a 
statement like "doing this would require the user to keep n^2 key pairs, which 
is too much" just doesn't make any sense any more - certainly not for 
individuals, not even for small organizations:  If n is determined by the 
number of correspondents you have, then squaring it still gives you a small 
number relative to current disk sizes.  Beyond that, everyone today (or in the 
near future) can be assumed to carry with them computing power that rivals or 
exceeds the fastest machines available back in the day - and to have an 
always-on network connection whose speed rivals that of *backbone* links back 
then.
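The arithmetic behind that claim is easy to check. With illustrative numbers of my own choosing (10,000 correspondents, 256-bit keys):

```python
n = 10_000        # correspondents (an assumed, generous number)
key_bytes = 32    # one 256-bit symmetric key

# Worst case, the full n^2 pairwise table:
table = n * n * key_bytes
print(round(table / 2**30, 2))   # 2.98 -- about 3 GiB, trivial today

# The realistic per-user cost is linear, not quadratic:
print(round(n * key_bytes / 2**20, 2))  # 0.31 -- under a third of a MiB
```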

Yes, there are real issues about how much you can trust that computer you carry 
around with you - but after the recent revelations, is the situation all that 
different for the servers you talk to, the routers in the network between you, 
the crypto accelerators many of the services use - hell, every piece of 
hardware and software.  For most people, that will always be the situation:  
They will not be in a position to check their hardware, much less build their 
own stuff from the ground up.  In this situation, about all you can do is try 
to present attackers with as many *different* targets as possible, so that they 
need to split their efforts.  It's guerrilla warfare instead of a massed army.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Symmetric cipher + Backdoor = Public Key System

2013-09-08 Thread Jerry Leichter
On Sep 7, 2013, at 7:56 PM, Perry E. Metzger wrote:
>> I'm not as yet seeing that a block cipher with a backdoor is a public 
>> key system,
> 
> Then read the Blaze & Feigenbaum paper I posted a link to. It makes a
> very good case for that, one that Jerry unaccountably does not seem to
> believe. Blaze seemed to still believe the result as of a few days ago.
I've given quite a bit of argument as to why the result doesn't really say what 
it seems to say.  Feel free to respond to the actual counterexamples I gave, 
rather than simply say I "unaccountably" don't believe the paper.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Bruce Schneier has gotten seriously spooked

2013-09-08 Thread james hughes


On Sep 7, 2013, at 6:30 PM, "James A. Donald"  wrote:

> On 2013-09-08 4:36 AM, Ray Dillinger wrote:
>> 
>> But are the standard ECC curves really secure? Schneier sounds like he's got
>> some innovative math in his next paper if he thinks he can show that they
>> aren't.
> 
> Schneier cannot show that they are trapdoored, because he does not know where 
> the magic numbers come from.
> 
> To know if trapdoored, have to know where those magic numbers come from.

That will not work.

When the community questioned the source of the DES S-boxes, Don Coppersmith 
and Walt Tuchman, then of IBM, openly discussed how they were generated, and it 
still did not quell the suspicion. I bet there are many that still believe DES 
has a yet-to-be-determined backdoor. 

There is no way to prove the absence of a back door, only to prove or argue 
that a backdoor exists with (at least) a demonstration or evidence one is being 
used. Was there any hint in the purloined material to this point? There seems 
to be the opposite. TLS using ECC is not common on the Internet (See "Ron was 
wrong, Whit is right"). If there is a vulnerability in ECC it is not the source 
of today's consternation. (ECC is common on ssh, see "Mining Your Ps and Qs: 
Detection of Widespread Weak Keys in Network Devices")

I will be looking forward to Bruce's next paper.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Trapdoor symmetric key

2013-09-08 Thread Phillip Hallam-Baker
On Sun, Sep 8, 2013 at 12:19 PM, Faré  wrote:

> On Sun, Sep 8, 2013 at 9:42 AM, Phillip Hallam-Baker 
> wrote:
> > Two caveats on the commentary about a symmetric key algorithm with a
> > trapdoor being a public key algorithm.
> >
> > 1) The trapdoor need not be a good public key algorithm, it can be
> flawed in
> > ways that would make it unsuited for use as a public key algorithm. For
> > instance being able to compute the private key from the public or deduce
> the
> > private key from multiple messages.
> >
> Then it's not a symmetric key algorithm with a trapdoor, it's just a
> broken algorithm.


But the compromise may only be visible if you have access to some
cryptographic technique which we don't currently have.

The point I am making is that a backdoor in a symmetric function need not
be a secure public key system, it could be a breakable one. And that is a
much wider class of function than public key cryptosystems. There are many
approaches that were tried before RSA and ECC were settled on.




> > 2) The trapdoor need not be a perfect decrypt. A trapdoor that reduced
> the
> > search space for brute force search from 128 bits to 64 or only worked on
> > some messages would be enough leverage for intercept purposes but make it
> > useless as a public key system.
> >
> I suppose the idea is that by using the same trapdoor algorithm or
> algorithm family
> and doubling the key size (e.g. 3DES style), you get a 256-bit
> symmetric key system
> that can be broken in 2^128 attempts by someone with the system's private
> key
> but 2^256 by someone without. If in your message you then communicate 128
> bits
> of information about your symmetric key, the guy with the private key
> can easily crack your symmetric key, whereas others just can't.
> Therefore that's a great public key cryptography system.
>

2^128 is still beyond the reach of brute force.

2^64 and a 128 bit key which is the one we usually use on the other hand...



Perhaps we should do a test, move to 256 bits on a specific date across the
net and see if the power consumption rises near the NSA data centers.

-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Jaap-Henk Hoepman

> 
> Symetric cryptography does a much easier thing. It combines data and some 
> mysterious data (key) in a way that you cannot extract data without the 
> mysterious data from the result. It's like a + b = c. Given c you need b to 
> find a. The tricks that are involved are mostly about sufficiently mixing 
> data, to make sure there's enough possible b's to never guess it correctly 
> and that all those b's have the same chance of being the one b. Preferably 
> even when you have both A and C, but that's really hard. 
> 
> So I'd say Bruce said that in an effort to move to more well understood 
> cryptography. It is also a way to move people towards simply better 
> algorithms, as most public key systems are very, very bad.

Funny. I would have said exactly the opposite: public key crypto is much better 
understood because it is based on mathematical theorems and reductions to 
(admittedly presumed) hard problems, whereas symmetric crypto is really a black 
art that mixes some simple bit-wise operations and hopes for the best (yes, I 
know this is a bit of a caricature...)

Jaap-Henk
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Does NSA break in to endpoints (was Re: Bruce Schneier has gotten seriously spooked)

2013-09-08 Thread ianG

On 8/09/13 03:00 AM, Perry E. Metzger wrote:

On Sat, 07 Sep 2013 09:33:28 +0100
Brian Gladman  wrote:


On 07/09/2013 01:48, Chris Palmer wrote:

Q: "Could the NSA be intercepting downloads of open-source
encryption software and silently replacing these with their own
versions?"


Why would they perform the attack only for encryption software? They
could compromise people's laptops by spiking any popular app.


Because NSA and GCHQ are much more interested in attacking
communictions in transit rather than attacking endpoints.


Except, one implication of recent revelations is that stealing keys
from endpoints has been a major activity of NSA in the last decade.

I'm not going to claim that altering patches and software during
download has been a major attack vector they've used for that -- I have
no evidence for the contention whatsoever and besides, endpoints seem
to be fairly vulnerable without such games -- but clearly attacking
selected endpoints is now an NSA pastime.



The eye-opener for me was that they were investing in and trying every 
known attack.  They are acting like true economic attackers: try 
everything, and select the one that generates the best ROI.  Just like 
the industrialised phishing/hacking gangs that emerged in the 2000s...




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] MITM source patching [was Schneier got spooked]

2013-09-08 Thread Eugen Leitl
On Sat, Sep 07, 2013 at 07:42:33PM -1000, Tim Newsham wrote:
> Jumping in to this a little late, but:
> 
> >  Q: "Could the NSA be intercepting downloads of open-source
> > encryption software and silently replacing these with their own versions?"
> >  A: (Schneier) Yes, I believe so.
> 
> perhaps, but they would risk being noticed. Some people check file hashes
> when downloading code. FreeBSD's port system even does it for you and
> I'm sure other package systems do, too.   If this was going on en masse,

There is a specific unit within NSA that attempts to obtain keys not in
the key cache. Obviously, package-signing secrets are extremely valuable,
since they're likely to work for hardened (or so they think) targets.

For convenience reasons the signing secrets are typically not secured.
If something is online you don't even need physical access to obtain it.

The workaround for this is to build packages from source, especially
if there's a deterministic build available so that you can check whether
the published binary for public consumption is kosher, and verify
signatures with information obtained out of band. Checking key 
fingerprints on dead tree given in person is inconvenient, and does 
not give you complete trust, but it is much better than just blindly 
installing something from online repositories.

> it would get picked up pretty quickly...  If targeted, on the other hand, it
> would work well enough...
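A minimal sketch of the out-of-band digest check described above (the file contents and names are placeholders, not any real package):

```python
import hashlib
import hmac
import tempfile

def verify_download(path: str, expected_sha256_hex: str) -> bool:
    """Compare a downloaded file against a SHA-256 digest obtained out
    of band (e.g. from a fingerprint handed over in person)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return hmac.compare_digest(h.hexdigest(), expected_sha256_hex)

# Demo with a stand-in "download" -- no real package is involved.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"release contents")
    path = f.name
good = hashlib.sha256(b"release contents").hexdigest()
assert verify_download(path, good)
assert not verify_download(path, "0" * 64)
```

Of course this only moves the trust to the channel the digest arrived over, which is exactly the point being made about out-of-band verification.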


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] [tor-talk] NIST approved crypto in Tor?

2013-09-08 Thread Eugen Leitl
- Forwarded message from Gregory Maxwell  -

Date: Sun, 8 Sep 2013 06:44:57 -0700
From: Gregory Maxwell 
To: "This mailing list is for all discussion about theory, design, and 
development of Onion Routing."

Subject: Re: [tor-talk] NIST approved crypto in Tor?
Reply-To: tor-t...@lists.torproject.org

On Sat, Sep 7, 2013 at 8:09 PM, Gregory Maxwell  wrote:
> On Sat, Sep 7, 2013 at 4:08 PM, anonymous coward
>  wrote:
>> Bruce Schneier recommends *not* to use ECC. It is safe to assume he
>> knows what he says.
>
> I believe Schneier was being careless there.  The ECC parameter sets
> commonly used on the internet (the NIST P-xxxr ones) were chosen using
> a published deterministically randomized procedure.  I think the
> notion that these parameters could have been maliciously selected is a
> remarkable claim which demands remarkable evidence.

Okay, I need to eat my words here.

I went to review the deterministic procedure because I wanted to see
if I could reproduce the SECP256k1 curve we use in Bitcoin. They don't
give a procedure for the Koblitz curves, but those have far less design
freedom than the non-Koblitz ones, so I thought perhaps I'd stumble into
it with the "most obvious" procedure.

The deterministic procedure basically computes SHA-1 on some seed, uses
it to assign the parameters, then checks the curve order, and so on;
wash, rinse, repeat.

Then I looked at the random seed values for the P-xxxr curves. For
example, P-256r's seed is c49d360886e704936a6678e1139d26b7819f7e90.

_No_ justification is given for that value. The stated purpose of the
"veritably random" procedure "ensures that the parameters cannot be
predetermined. The parameters are therefore extremely unlikely to be
susceptible to future special-purpose attacks, and no trapdoors can
have been placed in the parameters during their generation".

Considering the stated purpose, I would have expected the seed to be
some small value like ... "6F" and for all smaller values to fail the
test. Anything else would suggest that they tested a large number of
values, and thus that the parameters could embody some undisclosed
mathematical characteristic whose rarity is bounded only by how many
times they could run SHA-1 and test.
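To make the complaint concrete, here is a minimal Python sketch of what a genuinely auditable "nothing up my sleeve" procedure would look like: scan seeds in increasing order and publish the first one that passes the curve checks, so anyone can confirm that no smaller seed works. The function names and the toy predicate are invented for illustration; the real ANSI X9.62 procedure expands SHA-1 output over several blocks and runs full curve-order and security tests.

```python
import hashlib

def candidate_from_seed(seed: bytes) -> int:
    # Hash the seed to derive a candidate parameter (illustrative; the
    # real procedure derives the curve coefficient b from expanded
    # SHA-1 output).
    return int.from_bytes(hashlib.sha1(seed).digest(), "big")

def search(passes_checks):
    # Scan seeds in increasing order and return the first that passes.
    # Had NIST's seeds been produced this way, anyone could verify that
    # every smaller seed fails; an arbitrary 160-bit seed like
    # c49d...e90 admits no such audit.
    n = 0
    while True:
        seed = n.to_bytes(max(1, (n.bit_length() + 7) // 8), "big")
        if passes_checks(candidate_from_seed(seed)):
            return seed, n
        n += 1

# Toy "curve check": accept candidates divisible by 1000 (a stand-in
# for the real curve-order and security tests).
seed, tries = search(lambda c: c % 1000 == 0)
```

With the published P-256 seed there is no way to run such a check: any number of secret candidate seeds could have been tried and discarded before it.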

I now personally consider this to be smoking evidence that the
parameters are cooked. Maybe they were only cooked in ways that make
them stronger? Maybe.

SECG also makes a somewhat curious remark:

"The elliptic curve domain parameters over (primes) supplied at each
security level typically consist of examples of two different types of
parameters — one type being parameters associated with a Koblitz curve
and the other type being parameters chosen verifiably at random —
although only verifiably random parameters are supplied at export
strength and at extremely high strength."

The fact that only "verifiably random" parameters are given at export
strength would seem to make more sense if you cynically read "verifiably
random" as backdoored to all heck (though it could be more innocently
explained: the performance improvement of Koblitz curves wasn't so
important there, and/or they considered those curves weak enough not to
bother with the extra effort required to produce the Koblitz curves).
-- 
tor-talk mailing list - tor-t...@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk

- End forwarded message -
-- 
Eugen* Leitl <leitl> http://leitl.org
__
ICBM: 48.07100, 11.36820 http://ativel.com http://postbiota.org
AC894EC5: 38A5 5F46 A4FF 59B8 336B  47EE F46E 3489 AC89 4EC5

Re: [Cryptography] [cryptography] Random number generation influenced, HW RNG

2013-09-08 Thread Eugen Leitl
- Forwarded message from "James A. Donald"  -

Date: Sun, 08 Sep 2013 08:34:53 +1000
From: "James A. Donald" 
To: cryptogra...@randombit.net
Subject: Re: [cryptography] Random number generation influenced, HW RNG
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:17.0) Gecko/20130801 
Thunderbird/17.0.8
Reply-To: jam...@echeque.com

On 2013-09-08 3:48 AM, David Johnston wrote:
> Claiming the NSA colluded with intel to backdoor RdRand is also to
> accuse me personally of having colluded with the NSA in producing a
> subverted design. I did not.

Well, since you personally did this, would you care to explain the
very strange design decision to whiten the numbers on chip and not
provide direct access to the raw unwhitened output?

A decision that, even assuming the utmost virtue on the part of the
designers, leaves open the possibility of malfunctions going
undetected.

That is a question a great many people have asked, and we have not
received any answers.

Access to the raw output would have made it possible to determine that
the random numbers were in fact generated by the physical process
described, since it would be hard, and would cost a lot of silicon, to
simulate the various subtle off-white characteristics of a
well-described actual physical process.


___
cryptography mailing list
cryptogra...@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

- End forwarded message -
-- 
Eugen* Leitl <leitl> http://leitl.org
__
ICBM: 48.07100, 11.36820 http://ativel.com http://postbiota.org
AC894EC5: 38A5 5F46 A4FF 59B8 336B  47EE F46E 3489 AC89 4EC5


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Ray Dillinger

On 09/07/2013 07:51 PM, John Kelsey wrote:

> Pairwise shared secrets are just about the only thing that scales
> worse than public key distribution by way of PGP key fingerprints on
> business cards.
> If we want secure crypto that can be used by everyone, with minimal
> trust, public key is the only way to do it.
>
> One pretty sensible thing to do is to remember keys established in
> previous sessions, and use those combined with the next session.


You've answered your own conundrum!

Of course the idea of remembering keys established in previous
sessions and using them combined with keys negotiated in the next
session is a scalable way of establishing and updating pairwise
shared secrets.

In fact I'd say it's a very good idea.  One can use a distributed
public key (infrastructure fraught with peril and mismanagement)
for introductions, and thereafter communicate using a pairwise
shared secret key (locally managed) which is updated every time
you interact, providing increasing security against anyone who
hasn't monitored and retained *ALL* previous communications. In
order to get at your stash of shared secret keys Eve and Mallory
have to mount an attack on your particular individual machine,
which sort of defeats the "trawl everything by sabotaging vital
infrastructure at crucial points" model that they're trying to
accomplish.
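A minimal sketch of this key-continuity idea, assuming an HMAC-SHA256 mix (my choice of construction for illustration, not a vetted protocol): each new session key is derived from the previous key plus the freshly negotiated secret, so reproducing the current key requires having captured every prior session.

```python
import hashlib
import hmac
import os

def ratchet(prev_key: bytes, fresh_secret: bytes) -> bytes:
    # Derive the next session key from the previous key plus newly
    # negotiated material (HKDF-extract style).  Eve must have recorded
    # and broken *every* prior session to reproduce the chain.
    return hmac.new(prev_key, fresh_secret, hashlib.sha256).digest()

# Alice and Bob start from a key established by a (perilous) public-key
# introduction, then fold in each session's freshly negotiated secret:
key = b"introduction-key-from-pki"
for session_secret in (os.urandom(32) for _ in range(3)):
    key = ratchet(key, session_secret)
```

Missing even one link in the chain (say, the off-network Bluetooth exchange) leaves the attacker unable to derive any later key.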

One thing that weakens the threat model (so far) is that storage
is not yet so cheap that Eve can store *EVERYTHING*. If Eve has
to break all previous sessions before she can hand your current
key to Mallory, first her work factor is drastically increased,
second she has to have all those previous sessions stored, and
third, if Alice and Bob have ever managed even one secure exchange
or one exchange that's off the network she controls (say by a local
Bluetooth link), she fails. Fourth, even if she *can* store everything
and the trawl *has* picked up every session, she still has to guess
*which* of her squintillion stored encrypted sessions were part
of which stream of communications before she knows which ones
she has to break.

Bear



[Cryptography] Trapdoor symmetric key

2013-09-08 Thread Phillip Hallam-Baker
Two caveats on the commentary about a symmetric key algorithm with a
trapdoor being a public key algorithm.

1) The trapdoor need not be a good public key algorithm; it can be flawed
in ways that would make it unsuited for use as a public key algorithm. For
instance, it might be possible to compute the private key from the public
key, or to deduce the private key from multiple messages.

2) The trapdoor need not be a perfect decrypt. A trapdoor that reduced the
brute-force search space from 128 bits to 64, or that only worked on some
messages, would provide enough leverage for intercept purposes while making
the scheme useless as a public key system.
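As a toy illustration of point 2, consider a hypothetical cipher whose key schedule secretly ignores half of the 128-bit key. The holder of that trapdoor knowledge searches 2^64 keys instead of 2^128, yet nothing here gives a direct decrypt or anything resembling a usable public key system. (Entirely contrived; a keyed-hash stream stands in for a real cipher, for messages up to 32 bytes.)

```python
import hashlib

def toy_encrypt(key128: bytes, msg: bytes) -> bytes:
    # Toy stream "cipher" whose key schedule secretly uses only the
    # first 8 of the 16 key bytes.  Whoever knows this trapdoor
    # brute-forces 2**64 keys instead of 2**128, but there is still no
    # direct decrypt, so the flaw is useless as a public key mechanism.
    assert len(key128) == 16
    effective = key128[:8]                      # the hidden weakness
    stream = hashlib.sha256(effective).digest()
    return bytes(m ^ s for m, s in zip(msg, stream))
```

Two keys agreeing on their first 8 bytes encrypt identically, which is exactly the kind of property an interceptor could exploit without ever being able to use it as a public key scheme.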

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Andrea Shepard
On Sat, Sep 07, 2013 at 08:45:34PM -0400, Perry E. Metzger wrote:
> I'm unaware of an ECC equivalent of the Shor algorithm. Could you
> enlighten me on that?

Shor's algorithm is a Fourier transform, essentially.  It can find periods of
a function you can implement as a quantum circuit with only polynomially many
invocations.  In particular, when that function is exponentiation in a group,
it can find the orders of group elements.  This allows finding discrete
logarithms in BQP for any group in which exponentiation is in P.

-- 
Andrea Shepard

PGP fingerprint (ECC): 2D7F 0064 F6B6 7321 0844  A96D E928 4A60 4B20 2EF3
PGP fingerprint (RSA): 7895 9F53 C6D1 2AFD 6344  AF6D 35F3 6FFA CBEC CA80



Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-08 Thread Phillip Hallam-Baker
On Sat, Sep 7, 2013 at 9:50 PM, John Gilmore  wrote:

> > >> First, DNSSEC does not provide confidentiality.  Given that, it's not
> > >> clear to me why the NSA would try to stop or slow its deployment.
>
> DNSSEC authenticates keys that can be used to bootstrap
> confidentiality.  And it does so in a globally distributed, high
> performance, high reliability database that is still without peer in
> the world.
>
> It was never clear to me why DNSSEC took so long to deploy, though
> there was one major moment at an IETF in which a member of the IESG
> told me point blank that Jim Bidzos had made himself so hated that the
> IETF would never approve a standard that required the use of the RSA
> algorithm -- even despite a signed blanket license for use of RSA for
> DNSSEC, and despite the expiration of the patent.  I


No, that part is untrue. I sat at the table with Jeff Schiller and Burt
Kaliski when Burt pitched S/MIME at the IETF. He was Chief Scientist of RSA
Labs at the time.

Jim did go after Phil Z. over PGP initially. But Phil Z. was violating the
patent at the time. That led to RSAREF and the MIT version of PGP.


DNSSEC was (and is) a mess as a standard because it is an attempt to
retrofit a PKI onto a directory that was designed around some very tight
network constraints and with a very poor architecture.

PS: My long-standing domain registrar (enom.com) STILL doesn't support
> DNSSEC records -- which is why toad.com doesn't have DNSSEC
> protection.  Can anybody recommend a good, cheap, reliable domain
> registrar who DOES update their software to support standards from ten
> years ago?


The Registrars are pure marketing operations. Other than GoDaddy which
implemented DNSSEC because they are trying to sell the business and more
tech looks kewl during due diligence, there is not a market demand for
DNSSEC.

One problem is that the Registrars almost invariably sell DNS registrations
at cost or at a loss and make the money up on value added products. In
particular SSL certificates.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] MITM source patching [was Schneier got spooked]

2013-09-08 Thread Phillip Hallam-Baker
On Sun, Sep 8, 2013 at 1:42 AM, Tim Newsham  wrote:

> Jumping in to this a little late, but:
>
> >  Q: "Could the NSA be intercepting downloads of open-source
> > encryption software and silently replacing these with their own
> versions?"
> >  A: (Schneier) Yes, I believe so.
>
> perhaps, but they would risk being noticed. Some people check file hashes
> when downloading code. FreeBSD's port system even does it for you and
> I'm sure other package systems do, too.   If this was going on en masse,
> it would get picked up pretty quickly...  If targeted, on the other hand,
> it
> would work well enough...
>

But is the source compromised in the archive?


I think we need a different approach to source code management. Get rid of
user authentication completely; passwords and SSH are both a fragile
approach. Instead, every code update to the repository should be signed and
recorded in an append-only log, and the log should be public and enable any
party to audit the set of updates at any time.

This would be 'Code Transparency'.

The problem is that we would need to modify Git to implement it.
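A minimal sketch of such an append-only log, with simple hash chaining standing in for the signed Merkle tree a real design (Certificate Transparency-style) would use; the class and field names are invented for illustration:

```python
import hashlib
import json

class TransparencyLog:
    # Append-only log sketch: every entry commits to the previous head
    # hash, so a retroactive edit changes all later hashes and is caught
    # by any auditor replaying the log.
    def __init__(self):
        self.entries = []           # list of (record, head_after_append)
        self.head = b"\x00" * 32

    def append(self, update: dict) -> str:
        # Record a signed code update and advance the chained head.
        record = json.dumps(update, sort_keys=True).encode()
        self.head = hashlib.sha256(self.head + record).digest()
        self.entries.append((record, self.head))
        return self.head.hex()

    def audit(self) -> bool:
        # Any party replays the chain from the start; a single altered
        # record breaks every subsequent head hash.
        h = b"\x00" * 32
        for record, stored_head in self.entries:
            h = hashlib.sha256(h + record).digest()
            if h != stored_head:
                return False
        return True
```

The point is that trust moves from "who can log in" to "what the public log says": quietly replacing an archived source tree would require rewriting a chain that third parties have already seen.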

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-08 Thread Phillip Hallam-Baker
On Sat, Sep 7, 2013 at 10:35 PM, Gregory Perry
wrote:

>  >On 09/07/2013 09:59 PM, Phillip Hallam-Baker wrote:
> >
> >Anyone who thinks Jeff was an NSA mole when he was one of the main people
> behind the MIT version of PGP and the distribution of Kerberos is >talking
> daft.
> >
>  >I think that the influence was rather more subtle and was more directed
> at encouraging choices that would make the crypto hopelessly impractical
> >so people would not use it than in adding backdoors.
> >
>  >
>  >One of the lessons of PRISM is that metadata is very valuable. In
> particular social network analysis. If I know who is talking to whom then I
> have >pretty much 90% of the data needed to wrap up any conspiracy against
> the government. So lets make sure we all use PGP and sign each other's
> >keys...
>
> 1) At the core of the initial PGP distribution authored by Philip R.
> Zimmermann, Jr. was the RSA public key encryption method
>
> 2) At that time, the Clinton administration and his FBI was advocating
> widespread public key escrow mechanisms, in addition to the inclusion of
> the Clipper chip to all telecommunication devices to be used for remote
> "lawful intercepts"
>
> 3) Shortly after the token indictment of Zimmerman (thus prompting
> widespread use and promotion of the RSA public key encryption algorithm),
> the Clinton administration's FBI then advocated a relaxation of encryption
> export regulations in addition to dropping all plans for the Clipper chip
>
> 4) On September 21, 2000, the patent for the RSA public key encryption
> algorithm expired, yet RSA released their open source version of the RSA
> encryption algorithm two weeks prior to their patent's expiry for use
> within the public domain
>
> 5) Based upon the widespread use and public adoption of the RSA public key
> encryption method via the original PGP debacle, RSA (now EMC) could have
> easily adjusted the initial RSA patent term under the auspice of national
> security, which would have guaranteed untold millions (if not billions) of
> additional dollars in revenue to the corporate RSA patent holder
>
> You do the math
>

This is seriously off topic here but the idea that the indictment of Phil
Zimmerman was a token effort is nonsense. I was not accusing Phil Z. of
being a plant.

Not only was Louis Freeh going after Zimmerman for real, he went against
Clinton in revenge for the Clipper chip program being junked. He spent much
of Clinton's second term conspiring with Republicans in Congress to get
Clinton impeached.

Clipper was an NSA initiative that began under Bush or probably even
earlier. They got the incoming administration to endorse it as a fait
accompli.


Snowden and Manning on the other hand... Well I do wonder if this is all
some mind game to get people to secure the Internet against cyberattacks.
But the reason I discount that as a possibility is that what has been
revealed has completely destroyed trust. We can't work with the Federal
Government on information security the way that we did in the past any more.

I think the administration needs to make a downpayment on restoring trust.
They could begin by closing the gulag in Guantanamo.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-08 Thread Phillip Hallam-Baker
On Sat, Sep 7, 2013 at 8:53 PM, Gregory Perry wrote:

> On 09/07/2013 07:52 PM, Jeffrey I. Schiller wrote:
> > Security fails on the Internet for three important reasons, that have
> > nothing to do with the IETF or the technology per-se (except for point
> > 3).
> >  1.  There is little market for “the good stuff”. When people see that
> >  they have to provide a password to login, they figure they are
> >  safe... In general the consuming public cannot tell the
> >  difference between “good stuff” and snake oil. So when presented
> >  with a $100 “good” solution or a $10 bunch of snake oil, guess
> >  what gets bought.
> The IETF mandates the majority of the standards used on the Internet
> today.


No they do not. There is W3C and OASIS both of which are larger now. And
there has always been IEEE.

And they have no power to mandate anything. In fact, one of the things I
have been trying to do is to persuade people that playing Canute and
commanding the tide to turn is futile. People need to understand that the
IETF does not have any power to mandate anything, and that stakeholders
will only follow standards proposals if they see a value in doing so.




>  If the IETF were truly serious about authenticity and integrity
> and confidentiality of communications on the Internet, then there would
> have been interim ad-hoc link layer encryption built into SMTP
> communications since the end of U.S. encryption export regulations.
>

Like STARTTLS which has been in the standards and deployed for a decade now?



> There would have been an IETF-mandated requirement for Voice over IP
> transport encryption, to provide a comparable set of confidentiality
> with VoIP communications that are inherent to traditional copper-based
> landline telephones.  There would at the very least be ad-hoc (read
> non-PKI integrated) DNSSEC.
>

What on earth is that? DNS is a directory so anything that authenticates
directory attributes is going to be capable of being used as a PKI.



> And then there is this Bitcoin thing.  I say this as an individual that
> doesn't even like Bitcoin.  For the record and clearly off topic, I hate
> Bitcoin with a passion and I believe that the global economic crisis
> could be easily averted by returning to a precious metal standard with
> disparate local economies and currencies, all in direct competition with
> each other for the best possible GDP.
>

The value of all the gold in the world ever mined is $8.2 trillion. The
NASDAQ alone traded $46 trillion last Friday.

There are problems with bitcoin but I would worry rather more about the
fact that the Feds have had no trouble at all shutting down every prior
attempt at establishing a currency of that type and the fact that there is
no anonymity whatsoever.





> So how does Bitcoin exist without the IETF?  In its infancy, millions of
> dollars of transactions are being conducted daily via Bitcoin, and there
> is no IETF involved and no central public key infrastructure to validate
> the papers of the people trading money with each other.  How do you
> counter this Bitcoin thing, especially given your tenure and experience
> at the IETF?


Umm, I would suggest that it has more to do with supply and demand and the
fact that there is a large amount of economic activity that is locked out
of the formal banking system (including the entire nation of Iran) and is
willing to pay a significant premium for access to a secondary currency.


> Nonsense.  Port 25 connects to another port 25 and exchanges a public
> key.  Then a symmetrically keyed tunnel is established.  This is not a
> complex thing, and could have been written into the SMTP RFC decades ago.


RFC 3207, published in 2002.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Lodewijk andré de la porte
Public key crypto depends on higher-level math. That math has some
asymmetric property that we can exploit to achieve the public-private key
relationship.

The problem is that the discovery of smarter math can invalidate the
asymmetry and make the problem effectively symmetric. This has to do with
P vs NP, which is also less trivial than a first explanation makes it
seem. If the problem becomes even effectively symmetric (i.e., in P), it
stops having the nice usable property.

Symmetric cryptography does a much easier thing. It combines data with
some mysterious data (a key) in a way that you cannot extract the data
from the result without the mysterious data. It's like a + b = c: given c
you need b to find a. The tricks involved are mostly about mixing the data
sufficiently, to make sure there are enough possible b's that you can
never guess the right one, and that all those b's have the same chance of
being the one b. Preferably even when you have both a and c, but that's
really hard.

So I'd say Bruce said that in an effort to move to better-understood
cryptography. It is also a way to move people towards simply better
algorithms, as most public key systems are very, very bad.
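The a + b = c picture above is literally the one-time pad, where the combining operation is XOR; a short sketch (illustrative only, and information-theoretically secure only when the key is truly random, as long as the message, and never reused):

```python
import os

def xor_mix(a: bytes, b: bytes) -> bytes:
    # c = a XOR b: given only c, every same-length plaintext is equally
    # likely, because every candidate key b is equally plausible.
    return bytes(x ^ y for x, y in zip(a, b))

msg = b"attack at dawn"
key = os.urandom(len(msg))      # the "mysterious data" b
ct = xor_mix(msg, key)          # c
assert xor_mix(ct, key) == msg  # knowing b recovers a from c
```

Practical symmetric ciphers trade this perfect secrecy for short reusable keys, which is where all the careful mixing comes in.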

[Cryptography] MITM source patching [was Schneier got spooked]

2013-09-08 Thread Tim Newsham
Jumping in to this a little late, but:

>  Q: "Could the NSA be intercepting downloads of open-source
> encryption software and silently replacing these with their own versions?"
>  A: (Schneier) Yes, I believe so.

perhaps, but they would risk being noticed. Some people check file hashes
when downloading code. FreeBSD's port system even does it for you and
I'm sure other package systems do, too.   If this was going on en masse,
it would get picked up pretty quickly...  If targeted, on the other hand, it
would work well enough...

-- 
Tim Newsham | www.thenewsh.com/~newsham | @newshtwit | thenewsh.blogspot.com