Re: [Cryptography] Suite B after today's news

2013-09-08 Thread Ray Dillinger

On 09/05/2013 07:00 PM, Jon Callas wrote:


I don't think they're actively bad, though. For the purpose they were created
for -- parallelizable authenticated encryption -- it serves its purpose. You
can have a decent implementor implement them right in hardware and walk away.


Given some of the things in the Snowden files, I think it has become the case
that one ought not trust any mass-produced crypto hardware.  It is clearly on
the agenda of the NSA to weaken the communications infrastructure of American
and other business, specifically at the level of chip manufacturers.  And
chips are too much of a black-box for anyone to easily inspect and too much
subject to IP/Copyright issues for anyone who does to talk much about what
they find.  Seriously; microplaning, micrography, analysis, and then you get
sued if you talk about what you find?  It's a losing game.

Given good open-source software, an FPGA implementation would provide greater
assurance of security. An FPGA burn-in rig can be built by hand if necessary,
or at the very least manufactured in a way that is subject to visual inspection
(ie, on a one-layer circuit board with dead-simple 7400-series logic chips).
It would be a bit of a throwback these days, but we're deep into whom-can-you-
trust territory at this point and going for lower tech is worth it if it means
tech that you can still inspect and verify.

Bear



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-08 Thread John Gilmore
  First, DNSSEC does not provide confidentiality.  Given that, it's not
  clear to me why the NSA would try to stop or slow its deployment.

DNSSEC authenticates keys that can be used to bootstrap
confidentiality.  And it does so in a globally distributed, high
performance, high reliability database that is still without peer in
the world.

It was never clear to me why DNSSEC took so long to deploy, though
there was one major moment at an IETF in which a member of the IESG
told me point blank that Jim Bidzos had made himself so hated that the
IETF would never approve a standard that required the use of the RSA
algorithm -- even despite a signed blanket license for use of RSA for
DNSSEC, and despite the expiration of the patent.  I thought it was an
extreme position, and it was very forcefully expressed -- but it was
apparently widely enough shared that the muckety-mucks did force the
standard to go back to the committee and have a second algorithm added
to it (which multiplied the interoperability issues considerably and
caused several years of further delay).

John

PS: My long-standing domain registrar (enom.com) STILL doesn't support
DNSSEC records -- which is why toad.com doesn't have DNSSEC
protection.  Can anybody recommend a good, cheap, reliable domain
registrar who DOES update their software to support standards from ten
years ago?
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-08 Thread Ray Dillinger

On 09/06/2013 05:58 PM, Jon Callas wrote:


We know as a mathematical theorem that a block cipher with a back
door *is* a public-key system. It is a very, very, very valuable
thing, and suggests other mathematical secrets about hitherto
unknown ways to make fast, secure public key systems.



I've seen this assertion several times in this thread, but I cannot
help thinking that it depends on what *kind* of backdoor you're
talking about, because there are some cases in which as a crypto
amateur I simply cannot see how the construction of an asymmetric
cipher could be accomplished.

As an example of a backdoor that doesn't obviously permit an
asymmetric-cipher construction, consider a broken cipher that
has 128-bit symmetric keys; but one of these keys (which one
depends on an IV in some non-obvious way that's known to the
attacker) can be used to decrypt any message regardless of the
key used to encrypt it.  However, it is not a valid encryption
key; no matter what you encrypt with it you get the same
ciphertext.

There's a second key (also known to the attacker, given the IV)
which is also an invalid key; it has the property that no
matter what you encrypt or decrypt, you get the same result
(a sort of hash on the IV).

How would someone construct an asymmetric cipher from this?
Or is there some mathematical reason why such a beast as the
hypothetical broken cipher I describe, could not exist?

Bear



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Bruce Schneier has gotten seriously spooked

2013-09-08 Thread Bill Stewart

At 12:09 PM 9/7/2013, Chris Palmer wrote:

On Sat, Sep 7, 2013 at 1:33 AM, Brian Gladman b...@gladman.plus.com wrote:

 Why would they perform the attack only for encryption software? They
 could compromise people's laptops by spiking any popular app.

 Because NSA and GCHQ are much more interested in attacking communications
 in transit rather than attacking endpoints.

So they spike a popular download (security-related apps are less
likely to be popular) with a tiny malware add-on that scans every file
that it can read to see if it's an encryption key, cookie, password


More to the point, spike a popular download with remote-execution malware,
and download spiked patches for important binaries,
so the not-a-collection-target's browser uses known keys
(the opposite of the fortify patch that made 40-bit Mozilla do 128-bit),
and the disk encryption software broadcasts its keys or stashes them 
in plaintext


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Bruce Schneier has gotten seriously spooked

2013-09-08 Thread James A. Donald

On 2013-09-08 4:36 AM, Ray Dillinger wrote:


But are the standard ECC curves really secure? Schneier sounds like he's got
some innovative math in his next paper if he thinks he can show that they
aren't.


Schneier cannot show that they are trapdoored, because he does not know 
where the magic numbers come from.


To know if they are trapdoored, one has to know where those magic numbers come from.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] In the face of cooperative end-points, PFS doesn't help

2013-09-08 Thread John Kelsey
Your cryptosystem should be designed with the assumption that an attacker will 
record all old ciphertexts and try to break it later.  The whole point of 
encryption is to make that attack not scary.  We can never rule out future 
attacks, or secret ones now.  But we can move away from marginal key lengths 
and outdated, weak ciphers.  Getting people to do that is like pulling teeth, 
which is why we're still using RC4, and 1024-bit RSA keys and DH primes.  

--John


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] XORing plaintext with ciphertext

2013-09-08 Thread John Kelsey
It depends on the encryption scheme used.  For a stream cipher (including AES 
in counter or OFB mode), this yields the keystream.  If someone screws up and 
uses the same key and IV twice, you can use knowledge of the first plaintext to 
learn the second.  For other AES chaining modes, it's less scary, though if 
someone reuses their key and IV, knowing plaintext xor ciphertext from the 
first time the key,iv pair was used can reveal some plaintext from the second 
time it was used.  
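
A minimal sketch of the stream-cipher case, using AES-CTR via the
third-party Python "cryptography" package (an assumption; any stream
cipher shows the same effect):

    # Why plaintext XOR ciphertext (i.e. the keystream) is dangerous when a
    # key/IV pair is reused with a stream cipher such as AES in CTR mode.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def ctr_encrypt(key, iv, plaintext):
        enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
        return enc.update(plaintext) + enc.finalize()

    key, iv = os.urandom(16), os.urandom(16)
    p1, p2 = b"attack at dawn!!", b"retreat at dusk!"

    c1 = ctr_encrypt(key, iv, p1)   # first message
    c2 = ctr_encrypt(key, iv, p2)   # same key AND IV reused -- the mistake

    keystream = bytes(a ^ b for a, b in zip(p1, c1))         # P1 xor C1
    recovered = bytes(k ^ b for k, b in zip(keystream, c2))  # equals P2
    assert recovered == p2  # second plaintext recovered without the key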

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread John Kelsey

On Sep 7, 2013, at 3:25 PM, Christian Huitema huit...@huitema.net wrote:

 Another argument is “minimal dependency.” If you use public key, you depend 
 on both the public key algorithm, to establish the key, and the symmetric key 
 algorithm, to protect the session. If you just use symmetric key, you depend 
 on only one algorithm.
 
 Of course, that means getting pair-wise shared secrets, and protecting them. 
 Whether that’s harder or more fragile than maintaining a key ring is a matter 
 of debate. It is probably more robust than relying on CA.

Pairwise shared secrets are just about the only thing that scales worse than 
public key distribution by way of PGP key fingerprints on business cards.  The 
equivalent of CAs in an all-symmetric world is KDCs.  Instead of having the 
power to enable an active attack on you today, KDCs have the power to enable a 
passive attack on you forever.  If we want secure crypto that can be used by 
everyone, with minimal trust, public key is the only way to do it.  

One pretty sensible thing to do is to remember keys established in previous 
sessions, and use those combined with the next session.  For example, if we do 
Diffie-Hellman today and establish a shared key K, we should both store that 
key, and we should try to reuse it next time as an additional input into our 
KDF.  That is, next time we use Diffie-Hellman to establish K1, then we get 
actual-key = KDF(K1, K, other protocol details).  That means that if even one 
session was established securely, the communications are secure (up to the 
symmetric crypto strength) forevermore.  
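
A minimal sketch of that key-combining step, using HKDF from the Python
"cryptography" package as a stand-in for whatever KDF the protocol actually
specifies (the function and label names here are made up for illustration):

    # Mix the key remembered from the previous session into this session's
    # key derivation, so one secure exchange protects all later sessions.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def derive_session_key(fresh_dh_secret, previous_key, protocol_details):
        # actual-key = KDF(K1, K, other protocol details)
        salt = previous_key if previous_key else b"\x00" * 32  # first contact
        return HKDF(algorithm=hashes.SHA256(), length=32,
                    salt=salt, info=protocol_details).derive(fresh_dh_secret)

    # Both sides remember last_key between sessions and feed it back in:
    #   last_key = derive_session_key(k1, last_key, b"example-protocol-v1")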

-- Christian Huitema

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] [cryptography] Random number generation influenced, HW RNG

2013-09-08 Thread John Kelsey
There are basically two ways your RNG can be cooked:

a.  It generates predictable values.  Any good cryptographic PRNG will do this 
if seeded by an attacker.  Any crypto PRNG seeded with too little entropy can 
also do this.  

b.  It leaks its internal state in its output in some encrypted way.  Basically 
any cryptographic processing of the PRNG output is likely to clobber this. 

The only fix for (a) is to get enough entropy in your PRNG before generating 
outputs.  I suspect Intel's RNG and most other hardware RNGs are extremely 
likely to be better than any other source of entropy you can get on your 
computer, but you don't have to trust them 100%.  Instead, do whatever OS level 
collection you can, combine that with 256 bits from the Intel RNG, and throw in 
anything else likely to help--ethernet address, IP address, timestamp, anything 
you can get from the local network, etc.  Hash that all and feed it into a 
strong cryptographic PRNG--something like CTR-DRBG or HMAC-DRBG from SP 800-90. 
 If you do that, you will have guarded against both (a) and (b).  
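
A hedged sketch of that mixing step in Python: several sources hashed into
a seed, which then keys a deliberately simplified HMAC-based generator
standing in for a real SP 800-90 DRBG (no reseeding, personalization
strings, or health tests):

    # Combine OS entropy, a hardware-RNG stand-in, and "anything else likely
    # to help", hash the pool, and seed a deterministic generator.
    import hashlib, hmac, os, time, uuid

    def gather_seed_material():
        pool = [
            os.urandom(32),                     # OS-level collection
            os.urandom(32),                     # stand-in for 256 bits of RDRAND
            uuid.getnode().to_bytes(6, "big"),  # ethernet (MAC) address
            str(time.time_ns()).encode(),       # timestamp
        ]
        return hashlib.sha256(b"".join(pool)).digest()

    class SimpleHmacDrbg:
        """Illustrative HMAC-DRBG-style generator; not SP 800-90 conformant."""
        def __init__(self, seed):
            self.k = hmac.new(b"\x00" * 32, seed, hashlib.sha256).digest()
            self.v = hmac.new(self.k, b"\x01" * 32, hashlib.sha256).digest()

        def generate(self, n):
            out = b""
            while len(out) < n:
                self.v = hmac.new(self.k, self.v, hashlib.sha256).digest()
                out += self.v
            return out[:n]

    drbg = SimpleHmacDrbg(gather_seed_material())
    key_material = drbg.generate(32)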

--John

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Speaking of EDH (GnuTLS interoperability)

2013-09-08 Thread Viktor Dukhovni

Some of you may have seen my posts to postfix-users and openssl-users,
if so, apologies for the duplication.

  http://archives.neohapsis.com/archives/postfix/2013-09/thread.html#80
  http://www.mail-archive.com/openssl-users@openssl.org/index.html#71903

The short version is that while everyone is busily implementing
EDH, they may run into some interoperability issues.  GnuTLS clients
by default insist on a minimum EDH prime size that is not generally
interoperable (2432 bits).  Since the TLS protocol only negotiates
the use of EDH, but not the prime size (the EDH parameters are
unilaterally announced by the server), this setting, while
cryptographically sound, is rather poor engineering.

The context in which this was discovered is also amusing.  Exim
uses GnuTLS and has a work-around to drop the DH prime floor to
1024-bits, which is interoperable in practice.  Debian however
wanted to improve Exim to make it more secure, so the floor was
raised to 2048-bits in a Debian patch.  As a result STARTTLS from
Debian's Exim (before sanity was restored in Exim 4.80-3 in Debian
wheezy, AFAIK it is still broken in Debian squeeze) fails with Postfix,
Sendmail, and other SMTP servers.

In all probability this stronger version of Exim then needlessly
sends mail without TLS, since with SMTP, TLS is typically opportunistic,
and after TLS fails, delivery is likely retried in the clear!

-- 
Viktor.

P.S. shameless off-topic plug:  If you want better than opportunistic
TLS for email, consider adopting DNSSEC for your domains and
publishing TLSA RRs for your SMTP servers.  Postfix supports DANE
as of 2.11-20130825.  See

https://tools.ietf.org/html/draft-dukhovni-smtp-opportunistic-tls-01
http://www.postfix.org/TLS_README.html#client_tls_dane

Make sure to publish either IN TLSA 3 1 1 or IN TLSA 2 1 1
certificate associations.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-08 Thread John Kelsey
Let's suppose I design a block cipher such that, with a randomly generated key 
and 10,000 known plaintexts, I can recover that key.  For this to be useful in 
a world with relatively sophisticated cryptanalysts, I must have confidence 
that it is extremely hard to find my trapdoor, even when you can look closely 
at my cipher's design.   

At this point, what I have is a trapdoor one-way function.  You generate a 
random key K and then compute E(K,i) for i = 1 to 10,000.  The output of the 
one-way function is the ciphertext.  The input is K.  If nobody can break the 
cipher, then this is a one-way function.  If only I, who designed it, can break 
it, then it's a trapdoor one-way function.  

At this point, I have a perfectly fine public key encryption system.  To send 
me a message, choose a random K, use it to encrypt 1 through 10,000, and then 
send me the actual message encrypted after that in K.  If nobody but me can 
break the system, then this cipher works as my public key.  
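
A schematic Python sketch of that construction; AES stands in for the
hypothetical backdoored cipher E (AES itself has no known trapdoor), so
only the message flow is illustrated, not an actual trapdoor:

    # The "public key" is just the cipher definition. To encrypt to the
    # designer: pick a random K, publish E(K, 1) .. E(K, 10000), then send
    # the real message encrypted under K. Only the trapdoor holder can
    # recover K from those known plaintexts.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def E(key, block16):
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(block16) + enc.finalize()

    def encrypt_to_designer(message_block16):
        k = os.urandom(16)                                 # random session key K
        known = [E(k, i.to_bytes(16, "big")) for i in range(1, 10001)]
        return known, E(k, message_block16)                # (public part, body)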

The assumption that matters here is that you know enough cryptanalysis that it 
would be hard to hide a practical attack from you.  If you don't know about 
differential cryptanalysis, I can do the master key cryptosystem, but only 
until you learn about it, at which point you will break my cipher.   But if you 
can, say, hide the only good linear characteristics for some cipher in its 
S-boxes in a way that is genuinely intractable for anyone else to find, then 
you have a public key cryptosystem. You can publish the algorithm for hiding 
new linear characteristics in an S-box--this becomes the keypair generation 
algorithm.  The private key is the linear characteristic that lets you break 
the cipher with (say) 10,000 known plaintexts, the public key is the cipher 
definition.  

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Christian Huitema
 Pairwise shared secrets are just about the only thing that scales worse than 
 public key distribution by way of PGP key fingerprints on business cards.   
 The equivalent of CAs in an all-symmetric world is KDCs.  Instead of having 
 the power to enable an active attack on you today, KDCs have the power
  to enable a passive attack on you forever.  If we want secure crypto that 
 can be used by everyone, with minimal trust, public key is the only way to do 
 it.  

I am certainly not going to advocate Internet-scale KDC. But what if the 
application does not need to scale more than a network of friends?

-- Christian Huitema

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] MITM source patching [was Schneier got spooked]

2013-09-08 Thread Tim Newsham
Jumping in to this a little late, but:

  Q: Could the NSA be intercepting downloads of open-source
 encryption software and silently replacing these with their own versions?
  A: (Schneier) Yes, I believe so.

perhaps, but they would risk being noticed. Some people check file hashes
when downloading code. FreeBSD's port system even does it for you and
I'm sure other package systems do, too.   If this was going on en masse,
it would get picked up pretty quickly...  If targeted, on the other hand, it
would work well enough...

-- 
Tim Newsham | www.thenewsh.com/~newsham | @newshtwit | thenewsh.blogspot.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Lodewijk andré de la porte
Public key crypto depends on high-level math. That math has some asymmetric
property that we can use to achieve the public-private key relationships.

The problem is that the discovery of smarter math can invalidate the
asymmetry and make it more symmetrical. This has to do with P=NP, which is
also less trivial than a first explanation makes it seem. If it becomes
even effectively symmetrical (P is that) it will stop having the nice
usable property.

Symmetric cryptography does a much easier thing. It combines data and some
mysterious data (the key) in a way that you cannot extract the data from
the result without the mysterious data. It's like a + b = c. Given c you
need b to find a. The tricks that are involved are mostly about sufficiently
mixing the data, to make sure there are enough possible b's to never guess
it correctly and that all those b's have the same chance of being the one b.
Preferably even when you have both a and c, but that's really hard.

So I'd say Bruce said that in an effort to move to more well understood
cryptography. It is also a way to move people towards simply better
algorithms, as most public key systems are very, very bad.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-08 Thread Phillip Hallam-Baker
On Sat, Sep 7, 2013 at 8:53 PM, Gregory Perry gregory.pe...@govirtual.tv wrote:

 On 09/07/2013 07:52 PM, Jeffrey I. Schiller wrote:
  Security fails on the Internet for three important reasons, that have
  nothing to do with the IETF or the technology per-se (except for point
  3).
   1.  There is little market for “the good stuff”. When people see that
   they have to provide a password to login, they figure they are
   safe... In general the consuming public cannot tell the
   difference between “good stuff” and snake oil. So when presented
   with a $100 “good” solution or a $10 bunch of snake oil, guess
   what gets bought.
 The IETF mandates the majority of the standards used on the Internet
 today.


No, they do not. There are W3C and OASIS, both of which are larger now. And
there has always been the IEEE.

And they have no power to mandate anything. In fact one of the things I
have been trying to do is to persuade people that the Canute act of commanding
the tides to turn is futile. People need to understand that the IETF does
not have any power to mandate anything and that stakeholders will only
follow standards proposals if they see a value in doing so.




  If the IETF were truly serious about authenticity and integrity
 and confidentiality of communications on the Internet, then there would
 have been interim ad-hoc link layer encryption built into SMTP
 communications since the end of U.S. encryption export regulations.


Like STARTTLS which has been in the standards and deployed for a decade now?



 There would have been an IETF-mandated requirement for Voice over IP
 transport encryption, to provide a comparable set of confidentiality
 with VoIP communications that are inherent to traditional copper-based
 landline telephones.  There would at the very least be ad-hoc (read
 non-PKI integrated) DNSSEC.


What on earth is that? DNS is a directory so anything that authenticates
directory attributes is going to be capable of being used as a PKI.



 And then there is this Bitcoin thing.  I say this as an individual that
 doesn't even like Bitcoin.  For the record and clearly off topic, I hate
 Bitcoin with a passion and I believe that the global economic crisis
 could be easily averted by returning to a precious metal standard with
 disparate local economies and currencies, all in direct competition with
 each other for the best possible GDP.


The value of all the gold in the world ever mined is $8.2 trillion. The
NASDAQ alone traded $46 trillion last Friday.

There are problems with bitcoin but I would worry rather more about the
fact that the Feds have had no trouble at all shutting down every prior
attempt at establishing a currency of that type and the fact that there is
no anonymity whatsoever.





 So how does Bitcoin exist without the IETF?  In its infancy, millions of
 dollars of transactions are being conducted daily via Bitcoin, and there
 is no IETF involved and no central public key infrastructure to validate
 the papers of the people trading money with each other.  How do you
 counter this Bitcoin thing, especially given your tenure and experience
 at the IETF?


Umm I would suggest that it has more to do with supply and demand and the
fact that there is a large amount of economic activity that is locked out
of the formal banking system (including the entire nation of Iran) that is
willing to pay a significant premium for access to a secondary.


 Nonsense.  Port 25 connects to another port 25 and exchanges a public
 key.  Then a symmetrically keyed tunnel is established.  This is not a
 complex thing, and could have been written into the SMTP RFC decades ago.


RFC 3207, published in 2002.


-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-08 Thread Phillip Hallam-Baker
On Sat, Sep 7, 2013 at 10:35 PM, Gregory Perry
gregory.pe...@govirtual.tv wrote:

  On 09/07/2013 09:59 PM, Phillip Hallam-Baker wrote:
 
 Anyone who thinks Jeff was an NSA mole when he was one of the main people
 behind the MIT version of PGP and the distribution of Kerberos is talking
 daft.
 
  I think that the influence was rather more subtle and was more directed
 at encouraging choices that would make the crypto hopelessly impractical
 so people would not use it than in adding backdoors.
 
  
  One of the lessons of PRISM is that metadata is very valuable. In
 particular social network analysis. If I know who is talking to whom then I
 have pretty much 90% of the data needed to wrap up any conspiracy against
 the government. So lets make sure we all use PGP and sign each other's
 keys...

 1) At the core of the initial PGP distribution authored by Philip R.
 Zimmermann, Jr. was the RSA public key encryption method

 2) At that time, the Clinton administration and his FBI was advocating
 widespread public key escrow mechanisms, in addition to the inclusion of
 the Clipper chip to all telecommunication devices to be used for remote
 lawful intercepts

 3) Shortly after the token indictment of Zimmerman (thus prompting
 widespread use and promotion of the RSA public key encryption algorithm),
 the Clinton administration's FBI then advocated a relaxation of encryption
 export regulations in addition to dropping all plans for the Clipper chip

 4) On September 21, 2000, the patent for the RSA public key encryption
 algorithm expired, yet RSA released their open source version of the RSA
 encryption algorithm two weeks prior to their patent's expiry for use
 within the public domain

 5) Based upon the widespread use and public adoption of the RSA public key
 encryption method via the original PGP debacle, RSA (now EMC) could have
 easily adjusted the initial RSA patent term under the auspice of national
 security, which would have guaranteed untold millions (if not billions) of
 additional dollars in revenue to the corporate RSA patent holder

 You do the math


This is seriously off topic here but the idea that the indictment of Phil
Zimmerman was a token effort is nonsense. I was not accusing Phil Z. of
being a plant.

Not only was Louis Freeh going after Zimmerman for real, he went against
Clinton in revenge for the Clipper chip program being junked. He spent much
of Clinton's second term conspiring with Republicans in Congress to get
Clinton impeached.

Clipper was an NSA initiative that began under Bush or probably even
earlier. They got the incoming administration to endorse it as a fait
accompli.


Snowden and Manning on the other hand... Well I do wonder if this is all
some mind game to get people to secure the Internet against cyberattacks.
But the reason I discount that as a possibility is that what has been
revealed has completely destroyed trust. We can't work with the Federal
Government on information security the way that we did in the past any more.

I think the administration needs to make a downpayment on restoring trust.
They could begin by closing the gulag in Guantanamo.

-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] MITM source patching [was Schneier got spooked]

2013-09-08 Thread Phillip Hallam-Baker
On Sun, Sep 8, 2013 at 1:42 AM, Tim Newsham tim.news...@gmail.com wrote:

 Jumping in to this a little late, but:

   Q: Could the NSA be intercepting downloads of open-source
  encryption software and silently replacing these with their own
 versions?
   A: (Schneier) Yes, I believe so.

 perhaps, but they would risk being noticed. Some people check file hashes
 when downloading code. FreeBSD's port system even does it for you and
 I'm sure other package systems do, too.   If this was going on en masse,
 it would get picked up pretty quickly...  If targeted, on the other hand,
 it
 would work well enough...


But is the source compromised in the archive?


I think we need a different approach to source code management. Get rid of
user authentication completely; passwords and SSH are both a fragile
approach. Instead, every code update to the repository should be signed and
recorded in an append-only log, and the log should be public and enable any
party to audit the set of updates at any time.

This would be 'Code Transparency'.

Problem is we would need to modify Git to implement it.
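
A minimal sketch of what such an append-only log could look like
(hash-chained entries that anyone can audit; signature creation and
verification are elided, and none of this is an existing Git feature):

    import hashlib, json

    class CodeTransparencyLog:
        """Each entry is hash-chained to its predecessor, so tampering with
        history changes every later entry and is visible to any auditor."""
        def __init__(self):
            self.entries = []

        def append(self, commit_id, author, signature):
            parent = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
            record = {"parent": parent, "commit": commit_id,
                      "author": author, "signature": signature}
            record["entry_hash"] = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            self.entries.append(record)
            return record["entry_hash"]

        def audit(self):
            parent = "0" * 64
            for e in self.entries:
                body = {k: v for k, v in e.items() if k != "entry_hash"}
                digest = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                if e["parent"] != parent or e["entry_hash"] != digest:
                    return False
                parent = e["entry_hash"]
            return True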

-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-08 Thread Phillip Hallam-Baker
On Sat, Sep 7, 2013 at 9:50 PM, John Gilmore g...@toad.com wrote:

   First, DNSSEC does not provide confidentiality.  Given that, it's not
   clear to me why the NSA would try to stop or slow its deployment.

 DNSSEC authenticates keys that can be used to bootstrap
 confidentiality.  And it does so in a globally distributed, high
 performance, high reliability database that is still without peer in
 the world.

 It was never clear to me why DNSSEC took so long to deploy, though
 there was one major moment at an IETF in which a member of the IESG
 told me point blank that Jim Bidzos had made himself so hated that the
 IETF would never approve a standard that required the use of the RSA
 algorithm -- even despite a signed blanket license for use of RSA for
 DNSSEC, and despite the expiration of the patent.  I


No, that part is untrue. I sat at the table with Jeff Schiller and Burt
Kaliski when Burt pitched S/MIME at the IETF. He was Chief Scientist of RSA
Labs at the time.

Jim did go after Phil Z. over PGP initially. But Phil Z. was violating the
patent at the time. That led to RSAREF and the MIT version of PGP.


DNSSEC was (and is) a mess as a standard because it is an attempt to take
a directory designed around some very tight network constraints, and with
a very poor architecture, and retrofit it into a PKI.

PS: My long-standing domain registrar (enom.com) STILL doesn't support
 DNSSEC records -- which is why toad.com doesn't have DNSSEC
 protection.  Can anybody recommend a good, cheap, reliable domain
 registrar who DOES update their software to support standards from ten
 years ago?


The Registrars are pure marketing operations. Other than GoDaddy which
implemented DNSSEC because they are trying to sell the business and more
tech looks kewl during due diligence, there is not a market demand for
DNSSEC.

One problem is that the Registrars almost invariably sell DNS registrations
at cost or at a loss and make the money up on value added products. In
particular SSL certificates.


-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Andrea Shepard
On Sat, Sep 07, 2013 at 08:45:34PM -0400, Perry E. Metzger wrote:
 I'm unaware of an ECC equivalent of the Shor algorithm. Could you
 enlighten me on that?

Shor's algorithm is a Fourier transform, essentially.  It can find periods of
a function you can implement as a quantum circuit with only polynomially many
invocations.  In particular, when that function is exponentiation in a group,
it can find the orders of group elements.  This allows finding discrete
logarithms in BQP for any group in which exponentiation is in P.

-- 
Andrea Shepard
and...@persephoneslair.org
PGP fingerprint (ECC): 2D7F 0064 F6B6 7321 0844  A96D E928 4A60 4B20 2EF3
PGP fingerprint (RSA): 7895 9F53 C6D1 2AFD 6344  AF6D 35F3 6FFA CBEC CA80


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

[Cryptography] Trapdoor symmetric key

2013-09-08 Thread Phillip Hallam-Baker
Two caveats on the commentary about a symmetric key algorithm with a
trapdoor being a public key algorithm.

1) The trapdoor need not be a good public key algorithm, it can be flawed
in ways that would make it unsuited for use as a public key algorithm. For
instance being able to compute the private key from the public or deduce
the private key from multiple messages.

2) The trapdoor need not be a perfect decrypt. A trapdoor that reduced the
search space for brute force search from 128 bits to 64 or only worked on
some messages would be enough leverage for intercept purposes but make it
useless as a public key system.

-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Ray Dillinger

On 09/07/2013 07:51 PM, John Kelsey wrote:


Pairwise shared secrets are just about the only thing that scales
worse than public key distribution by way of PGP key fingerprints on
business cards.  
If we want secure crypto that can be used by everyone, with minimal
trust, public key is the only way to do it.

One pretty sensible thing to do is to remember keys established in
previous sessions, and use those combined with the next session.


You've answered your own conundrum!

Of course the idea of remembering keys established in previous
sessions and using them combined with keys negotiated in the next
session is a scalable way of establishing and updating pairwise
shared secrets.

In fact I'd say it's a very good idea.  One can use a distributed
public key (infrastructure fraught with peril and mismanagement)
for introductions, and thereafter communicate using a pairwise
shared secret key (locally managed) which is updated every time
you interact, providing increasing security against anyone who
hasn't monitored and retained *ALL* previous communications. In
order to get at your stash of shared secret keys Eve and Mallory
have to mount an attack on your particular individual machine,
which sort of defeats the "trawl everything by sabotaging vital
infrastructure at crucial points" model that they're trying to
accomplish.

One thing that weakens the threat model (so far) is that storage
is not yet so cheap that Eve can store *EVERYTHING*. If Eve has
to break all previous sessions before she can hand your current
key to Mallory, first her work factor is drastically increased,
second she has to have all those previous sessions stored, and
third, if Alice and Bob have ever managed even one secure exchange
or one exchange that's off the network she controls (say by local
bluetooth link) she fails. Fourth, even if she *can* store everything
and the trawl *has* picked up every session, she still has to guess
*which* of her squintillion stored encrypted sessions were part
of which stream of communications before she knows which ones
she has to break.

Bear

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] [cryptography] Random number generation influenced, HW RNG

2013-09-08 Thread Eugen Leitl
- Forwarded message from James A. Donald jam...@echeque.com -

Date: Sun, 08 Sep 2013 08:34:53 +1000
From: James A. Donald jam...@echeque.com
To: cryptogra...@randombit.net
Subject: Re: [cryptography] Random number generation influenced, HW RNG
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:17.0) Gecko/20130801 
Thunderbird/17.0.8
Reply-To: jam...@echeque.com

On 2013-09-08 3:48 AM, David Johnston wrote:
 Claiming the NSA colluded with intel to backdoor RdRand is also to
 accuse me personally of having colluded with the NSA in producing a
 subverted design. I did not.

Well, since you personally did this, would you care to explain the
very strange design decision to whiten the numbers on chip, and not
provide direct access to the raw unwhitened output?

A decision that, even assuming the utmost virtue on the part of the
designers, leaves open the possibility of malfunctions going
undetected.

That is a question a great many people have asked, and we have not
received any answers.

Access to the raw output would have made it possible to determine that
the random numbers were in fact generated by the physical process
described, since it is hard and would cost a lot of silicon to
simulate the various subtle off-white characteristics of a well-described
actual physical process.


___
cryptography mailing list
cryptogra...@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

- End forwarded message -
-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
__
ICBM: 48.07100, 11.36820 http://ativel.com http://postbiota.org
AC894EC5: 38A5 5F46 A4FF 59B8 336B  47EE F46E 3489 AC89 4EC5
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] [tor-talk] NIST approved crypto in Tor?

2013-09-08 Thread Eugen Leitl
- Forwarded message from Gregory Maxwell gmaxw...@gmail.com -

Date: Sun, 8 Sep 2013 06:44:57 -0700
From: Gregory Maxwell gmaxw...@gmail.com
To: This mailing list is for all discussion about theory, design, and 
development of Onion Routing.
tor-t...@lists.torproject.org
Subject: Re: [tor-talk] NIST approved crypto in Tor?
Reply-To: tor-t...@lists.torproject.org

On Sat, Sep 7, 2013 at 8:09 PM, Gregory Maxwell gmaxw...@gmail.com wrote:
 On Sat, Sep 7, 2013 at 4:08 PM, anonymous coward
 anonymous.cow...@posteo.de wrote:
 Bruce Schneier recommends *not* to use ECC. It is safe to assume he
 knows what he says.

 I believe Schneier was being careless there.  The ECC parameter sets
 commonly used on the internet (the NIST P-xxxr ones) were chosen using
 a published deterministically randomized procedure.  I think the
 notion that these parameters could have been maliciously selected is a
 remarkable claim which demands remarkable evidence.

Okay, I need to eat my words here.

I went to review the deterministic procedure because I wanted to see
if I could reproduce the SECP256k1 curve we use in Bitcoin. They don't
give a procedure for the Koblitz curves, but they have far less design
freedom than the non-Koblitz curves so I thought perhaps I'd stumble into it
with the most obvious procedure.

The deterministic procedure basically computes SHA1 on some seed and
uses it to assign the parameters, then checks the curve order, etc.;
wash, rinse, repeat.

Then I looked at the random seed values for the P-xxxr curves. For
example, P-256r's seed is c49d360886e704936a6678e1139d26b7819f7e90.

_No_ justification is given for that value. The stated purpose of the
"verifiably random" procedure is that it "ensures that the parameters
cannot be predetermined. The parameters are therefore extremely unlikely
to be susceptible to future special-purpose attacks, and no trapdoors can
have been placed in the parameters during their generation."

Considering the stated purpose I would have expected the seed to be
some small value like ... 6F and for all smaller values to fail the
test. Anything else would have suggested that they tested a large
number of values, and thus the parameters could embody any undisclosed
mathematical characteristic whose rareness is only bounded by how many
times they could run SHA1 and test.
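
A sketch of that counting argument in Python; passes_curve_tests() below is
a stand-in for the real acceptance checks (prime order, not anomalous, and
so on), not the actual X9.62 procedure:

    # If the seed had been chosen as the smallest counter whose SHA1-derived
    # candidate passes the tests, the published seed would look tiny --
    # nothing like c49d360886e704936a6678e1139d26b7819f7e90.
    import hashlib

    def passes_curve_tests(candidate):
        raise NotImplementedError("stand-in for the real curve acceptance tests")

    def first_acceptable_seed():
        counter = 0
        while True:
            seed = counter.to_bytes(20, "big")       # 160-bit seed
            candidate = hashlib.sha1(seed).digest()  # derive candidate params
            if passes_curve_tests(candidate):
                return seed                          # expected: a small number
            counter += 1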

I now personally consider this to be smoking evidence that the
parameters are cooked. Maybe they were only cooked in ways that make
them stronger? Maybe

SECG also makes a somewhat curious remark:

The elliptic curve domain parameters over (primes) supplied at each
security level typically consist of examples of two different types of
parameters — one type being parameters associated with a Koblitz curve
and the other type being parameters chosen verifiably at random —
although only verifiably random parameters are supplied at export
strength and at extremely high strength.

The fact that only verifiably random curves are given for export strength
would seem to make more sense if you cynically read "verifiably random"
as "backdoored to all heck" (though it could be more innocently explained
that the performance improvement of Koblitz wasn't so important there,
and/or they considered those curves weak enough to not bother with the
extra effort required to produce the Koblitz curves).
-- 
tor-talk mailing list - tor-t...@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk

- End forwarded message -
-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
__
ICBM: 48.07100, 11.36820 http://ativel.com http://postbiota.org
AC894EC5: 38A5 5F46 A4FF 59B8 336B  47EE F46E 3489 AC89 4EC5
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] MITM source patching [was Schneier got spooked]

2013-09-08 Thread Eugen Leitl
On Sat, Sep 07, 2013 at 07:42:33PM -1000, Tim Newsham wrote:
 Jumping in to this a little late, but:
 
   Q: Could the NSA be intercepting downloads of open-source
  encryption software and silently replacing these with their own versions?
   A: (Schneier) Yes, I believe so.
 
 perhaps, but they would risk being noticed. Some people check file hashes
 when downloading code. FreeBSD's port system even does it for you and
 I'm sure other package systems do, too.   If this was going on en masse,

There is a specific unit within NSA that attempts to obtain keys not in
the key cache. Obviously, package-signing secrets are extremely valuable,
since they're likely to work for hardened (or so they think) targets.

For convenience reasons the signing secrets are typically not secured.
If something is online you don't even need physical access to obtain it.

The workaround for this is to build packages from source, especially
if there's a deterministic build available so that you can check whether
the published binary for public consumption is kosher, and verify
signatures with information obtained out of band. Checking key 
fingerprints on dead tree given in person is inconvenient, and does 
not give you complete trust, but it is much better than just blindly 
installing something from online repositories.
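
A minimal Python sketch of that out-of-band check (the file name and the
expected digest are made-up placeholders):

    # Compare a locally computed digest of a downloaded artifact against a
    # fingerprint obtained over a separate channel (paper, phone, in person).
    import hashlib, sys

    def sha256_of_file(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "replace-with-the-digest-you-got-out-of-band"
    if sha256_of_file("package-1.2.3.tar.gz") != expected:
        sys.exit("digest mismatch -- do not install")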

 it would get picked up pretty quickly...  If targeted, on the other hand, it
 would work well enough...


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Jaap-Henk Hoepman

 
 Symmetric cryptography does a much easier thing. It combines data and some 
 mysterious data (key) in a way that you cannot extract data without the 
 mysterious data from the result. It's like a + b = c. Given c you need b to 
 find a. The tricks that are involved are mostly about sufficiently mixing 
 data, to make sure there's enough possible b's to never guess it correctly 
 and that all those b's have the same chance of being the one b. Preferably 
 even when you have both a and c, but that's really hard. 
 
 So I'd say Bruce said that in an effort to move to more well understood 
 cryptography. It is also a way to move people towards simply better 
 algorithms, as most public key systems are very, very bad.

Funny. I would have said exactly the opposite: public key crypto is much better 
understood because it is based on mathematical theorems and reductions to 
(admittedly presumed) hard problems, whereas symmetric crypto is really a black 
art that mixes some simple bitwise operations and hopes for the best (yes, I 
know this is a bit of a caricature...)

Jaap-Henk
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Trapdoor symmetric key

2013-09-08 Thread Phillip Hallam-Baker
On Sun, Sep 8, 2013 at 12:19 PM, Faré fah...@gmail.com wrote:

 On Sun, Sep 8, 2013 at 9:42 AM, Phillip Hallam-Baker hal...@gmail.com
 wrote:
  Two caveats on the commentary about a symmetric key algorithm with a
  trapdoor being a public key algorithm.
 
  1) The trapdoor need not be a good public key algorithm, it can be
 flawed in
  ways that would make it unsuited for use as a public key algorithm. For
  instance being able to compute the private key from the public or deduce
 the
  private key from multiple messages.
 
 Then it's not a symmetric key algorithm with a trapdoor, it's just a
 broken algorithm.


But the compromise may only be visible if you have access to some
cryptographic technique which we don't currently have.

The point I am making is that a backdoor in a symmetric function need not
be a secure public key system; it could be a breakable one. And that is a
much wider class of functions than public key cryptosystems. There are many
approaches that were tried before RSA and ECC were settled on.




  2) The trapdoor need not be a perfect decrypt. A trapdoor that reduced
 the
  search space for brute force search from 128 bits to 64 or only worked on
  some messages would be enough leverage for intercept purposes but make it
  useless as a public key system.
 
 I suppose the idea is that by using the same trapdoor algorithm or
 algorithm family
 and doubling the key size (e.g. 3DES style), you get a 256-bit
 symmetric key system
 that can be broken in 2^128 attempts by someone with the system's private
 key
 but 2^256 by someone without. If in your message you then communicate 128
 bits
 of information about your symmetric key, the guy with the private key
 can easily crack your symmetric key, whereas others just can't.
 Therefore that's a great public key cryptography system.


2^128 is still beyond the reach of brute force.

2^64 on a 128-bit key, which is the one we usually use, on the other hand...



Perhaps we should do a test, move to 256 bits on a specific date across the
net and see if the power consumption rises near the NSA data centers.

-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Bruce Schneier has gotten seriously spooked

2013-09-08 Thread james hughes


On Sep 7, 2013, at 6:30 PM, James A. Donald jam...@echeque.com wrote:

 On 2013-09-08 4:36 AM, Ray Dillinger wrote:
 
 But are the standard ECC curves really secure? Schneier sounds like he's got
 some innovative math in his next paper if he thinks he can show that they
 aren't.
 
 Schneier cannot show that they are trapdoored, because he does not know where 
 the magic numbers come from.
 
 To know if they are trapdoored, one has to know where those magic numbers come from.

That will not work.

When the community questioned the source of the DES S-boxes, Don Coppersmith 
and Walt Tuchman of IBM at the time openly discussed how they were 
generated, and it still did not quell the suspicion. I bet there are many that 
still believe DES has a yet-to-be-determined backdoor. 

There is no way to prove the absence of a back door, only to prove or argue 
that a backdoor exists with (at least) a demonstration or evidence one is being 
used. Was there any hint in the purloined material to this point? There seems 
to be the opposite. TLS using ECC is not common on the Internet (see "Ron was 
wrong, Whit is right"). If there is a vulnerability in ECC it is not the source 
of today's consternation. (ECC is common on SSH; see "Mining Your Ps and Qs: 
Detection of Widespread Weak Keys in Network Devices".)

I will be looking forward to Bruce's next paper.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Symmetric cipher + Backdoor = Public Key System

2013-09-08 Thread Jerry Leichter
On Sep 7, 2013, at 7:56 PM, Perry E. Metzger wrote:
 I'm not as yet seeing that a block cipher with a backdoor is a public 
 key system,
 
 Then read the Blaze & Feigenbaum paper I posted a link to. It makes a
 very good case for that, one that Jerry unaccountably does not seem to
 believe. Blaze seemed to still believe the result as of a few days ago.
I've given quite a bit of argument as to why the result doesn't really say what 
it seems to say.  Feel free to respond to the actual counterexamples I gave, 
rather than simply say I unaccountably don't believe the paper.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Jerry Leichter
On Sep 7, 2013, at 11:06 PM, Christian Huitema wrote:

 Pairwise shared secrets are just about the only thing that scales worse than 
 public key distribution by way of PGP key fingerprints on business cards.   
 The equivalent of CAs in an all-symmetric world is KDCs. ... If we want 
 secure crypto that can be used by everyone, with minimal trust, public key 
 is the only way to do it.  
 
 I am certainly not going to advocate Internet-scale KDC. But what if the 
 application does not need to scale more than a network of friends?
Indeed, that was exactly what I had in mind when I suggested we might want to 
do without private key cryptography on another stream.

Not every problem needs to be solved on Internet scale.  In designing and 
building cryptographic systems simplicity of design, limitation to purpose, and 
humility are usually more important than universality.  Most of the email 
conversations I have are with people I've corresponded with in the past, or 
somehow related to people I've corresponded with in the past.  In the first 
case, I already have their keys - the only really meaningful notion of the 
right key is key continuity (combined with implied verification if we also 
have other channels of communication - if someone manages to slip me a bogus 
key for someone who I talk to every day, I'm going to figure that out very 
quickly.)  In the second case - e.g., an email address from a From field in a 
message on this list - the best I can possibly hope for initially is that I can 
be certain I'm corresponding with whoever sent that message to the list.  
There's no way I can bind that to a particular person in the real world
without something more.

Universal schemes, when (not if - there's not a single widely fielded system 
that hasn't been found to have serious bugs over its operational lifetime, and I 
don't expect to see one in *my* lifetime) they fail, lead to universal attacks. 
 I need some kind of universal scheme for setting up secure connections to buy 
something from a vendor I've never used before, but frankly the NSA doesn't need 
to break into anything to get that information - the vendor, my bank, my CC 
company, credit agencies are all collecting and selling it anyway.

The other thing to keep in mind - and I've come back to this point repeatedly - 
is that the world we are now designing for is very different from the world of 
the mid- to late-1990's when the current schemes were designed.  Disk is so 
large and so cheap that any constraint in the old designs that was based on a 
statement like "doing this would require the user to keep n^2 key pairs, which 
is too much" just doesn't make any sense any more - certainly not for 
individuals, not even for small organizations:  If n is determined by the 
number of correspondents you have, then squaring it still gives you a small 
number relative to current disk sizes.  Beyond that, everyone today (or in the 
near future) can be assumed to carry with them computing power that rivals or 
exceeds the fastest machines available back in the day - and to have an 
always-on network connection whose speed rivals that of *backbone* links back 
then.

Yes, there are real issues about how much you can trust that computer you carry 
around with you - but after the recent revelations, is the situation all that 
different for the servers you talk to, the routers in the network between you, 
the crypto accelerators many of the services use - hell, every piece of 
hardware and software?  For most people, that will always be the situation:  
They will not be in a position to check their hardware, much less build their 
own stuff from the ground up.  In this situation, about all you can do is try 
to present attackers with as many *different* targets as possible, so that they 
need to split their efforts.  It's guerrilla warfare instead of a massed army.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] In the face of cooperative end-points, PFS doesn't help

2013-09-08 Thread Marcus D. Leech

On 09/07/2013 06:57 PM, james hughes wrote:


PFS may not be a panacea but does help.

There's no question in my mind that PFS helps.  I have, in the past, been
very much in favor of turning on PFS support in various protocols, when it
has been available.  And I fully understand what the *purpose* of PFS is.

But it's not entirely clear to me that it will help enough in the scenarios
under discussion.  If we assume that mostly what NSA are doing is acquiring
a site RSA key (either through donation on the part of the site, or through
factoring or other means), then yes, absolutely, PFS will be a significant
roadblock.  If, however, they're getting session-key material (perhaps
through back-doored software, rather than explicit cooperation by the
target website), then PFS does nothing to help us.  And indeed, that same
class of compromised site could just as well be leaking plaintext, although
leaking session keys is lower-profile.

I think all this amounts to a preamble for a call to think deeply, again,
about end-to-end encryption.  I used OTR on certain chat sessions, for
example, because the consequences of the server in the middle disclosing
the contents of those conversations protected by OTR could have dire
consequences for one of the parties involved.

Jeff Schiller pointed out a little while ago that the crypto-engineering
community have largely failed to make end-to-end encryption easy to use.
There are reasons for that, some technical, some political, but it is
absolutely true that end-to-end encryption, for those cases where end to
end is the obvious and natural model, has not significantly materialized on
the Internet.  Relatively speaking, a handful of crypto-nerds use
end-to-end schemes for e-mail and chat clients, and so on, but the vast
majority of the Internet user-space?  Not so much.



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Suite B after today's news

2013-09-08 Thread Ralph Holz
Hi,

 BTW, I do not really agree with your argument it should be done via TLS
 extension.
 
 It's done that way based on discussions on (and mostly off) the TLS list by
 various implementers, that was the one that caused the least dissent.

I've followed that list for a while. What I find weird is that there
should be much dissent at all. This is about increasing security based
on adding quite well-understood mechanisms. What's to be so opposed to
there?

Does adding some ciphersuites really require an extension, maybe even on
the Standards Track? I shouldn't think so, looking at the RFCs that
already do this, e.g. RFC 5289 for AES-GCM. Just go for an
Informational. FWIW, even HTTPS is Informational.

It really boils down to this: how fast do we want to have it? I spoke to
one of the TACK devs a little while ago, and he told me they'd go for
the IETF, too, but their focus was really on getting the code out and
see an effect before that. The same seems to be true for CT - judging by
their commit frequency in the past weeks, they have similar goals.

I don't think it hurts to let users and operators vote with their feet here.

Ralph
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Protecting Private Keys

2013-09-08 Thread Peter Gutmann
Jeffrey I. Schiller j...@mit.edu writes:

If I was the NSA, I would be scavenging broken hardware from “interesting”
venues and purchasing computers for sale in interesting locations. I would be
particularly interested in stolen computers, as they have likely not been
wiped.

Just buy second-hand HSMs off eBay, they often haven't been wiped, and the
PINs are conveniently taped to the case.  I have a collection of interesting
keys (or at least keys from interesting places, including government
departments) obtained in this way.

Peter.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-08 Thread Jerry Leichter
On Sep 7, 2013, at 11:45 PM, John Kelsey wrote:

 Let's suppose I design a block cipher such that, with a randomly generated 
 key and 10,000 known plaintexts, I can recover that key. ... At this point, 
 what I have is a trapdoor one-way function.  You generate a random key K and 
 then compute E(K,i) for i = 1 to 10,000.  The output of the one-way function 
 is the ciphertext.  The input is K.  If nobody can break the cipher, then 
 this is a one-way function.  If only I, who designed it, can break it, then 
 it's a trapdoor one-way function. ... At this point, I have a perfectly fine 
 public key encryption system.  To send me a message, choose a random K, use 
 it to encrypt 1 through 10,000, and then send me the actual message encrypted 
 after that in K.  If nobody but me can break the system, then this cipher 
 works as my public key.
OK, let's look at this another way.  The broader argument being made here 
breaks down into three propositions:

1.  If you have a way to spike a block cipher based on embedding a secret in 
it, you have a way to create something with the formal properties of a public 
key cryptosystem - i.e., there is a function E(P) which anyone can compute on 
any plaintext P, but given E(P), only you can invert to recover P.

2.  Something with the formal properties of a public key cryptosystem can be 
used as a *practical* public key cryptosystem.

3.  A practical public-key cryptosystem is much more valuable than a way to 
embed a secret in a block cipher, so if anyone came up with the latter, they 
would certainly use it to create the former, as it's been the holy grail of 
cryptography for many years to come up with a public key system that didn't 
depend on complex mathematics with uncertain properties.

If we assume these three propositions, and look around us and observe the lack 
of the appropriate kinds of public key systems, we can certainly conclude that 
no one knows how to embed a secret in a block cipher.

Proposition 1, which is all you specifically address, is certainly true.  I 
claim that Propositions 2 and 3 are clearly false.

In fact, Proposition 3 isn't even vaguely mathematical - it's some kind of 
statement about the values that cryptographers assign to different kinds of 
primitives and to publication.  It's quite true that if anyone in the academic 
world were to come up with a way to create a practical public key cryptosystem 
without a dependence on factoring or DLP, they would publish to much acclaim.  
(Of course, there *are* a couple of such systems known - they were published 
years ago - but no one uses them for various reasons.  So acclaim ... well, 
maybe.)  Then again, an academic cryptographer who discovered a way to hide a 
secret in a block cipher would certainly publish - it would be really 
significant work.  So we never needed this whole chain of propositions to begin 
with:  It's self-evidently true that no one in the public community knows how 
to embed a secret in a block cipher.

But ... since we're talking *values*, what are NSA's values?  Would *they* have 
any reason to publish if they found a way to embed a secret in a block cipher? 
Hell, no!  Why would they want to give away such valuable knowledge?  Would 
they produce a private-key system based on their breakthrough?  Maybe, for 
internal use.  How would we ever know?

But let's talk mathematics, not psychology and politics.  You've given a 
description of a kind of back door that *would* produce a practical public key 
system.  But I've elsewhere pointed out that there are all kinds of back doors. 
 Suppose that my back door reduces the effective key size of AES to 40 bits.  
Even 20+ years ago, NSA was willing to export 40-bit crypto; presumably they 
were willing to do the brute-force computation to break it.  Today, it would be 
a piece of cake.  But would a public-key system that requires around 2^40 
operations to encrypt be *practical*?  Even today, I doubt it.  And if you're 
willing to do 2^40 operations, are you willing to do 2^56?  With specialized 
hardware, that, too, has been easy for years.  NSA can certainly have that 
specialized hardware for code breaking - will you buy it for encryption?
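
(For scale, a back-of-the-envelope sketch in Python; the throughput figure is
an assumption, not a measurement.)

    # Rough brute-force cost; 1e7 trial encryptions/second/core is assumed.
    rate = 1e7
    for bits in (40, 56):
        days = 2**bits / rate / 86400
        print("2^%d trials: roughly %.0f day(s) on one core" % (bits, days))

Two to the 40th is an afternoon on commodity gear; two to the 56th still wants
the specialized hardware mentioned above.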

 The assumption that matters here is that you know enough cryptanalysis that 
 it would be hard to hide a practical attack from you.  If you don't know 
 about differential cryptanalysis, I can do the master key cryptosystem, but 
 only until you learn about it, at which point you will break my cipher.
In fact, this is an example I was going to give:  In a world in which 
differential crypto isn't known, it *is* a secret that's a back door.  Before 
DC was published, people seriously proposed strengthening DES by using a 
448-bit (I think that's the number) key - just toss the round key computation 
mechanism and provide all the keying for all the rounds.  If that had been 
widely used, NSA would have been able to break it using DC.

Of course we know about DC.  But the only 

Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-08 Thread Daniel Cegiełka
Hi,

http://www.youtube.com/watch?v=K8EGA834Nok

Is DNSSEC really the right solution?

Daniel
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Jerry Leichter
On Sep 8, 2013, at 10:45 AM, Ray Dillinger wrote:
 Pairwise shared secrets are just about the only thing that scales
 worse than public key distribution by way of PGP key fingerprints on
 business cards.  
 If we want secure crypto that can be used by everyone, with minimal
 trust, public key is the only way to do it.
 
 One pretty sensible thing to do is to remember keys established in
 previous sessions, and use those combined with the next session.
 
 You've answered your own conundrum!
 
 Of course the idea of remembering keys established in previous
 sessions and using them combined with keys negotiated in the next
 session is a scalable way of establishing and updating pairwise
 shared secrets
It's even better than you make out.  If Eve does manage to get hold of 
Alice's current keys, and uses them to communicate with Bob, *after the 
communication, Bob will have updated his keys - but Alice will not have*.  The 
next time they communicate, they'll know they've been compromised.  That is, 
this is tamper-evident cryptography.
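
(A minimal sketch of that ratcheting, illustrative only: both sides fold each
session's fresh key material into the pairwise secret, so a stolen copy goes
stale, and any use of it leaves the two legitimate parties out of step.)

    import hashlib

    def ratchet(pairwise_secret, session_key_material):
        # next pairwise secret = H(old secret || fresh material from this session)
        return hashlib.sha256(pairwise_secret + session_key_material).digest()

    alice = bob = b"initial pairwise secret"
    for n in range(3):                        # three sessions; both sides stay in sync
        fresh = ("session-%d" % n).encode()   # stand-in for negotiated session keys
        alice, bob = ratchet(alice, fresh), ratchet(bob, fresh)
    assert alice == bob

    # A copy Eve stole before those sessions no longer matches; and if Eve *uses*
    # her copy with Bob, Bob ratchets forward while Alice does not, so the two
    # legitimate parties see the mismatch the next time they talk.
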

There was a proposal out there based on something very much like this to create 
tamper-evident signatures.  I forget the details - it was a couple of years ago 
- but the idea was that every time you sign something, you modify your key in 
some random way, resulting in signatures that are still verifiably yours, but 
also contain the new random modification.  Beyond that, I don't recall how it 
worked - it was quite clever... ah, here it is:  
http://eprint.iacr.org/2005/147.pdf
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-08 Thread Peter Bowen
On Sat, Sep 7, 2013 at 6:50 PM, John Gilmore g...@toad.com wrote:
 PS: My long-standing domain registrar (enom.com) STILL doesn't support
 DNSSEC records -- which is why toad.com doesn't have DNSSEC
 protection.  Can anybody recommend a good, cheap, reliable domain
 registrar who DOES update their software to support standards from ten
 years ago?

PIR (the .org registry) has a field in their registrar list indicating
if the registrar supports DNSSEC:
http://www.pir.org/get/registrars?order=field_dnssec_value&sort=desc

If you exclude all the name.com and Go Daddy shell registrars, you
still have more than 30 to choose from.  I would be shocked if they
didn't all offer .com in addition to .org.

Thanks,
Peter
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Trapdoor symmetric key

2013-09-08 Thread ianG

On 8/09/13 16:42 PM, Phillip Hallam-Baker wrote:

Two caveats on the commentary about a symmetric key algorithm with a
trapdoor being a public key algorithm.

1) The trapdoor need not be a good public key algorithm; it can be
flawed in ways that would make it unsuited for use as a public key
algorithm, for instance by making it possible to compute the private key
from the public one, or to deduce the private key from multiple messages.

2) The trapdoor need not be a perfect decrypt. A trapdoor that reduced
the brute-force search space from 128 bits to 64, or that only worked
on some messages, would be enough leverage for intercept purposes
but would make it useless as a public key system.



Thanks.  This far better explains the conundrum.  There is a big 
difference between a conceptual public key algorithm, and one that is 
actually good enough to compete with the ones we typically use.



iang

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] MITM source patching [was Schneier got spooked]

2013-09-08 Thread Ray Dillinger

On 09/08/2013 05:28 AM, Phillip Hallam-Baker wrote:


every code update to the repository should be signed and
recorded in an append only log and the log should be public and enable any
party to audit the set of updates at any time.

This would be 'Code Transparency'.

Problem is we would need to modify GIT to implement.


Why is that a problem?  GIT is open-source.  I think even *I* might be
good enough to patch that.
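
(A minimal sketch of the append-only structure such a log needs: a hash chain
carrying per-update signatures. The names and fields are invented for
illustration, and real signature verification is omitted.)

    import hashlib, json

    log = []                                     # the public append-only log

    def append_update(author, commit_id, signature, prev_hash):
        # 'signature' is assumed to be the author's detached signature over
        # the commit; the log records it and chains the entries together
        entry = {"author": author, "commit": commit_id,
                 "sig": signature, "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        log.append((entry, digest))
        return digest

    h = "0" * 64                                 # genesis value
    h = append_update("alice", "commit-abc123", "sig-by-alice", h)
    h = append_update("bob",   "commit-def456", "sig-by-bob",   h)

    # Any party can audit: re-walk the chain and re-hash every entry.
    # Tampering with an earlier entry breaks every hash that follows it.
    prev = "0" * 64
    for entry, digest in log:
        assert entry["prev"] == prev
        assert hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest() == digest
        prev = digest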

Ray


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Suite B after today's news

2013-09-08 Thread Ray Dillinger

On 09/08/2013 10:13 AM, Thor Lancelot Simon wrote:

On Sat, Sep 07, 2013 at 07:19:09PM -0700, Ray Dillinger wrote:


Given good open-source software, an FPGA implementation would provide greater
assurance of security.


How sure are you that an FPGA would actually be faster than you can already
achieve in software?

Thor


Depends on the operation.  If it's linear, somewhat certain.  If it's
parallelizable or streamable, then very certain indeed.

But that's not even the main point.  It's the 'assurance of security' part
that's important, not the speed.  After you've burned something into an
FPGA (by toggle board if necessary) you can trust that FPGA to run the same
algorithm unmodified unless someone has swapped out the physical device.

Given the insecurity of most net-attached operating systems, the same is
simply not true of most software.  Given the insecurity of chip fabs and
their management, the same is not true of special-purpose ASICs.

Ray





___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-08 Thread Jon Callas
 3) Shortly after the token indictment of Zimmerman (thus prompting widespread 
 use and promotion of the RSA public key encryption algorithm), the Clinton 
 administration's FBI then advocated a relaxation of encryption export 
 regulations in addition to dropping all plans for the Clipper chip

I need to correct some facts, especially since I'm seeing this continue to get 
repeated.

Phil was never charged, indicted, sued, or anything else. He was 
*investigated*. He was investigated for export violations, not for anything 
else. Being investigated is bad enough, but that's what happened. The 
government dropped the investigation in early 1996.

The government started the investigation because they were responding to a 
complaint from RSADSI that Phil and team violated export control. As Phill 
noted, there was the secondary issue of the dispute over the RSA patent 
license, but that was a separate issue. RSADSI filed the complaint with the 
government that started the investigation.

Jon




___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] [cryptography] Random number generation influenced, HW RNG

2013-09-08 Thread Ray Dillinger

On 09/08/2013 04:27 AM, Eugen Leitl wrote:


On 2013-09-08 3:48 AM, David Johnston wrote:

Claiming the NSA colluded with intel to backdoor RdRand is also to
accuse me personally of having colluded with the NSA in producing a
subverted design. I did not.



Well, since you personally did this, would you care to explain the
very strange design decision to whiten the numbers on chip, and not
provide direct access to the raw unwhitened output.


Y'know what?  Nobody has to accuse anyone of anything.  The result,
no matter how it came about, is that we have a chip whose output
cannot be checked.  That isn't as good as a chip whose output can
be checked.

A well-described physical process does in fact usually have some
off-white characteristics (bias, normal distribution, etc). Being
able to see those characteristics means being able to verify that
the process is as described.  Being able to see also the whitened
output means being able to verify that the whitening is working
correctly.

OTOH, it's going to be more expensive due to the additional pins of
output required, or not as good because whitening will have to be
provided in separate hardware.
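
(A sketch of why the raw tap matters, using a simulated source; the bias
figure and the crude monobit check are illustrative only.)

    import hashlib, random

    def raw_bits(n, p_one=0.52):
        # stand-in for a raw, slightly biased physical source
        return [1 if random.random() < p_one else 0 for _ in range(n)]

    def whiten(bits):
        # stand-in conditioner: hash the raw bits and re-expand them
        digest = hashlib.sha256(bytes(bits)).digest()
        return [(byte >> i) & 1 for byte in digest for i in range(8)]

    raw = raw_bits(100000)
    print("raw ones fraction:      %.3f" % (sum(raw) / len(raw)))       # visibly off-white
    white = whiten(raw)
    print("whitened ones fraction: %.3f" % (sum(white) / len(white)))   # looks balanced regardless

With only the whitened view, the output looks equally plausible whether or not
the underlying physical source behaves as described.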

Ray
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] [cryptography] Random number generation influenced, HW RNG

2013-09-08 Thread Jon Callas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Sep 7, 2013, at 8:06 PM, John Kelsey crypto@gmail.com wrote:

 There are basically two ways your RNG can be cooked:
 
 a.  It generates predictable values.  Any good cryptographic PRNG will do 
 this if seeded by an attacker.  Any crypto PRNG seeded with too little 
 entropy can also do this.  
 
 b.  It leaks its internal state in its output in some encrypted way.  
 Basically any cryptographic processing of the PRNG output is likely to 
 clobber this. 

There's also another way -- that it's a constant PRNG.

For example, take a good crypto PRNG, seed it in manufacturing, and then in its 
life, it just outputs from that fixed state. That fixed state might be secret 
or known to outsiders, but either way, it's a cooked PRNG.

Sadly, there were (are?) some hardware PRNGs on TPMs that were precisely this.
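
(A sketch of that failure mode, illustrative and not any real TPM's design: the
output looks fine in isolation, but it is the same on every device because the
state was fixed at the factory.)

    import hashlib

    class ConstantPRNG:
        # illustrative only: state fixed at the factory and never reseeded
        def __init__(self, manufacturing_seed):
            self.state = manufacturing_seed
        def read(self, n):
            out = b""
            while len(out) < n:
                self.state = hashlib.sha256(self.state).digest()
                out += self.state
            return out[:n]

    device_a = ConstantPRNG(b"factory seed batch 0001")
    device_b = ConstantPRNG(b"factory seed batch 0001")  # same seed burned into both
    assert device_a.read(32) == device_b.read(32)        # "random" output, identical everywhere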

Jon



-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFSLLbjsTedWZOD3gYRAhMzAJ93/YEF8mTwdJ/ktl5SiR5IPp4DtwCeIrZh
KHVy+CIpN69GpJNlX0LiKiM=
=i4b8
-END PGP SIGNATURE-
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Why are some protocols hard to deploy? (was Re: Opening Discussion: Speculation on BULLRUN)

2013-09-08 Thread Perry E. Metzger
On Sat, 07 Sep 2013 18:50:06 -0700 John Gilmore g...@toad.com wrote:
 It was never clear to me why DNSSEC took so long to deploy,
[...]
 PS: My long-standing domain registrar (enom.com) STILL doesn't
 support DNSSEC records -- which is why toad.com doesn't have DNSSEC
 protection.  Can anybody recommend a good, cheap, reliable domain
 registrar who DOES update their software to support standards from
 ten years ago?

I believe you have answered your own question there, John. Even if we
assume subversion, deployment requires cooperation from too many
people to be fast.

One reason I think it would be good to have future key management
protocols based on very lightweight mechanisms that do not require
assistance from site administrators to deploy is that it makes it
ever so much easier for things to get off the ground. SSH deployed
fast because one didn't need anyone's cooperation to use it -- if you
had root on a server and wanted to log in to it securely, you could
be up and running in minutes.

We need to make more of our systems like that. The problem with
DNSSEC is it is so obviously architecturally correct but so
difficult to deploy without many parties cooperating that it has
acted as an enormous tar baby.

Perry
-- 
Perry E. Metzger  pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Impossible trapdoor systems (was Re: Opening Discussion: Speculation on BULLRUN)

2013-09-08 Thread Perry E. Metzger
On Sat, 07 Sep 2013 20:14:10 -0700 Ray Dillinger b...@sonic.net
wrote:
 On 09/06/2013 05:58 PM, Jon Callas wrote:
 
  We know as a mathematical theorem that a block cipher with a back
  door *is* a public-key system. It is a very, very, very valuable
  thing, and suggests other mathematical secrets about hitherto
  unknown ways to make fast, secure public key systems.
 
 
 I've seen this assertion several times in this thread, but I cannot
 help thinking that it depends on what *kind* of backdoor you're
 talking about, because there are some cases in which as a crypto
 amateur I simply cannot see how the construction of an asymmetric
 cipher could be accomplished.
 
 As an example of a backdoor that doesn't obviously permit an
 asymmetric-cipher construction, consider a broken cipher that
 has 128-bit symmetric keys; but one of these keys (which one
 depends on an IV in some non-obvious way that's known to the
 attacker) can be used to decrypt any message regardless of the
 key used to encrypt it.

That key would then be known as the private key. The public key
is the set of magic values used in the symmetric cipher (say in the
one way functions of the Feistel network if it were a Feistel cipher)
such that such a magic decryption key exists.

 However, it is not a valid encryption key; no matter what you
 encrypt with it you get the same ciphertext.

So? If you have an algorithm that creates such ciphers in such a way
that the magic key is hard to find, then you can produce all you want
and you have a very powerful primitive for constructing public key
systems. You don't have an obvious signature algorithm yet, but I'm
sure we can think of one with a touch of cleverness.

That said, your hypothetical seems much like "imagine that you can
float by the power of your mind alone."  The construction of such a
cipher with a single master key that operates just like any other key
seems nearly impossible, and that should be obvious.

A symmetric cipher encryption function is necessarily one-to-one and
onto from the set of N bit blocks to itself. After all, if two blocks
encrypt to the same block, you can't decrypt them, and one block
can't encrypt to two blocks. If every key produces the same function
from 2^N to 2^N, it will be rapidly obvious, so keys have to produce
quite different mappings.

Your magic key must then take any block of N bits and magically
produce the corresponding plaintext when any given ciphertext
might correspond to many, many different plaintexts depending
on the key. That's clearly not something you can do.
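
(The one-to-one-and-onto point is easy to check mechanically for a toy cipher;
a Python sketch, illustrative only and obviously not a real cipher.)

    def toy_encrypt(key, block):
        # toy 8-bit "cipher": multiply-and-add mod 256; invertible since gcd(5, 256) == 1
        return (5 * block + key) % 256

    for key in range(256):
        images = {toy_encrypt(key, b) for b in range(256)}
        assert len(images) == 256   # one-to-one and onto: every key gives a permutation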

Perry
-- 
Perry E. Metzger  pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread Perry E. Metzger
On Sun, 8 Sep 2013 15:55:52 -0400 Thor Lancelot Simon
t...@rek.tjls.com wrote:
 On Sun, Sep 08, 2013 at 03:22:32PM -0400, Perry E. Metzger wrote:
  
  Ah, now *this* is potentially interesting. Imagine if you have a
  crypto accelerator that generates its IVs by encrypting
  information about keys in use using a key an observer might have
  or could guess from a small search space.
  
  Hadn't even occurred to me since it seems way more blatant than
  the other sort of leaks I was thinking of, but of course the mere
  fact that it is blatant doesn't mean that it would never be
  tried...
 
 Well, I guess it depends what your definition of blatant is.
 Treating the crypto hardware as a black box, it would be freaking
 hard to detect, no?

Ah, but it only needs to be found once to destroy the reputation of a
company.

Inserting bugs into chips (say, random number generators that won't
work well in the face of fabrication processes that alter analog
characteristics of circuits slightly) results in a "could be an
accident" sort of mistake. Altering a chip to insert an encrypted
form of a key into the initialization vectors in use cannot be
explained away that way.

You may say "but how would you find that?"  However, I've worked
in recent years with people who decap chips, photograph the surface
and reconstruct the circuits on a pretty routine basis -- tearing
apart secure hardware for fun and profit is their specialty. Even
when this process destructively eliminates in-RAM programming,
usually weaknesses such as power glitching attacks are discovered by
the examination of the dead system on the autopsy table and can
then be used with live hardware.

Now that it has been revealed that the NSA has either found or
arranged for bugs in several chips, I would presume that some of
these people are gearing up for major teardowns. Not all
such teardowns will happen in the open community, of course -- I'd
expect that even now there are folks in government labs around the
world readying their samples, their probe stations and their etchant
baths. Hopefully the guys in the open community will let us know
what's bad before the other folks start exploiting our hardware
silently, as I suspect the NSA is not going to send out a warning.

 I also wonder -- again, not entirely my own idea, my whiteboard
 partner can speak up for himself if he wants to -- about whether
 we're going to make ourselves better or worse off by rushing to the
 safety of PFS ciphersuites, which, with their reliance on DH, in
 the absence of good RNGs may make it *easier* for the adversary to
 recover our eventual symmetric-cipher keys, rather than harder!

I'll repeat the same observation I've made a lot: Dorothy Denning's
description of the Clipper chip key insertion ceremony described the
keys as being generated deterministically using an iterated block
cipher. I can't find the reference, but I'm pretty sure that when she
was asked why, the rationale was that an iterated block cipher can be
audited, and a hardware randomness source cannot.

Perry
-- 
Perry E. Metzger  pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-08 Thread Eugen Leitl

Forwarded with permission.

So there *is* a BTNS implementation, after all. Albeit
only for OpenBSD -- but this means FreeBSD is next, and
Linux to follow.

- Forwarded message from Andreas Davour ko...@yahoo.com -

Date: Sun, 8 Sep 2013 09:10:44 -0700 (PDT)
From: Andreas Davour ko...@yahoo.com
To: Eugen Leitl eu...@leitl.org
Subject: [Cryptography] Opening Discussion: Speculation on BULLRUN
X-Mailer: YahooMailWebService/0.8.156.576
Reply-To: Andreas Davour ko...@yahoo.com

 Apropos IPsec, I've tried searching for any BTNS (opportunistic encryption 
 mode for
 IPsec) implementations, and even the authors of the RFC are not aware of any. 
 Obviously, having a working OE BTNS implementation in Linux/*BSD would be a 
 very valuable thing, as an added, transparent protection layer against 
 passive attacks. There are many IPsec old hands here, it is probably just a 
 few man-days
 worth of work. It should be even possible to raise some funding for such a 
 project. Any takers?


Hi. I saw this message in the archive, and have not figured out how to reply to 
that one. But I felt this knowledge needed to be spread. Maybe you can post it 
to the list?

My friend MC has in fact implemented BTNS! Check this out: 
http://hack.org/mc/projects/btns/

I think I can speak for him and say that he would love to have that 
implementation be known to the others on the list, and would love others to add 
to his work, so we can get real network security without those spooks spoiling 
things.


/andreas
--
My son has spoken the truth, and he has sacrificed more than either the 
president of the United States or Peter King have ever in their political 
careers or their American lives. So how they choose to characterize him really 
doesn't carry that much weight with me. -- Edward Snowden's Father

- End forwarded message -
-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://ativel.com http://postbiota.org
AC894EC5: 38A5 5F46 A4FF 59B8 336B  47EE F46E 3489 AC89 4EC5


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread John Kelsey
On Sep 8, 2013, at 3:55 PM, Thor Lancelot Simon t...@rek.tjls.com wrote:
...
 I also wonder -- again, not entirely my own idea, my whiteboard partner
 can speak up for himself if he wants to -- about whether we're going
 to make ourselves better or worse off by rushing to the safety of
 PFS ciphersuites, which, with their reliance on DH, in the absence of
 good RNGs may make it *easier* for the adversary to recover our eventual
 symmetric-cipher keys, rather than harder!

I don't think you can do anything useful in crypto without some good source of 
random bits.  If there is a private key somewhere (say, used for signing, or 
the public DH key used alongside the ephemeral one), you can combine the hash 
of that private key into your PRNG state.  The result is that if your entropy 
source is bad, you get security to someone who doesn't compromise your private 
key in the future, and if your entropy source is good, you get security even 
against someone who compromises your private key in the future (that is, you 
get perfect forward secrecy).
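
(A sketch of that mixing step, illustrative only; in practice this belongs
inside the DRBG's seeding, and the key bytes below are a placeholder.)

    import hashlib, os

    def seed_material(system_entropy, longterm_private_key):
        # fold a hash of the long-term private key into the seed: a weak entropy
        # source then only hurts you against someone who also has the private key,
        # and a good one still gives forward secrecy if the key leaks later
        return hashlib.sha256(b"prng-seed|" + system_entropy +
                              hashlib.sha256(longterm_private_key).digest()).digest()

    seed = seed_material(os.urandom(32), b"<DER bytes of the signing key>")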

 Thor

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] AES state of the art...

2013-09-08 Thread Perry E. Metzger
What's the current state of the art of attacks against AES? Is the
advice that AES-128 is (slightly) more secure than AES-256, at least
in theory, still current?

(I'm also curious as to whether anyone has ever proposed fixes to the
weaknesses in the key schedule...)

Perry
-- 
Perry E. Metzger  pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread Viktor Dukhovni
On Sun, Sep 08, 2013 at 06:16:45PM -0400, John Kelsey wrote:

 I don't think you can do anything useful in crypto without some
 good source of random bits.  If there is a private key somewhere
 (say, used for signing, or the public DH key used alongside the
 ephemeral one), you can combine the hash of that private key into
 your PRNG state.  The result is that if your entropy source is bad,
 you get security to someone who doesn't compromise your private
 key in the future, and if your entropy source is good, you get
 security even against someone who compromises your private key in
 the future (that is, you get perfect forward secrecy).

Nice in theory of course, but in practice applications don't write
their own PRNGs.  They use whatever the SSL library provides, OpenSSL,
GnuTLS, ...  If we assume weak PRNGs in the toolkit (or crypto chip,
...) then EDH could be weaker than RSA key exchange (provided the
server's key is strong enough).

The other concern is that in practice many EDH servers offer 1024-bit
primes, even after upgrading the certificate strength to 2048-bits.

Knee-jerk reactions to very murky information may be counter-productive.
Until there are more specific details,  it is far from clear which is 
better:

- RSA key exchange with a 2048-bit modulus.

- EDH with (typically) 1024-bit per-site strong prime modulus

- EDH with RFC-5114 2048-bit modulus and 256-bit q subgroup.

- EECDH using secp256r1

Until there is credible information one way or the other, it may
be best to focus on things we already know make sense:

- keep up with end-point software security patches

- avoid already known weak crypto (RC4?)

- Make sure VM provisioning includes initial PRNG seeding.

- Save entropy across reboots (see the sketch just after this list).

- ...
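
(For the last two items, a minimal sketch of the usual save-and-restore
pattern in Python; the path is an assumption, and on Linux-style systems
writing into /dev/urandom mixes the bytes into the pool without crediting
entropy.  A freshly provisioned VM also wants a unique seed injected at build
time.)

    import os

    SEED_FILE = "/var/lib/misc/random-seed"     # assumed path; pick per system

    def save_seed():
        # call periodically and at shutdown, not only on orderly shutdown
        with open(SEED_FILE, "wb") as f:
            f.write(os.urandom(512))

    def restore_seed():
        # call early at boot; then immediately replace the seed file so the
        # same seed is never replayed after a crash
        try:
            with open(SEED_FILE, "rb") as f, open("/dev/urandom", "wb") as pool:
                pool.write(f.read())
        except FileNotFoundError:
            pass
        save_seed()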

Yes, PFS addresses after-the-fact server private key compromise,
but there is some risk that we don't know which, if any, of the PFS
mechanisms to trust, and implementations are not always well
engineered (see my post about GnuTLS and interoperability).

-- 
Viktor.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] In the face of cooperative end-points, PFS doesn't help

2013-09-08 Thread james hughes


On Sep 7, 2013, at 8:16 PM, Marcus D. Leech mle...@ripnet.com wrote:

 But it's not entirely clear to me that it will help enough in the scenarios 
 under discussion.  If we assume that mostly what NSA are doing is acquiring a 
 site RSA key (either through donation on the part of the site, or through 
 factoring or other means), then yes, absolutely, PFS will be a significant 
 roadblock.
 If, however, they're getting session-key material (perhaps through 
 back-doored software, rather than explicit cooperation by the target 
 website), then PFS does nothing to help us.  And indeed, that same class of 
 compromised site could just as well be leaking plaintext.  Although leaking 
 session keys is lower-profile.

I think we are growing closer to agreement: PFS does help; the question is how 
much in the face of cooperation. 

Let me suggest the following. 

With RSA, a single quiet donation by the site and it's done. The situation 
becomes totally passive and there is no possibility of knowing what has been read. 
 The system administrator could even do this without the executives knowing. 

With PFS there is a significantly higher profile interaction with the site. 
Either the session keys need to be transmitted  in bulk, or the RNG cribbed. 
Both of these have a significantly higher profile,  higher possibility of 
detection and increased difficulty to execute properly. Certainly a more risky 
thing for a cooperating site to do. 

PFS does improve the situation even if cooperation is suspect. IMHO it is just 
better cryptography. Why not? 

It's better. It's already in the suites. All we have to do is use it... 

I am honestly curious about the motivation not to choose more secure modes that 
are already in the suites?



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Der Spiegel: NSA Can Spy on Smart Phone Data

2013-09-08 Thread Tony Naggs
The Spiegel article perhaps contains a key to this capability:
In the internal documents, experts boast about successful access to
iPhone data in instances where the NSA is able to infiltrate the
computer a person uses to sync their iPhone.

I have not seen security measures such as requiring a password from
the connected computer to a phone in order to access data such as
contact lists, SMS history, ..

This is probably done simply in order to provide maximum convenience
to end users.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Der Spiegel: NSA Can Spy on Smart Phone Data

2013-09-08 Thread Jerry Leichter
On Sep 8, 2013, at 6:09 PM, Perry E. Metzger wrote:
 Not very surprising given everything else, but I thought I would
 forward the link. It more or less contends that the NSA has exploits
 for all major smartphones, which should not be surprising

 http://www.spiegel.de/international/world/privacy-scandal-nsa-can-spy-on-smart-phone-data-a-920971.html
A remarkably poor article.  Just what does "gain access to" mean?  There are 
boxes sold to law enforcement (but never, of course, to the bad guys) that 
claim they can get access to any phone out there.  If it's unlocked, everything 
is there for the taking; if it's locked, *some* of it is hard to get to, but 
most isn't.  Same goes for Android.

The article mentions that if they can get access to a machine the iPhone syncs 
with, they can get into the iPhone.  Well golly gee.  There was an attack 
reported just in the last couple of weeks in which someone built an exploit into 
a fake charger!  Grab a charge at a public charger, get infected for your 
trouble.  Apple's fixed that in the next release by prompting the user for 
permission whenever an unfamiliar device asks for connection.  But if you're in 
the machine the user normally connects to, that won't help.  Nothing, really, 
will help.

Really, for the common phones out there, the NSA could easily learn how to do 
this stuff with a quick Google search - and maybe paying a couple of thousand 
bucks to some of the companies that do it for a living.

The article then goes on to say the NSA can get SMS texts.  No kidding - so can 
the local cops.  It's all unencrypted, and the Telco's are only too happy to 
cooperate with govmint' agencies.

The only real news in the whole business is that they claim to have gotten into 
Blackberry's mail system.  It's implied that they bought an employee with the 
access needed to weaken things for them.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Paper on Tor deanonymization: Users Get Routed

2013-09-08 Thread Perry E. Metzger
A new paper on the Tor network, entitled Users Get Routed:
Traffic Correlation on Tor by Realistic Adversaries.

  https://security.cs.georgetown.edu/~msherr/papers/users-get-routed.pdf

Quote to whet your appetite:

We present the first analysis of the popular Tor anonymity network
that indicates the security of typical users against reasonably
realistic adversaries in the Tor network or in the underlying
Internet. Our results show that Tor users are far more susceptible
to compromise than indicated by prior work.
[...]
Our analysis shows that 80% of all types of users may be deanonymized 
by a relatively moderate Tor-relay adversary within six
months. Our results also show that against a single AS adversary
roughly 100% of users in some common locations are deanonymized
within three months (95% in three months for a single IXP). Further, 
we find that an adversary controlling two ASes instead of
one reduces the median time to the first client de-anonymization
by an order of magnitude: from over three months to only 1 day
for a typical web user; and from over three months to roughly
one month for a BitTorrent user. This clearly shows the dramatic
effect an adversary that controls multiple ASes can have on
security.

Disclaimer: one of the authors (Micah Sherr) is a doctoral brother.

Perry
-- 
Perry E. Metzger  pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread Kent Borg

On 09/08/2013 06:16 PM, John Kelsey wrote:
I don't think you can do anything useful in crypto without some good 
source of random bits.


I don't see the big worry about how hard it is to generate random 
numbers unless:


 a) You need them super fast (because you are Google, trying to secure 
your very high-speed long lines), or


 b) You are some embedded device that is impoverished for both sources 
of entropy and non-volatile storage, and you need good random bits the 
moment you boot.


On everything in between, there are sources of entropy. Collect them, 
hash them together and use them to feed some good cryptography.  If you 
seem short of entropy, look for more in your hardware manual. Hash in 
any local unique information. Hash in everything you can find! (If the 
NSA knows every single bit you are hashing in, no harm, hash them in 
anyway, but...if the NSA has misunderestimated  any one of your 
bits...then you scored a bit! Repeat as necessary.)


I am thinking pure HW RNGs are more sinful than pure SW RNGs, because 
real world entropy is colored and hardware is the wrong place to fix 
that. So don't buy HW RNGs, buy HW entropy sources (or find them in your 
current HW) and feed them into a good hybrid RNG.


On a modern multi-GHz CPU the exact LSB of your high-speed system 
counters, when the interrupt hits your service routine, has uncertainty 
that is quite real once you push the NSA a few centimeters from your 
CPU or SoC.  Just sit around until you have a few network packets and 
you can have some real entropy. Wait longer for more entropy.
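
(A sketch of that style of collection in Python, illustrative only; in real
life the counter would be sampled when packets or other interrupts actually
arrive, not in a tight loop.)

    import hashlib, os, socket, time

    pool = hashlib.sha256()

    def stir(scrap):
        # hash in every scrap: no harm if the attacker knows it, a win if they
        # have misunderestimated even one bit of it
        pool.update(scrap)

    stir(socket.gethostname().encode())           # local, possibly-unique info
    stir(str(os.getpid()).encode())
    for _ in range(1000):                         # LSBs of a high-speed counter,
        stir(time.perf_counter_ns().to_bytes(8, "little"))  # ideally sampled per interrupt/packet

    seed = pool.digest()   # feed this into a good hybrid (crypto) RNG, never use it raw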


In case you didn't notice, I am a fan of hybrid HW/SW RNGs.

-kb


P.S.  Entropy pools that are only saved on orderly shutdowns are risking 
crash-and-playback attacks. Save regularly, or something like that.


P.P.S. Don't try to estimate entropy; it is a fool's errand. Get as much 
as you can (within reason) and feed it into some good cryptography.


P.P.P.S. Have an independent RNG? If it *is* independent, no harm in 
XORing it in.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread Perry E. Metzger
On Sun, 08 Sep 2013 20:34:55 -0400 Kent Borg kentb...@borg.org
wrote:
 On 09/08/2013 06:16 PM, John Kelsey wrote:
  I don't think you can do anything useful in crypto without some
  good source of random bits.
 
 I don't see the big worry about how hard it is to generate random 
 numbers unless:

Lenstra, Heninger, and others have shown mass breaks of keys based
on random number generator flaws in the field. Random number
generators have been the source of a huge number of breaks over time.

Perhaps you don't see the big worry, but real world experience says
it is something everyone else should worry about anyway.

Perry
-- 
Perry E. Metzger  pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-08 Thread Jeffrey I. Schiller
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Fri, Sep 06, 2013 at 05:22:26PM -0700, John Gilmore wrote:
 Speaking as someone who followed the IPSEC IETF standards committee
 pretty closely, while leading a group that tried to implement it and
 make so usable that it would be used by default throughout the
 Internet, I noticed some things:
 ...

Speaking as one of the Security Area Directors at the time...

I have to disagree with your implication that the NSA intentionally
fouled the IPSEC working group. There were a lot of people working to
foul it up! I also don’t believe that the folks who participated,
including the folks from the NSA, were working to weaken the
standard. I suspect that the effort to interfere in standards started
later than the IPSEC work. If the NSA was attempting to thwart IETF
security standards, I would have expected to also see bad things in
the TLS working group and the PGP working group. There is no sign of
their interference there.

The real (or at least the first) problem with the IPSEC working group
was that we had a good and simple solution, Photuris. However the
document editor on the standard decided to claim it (Photuris) as his
intellectual property and that others couldn’t recommend changes
without his approval. This effectively made Photuris toxic in the
working group and we had to move on to other solutions. This is one of
the events that lead to the IETF’s “Note Well” document and clear
policy on the IP associated with contributions. Then there was the
ISAKMP (yes, an NSA proposal) vs. SKIP. As Security AD, I eventually
had to choose between those two standards because the working group
could not generate consensus. I believed strongly enough that we
needed an IPSEC solution so I decided to choose (as I promised the
working group I would do if they failed to!). I chose ISAKMP. I posted
a message with my rationale to the IPSEC mailing list, I’m sure it is
still in the archives. I believe that was in 1996 (I still have a copy
somewhere in my personal archives).

At no point was I contacted by the NSA or any agent of any government
in an attempt to influence my decision. Folks can choose to believe
this statement, or not.

IPSEC did not have significant traction on the Internet in
general. It eventually gained traction in an important niche, namely
VPNs, but that evolved later.

IPSEC isn’t useful unless all of the end-points that need to
communicate implement it. Implementations need to be in the OS (for
all practical purposes).  OS vendors at the time were not particularly
interested in encryption of network traffic.

The folks who were interested were the browser folks. They were very
interested in enabling e-commerce, and that required
encryption. However they wanted the encryption layer someplace where
they could be sure it existed. An encryption solution was not useful
to them if it couldn’t be relied upon to be there. If the OS the user
had didn’t have an IPSEC layer, they were sunk. So they needed their
own layer. Thus the Netscape guys did SSL, and Microsoft did PCT and
in the IETF we were able to get them to work together to create
TLS. This was a *big deal*. We shortly had one deployed interoperable
encryption standard usable on the web.

If I was the NSA and I wanted to foul up encryption on the Internet,
the TLS group is where the action was. Yet from where I sit, I didn’t
see any such interference.

If we believe the Edward Snowden documents, the NSA at some point
started to interfere with international standards relating to
encryption. But I don’t believe they were in this business in the
1990’s at the IETF.

-Jeff

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)

iD8DBQFSLSMV8CBzV/QUlSsRAigkAKCU6erw1U7FOt7A1QdItlGbFRfo+gCfeMg1
0Woyz0FyKqKYqS+gZFQWEf0=
=yWOw
-END PGP SIGNATURE-
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Market demands for security (was Re: Opening Discussion: Speculation on BULLRUN)

2013-09-08 Thread Phillip Hallam-Baker
On Sun, Sep 8, 2013 at 3:08 PM, Perry E. Metzger pe...@piermont.com wrote:

 On Sun, 8 Sep 2013 08:40:38 -0400 Phillip Hallam-Baker
 hal...@gmail.com wrote:
  The Registrars are pure marketing operations. Other than GoDaddy
  which implemented DNSSEC because they are trying to sell the
  business and more tech looks kewl during due diligence, there is
  not a market demand for DNSSEC.

 Not to discuss this particular case, but I often see claims to the
 effect that there is no market demand for security.

 I'd like to note two things about such claims.

 1) Although I don't think P H-B is an NSA plant here, I do
 wonder about how often we've heard that in the last decade from
 someone trying to reduce security.


There is a market demand for security. But it is always item #3 on the list
of priorities and the top two get done.

I have sold seven figure crypto installations that have remained shelfware.

The moral is that we have to find other market reasons to use security, for
example simplifying administration of endpoints. I do not argue, like some
do, that there is no market for security so we should give up; I argue that
there is little market for something that only provides security, and so to
sell security we have to attach it to something they want.




 2) I doubt that safety is, per se, anything the market demands from
 cars, food, houses, etc. When people buy such products, they don't
 spend much time asking so, this house, did you make sure it won't
 fall down while we're in it and kill my family? or this coffee mug,
 it doesn't leach arsenic into the coffee does it?


People buy guns despite statistics that show that they are orders of
magnitude more likely to be shot with the gun themselves than by an
attacker.


However, if you told consumers did you know that food manufacturer
 X does not test its food for deadly bacteria on the basis that ``there
 is no market demand for safety'', they would form a lynch mob.
 Consumers *presume* their smart phones will not leak their bank
 account data and the like given that there is a banking app for it,
 just as they *presume* that their toaster will not electrocute them.


Yes, but in most cases the telco will only buy a fix after they have been
burned.

To sell DNSSEC we should provide a benefit to the people who need to do the
deployment. Problem is that the perceived benefit is to the people going to
the site which is different...


It is fixable, people just need to understand that the stuff does not sell
itself.

-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] In the face of cooperative end-points, PFS doesn't help

2013-09-08 Thread james hughes


On Sep 8, 2013, at 1:47 PM, Jerry Leichter leich...@lrw.com wrote:

 On Sep 8, 2013, at 3:51 PM, Perry E. Metzger wrote:
 
 In summary, it would appear that the most viable solution is to make
 the end-to-end encryption endpoint a piece of hardware the user owns
 (say the oft mentioned $50 Raspberry Pi class machine on their home
 net) and let the user interact with it over an encrypted connection
 (say running a normal protocol like Jabber client to server
 protocol over TLS, or IMAP over TLS, or https: and a web client.)
 
 It is a compromise, but one that fits with the usage pattern almost
 everyone has gotten used to. It cannot be done with the existing
 cloud model, though -- the user needs to own the box or we can't
 simultaneously maintain current protocols (and thus current clients)
 and current usage patterns.

 I don't see how it's possible to make any real progress within the existing 
 cloud model, so I'm with you 100% here.  (I've said the same earlier.)

Could cloud computing be a red herring? Banks and phone companies all give up 
personal information to governments (Verizon?) and have been doing this long 
before and long after cloud computing was a fad. Transport encryption 
(regardless of its security) is no solution either. 

The fact is, to do business, education, health care, you need to share 
sensitive information. There is no technical solution to this problem. Shared 
data is shared data. This is arguably the same as the analogue gap between 
content protected media and your eyes and ears. Encryption is not a solution 
when the data needs to be shared with the other party in the clear. 

I knew a guy once that quipped "link encryptors are iron pipes rats run 
through." 

If compromised end points are your threat model, cloud computing is not your 
problem. 

The only solution is the Ted Kaczynski technology rejection principle (as long 
as you also kill your brother).



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread James A. Donald

On 2013-09-09 11:15 AM, Perry E. Metzger wrote:

Lenstra, Heninger and others have both shown mass breaks of keys based
on random number generator flaws in the field. Random number
generators have been the source of a huge number of breaks over time.

Perhaps you don't see the big worry, but real world experience says
it is something everyone else should worry about anyway.


Real world experience is that there is nothing to worry about /if you do 
it right/.  And that it is frequently not done right.


When you screw up AES or such, your test vectors fail, your unit test 
fails, so you fix it, whereas if you screw up entropy, everything 
appears to work fine.


It is hard, perhaps impossible, to have a test suite that makes sure that 
your entropy collection works.


One can, however, have a test suite that ascertains that on any two runs 
of the program, most items collected for entropy are different except 
for those that are expected to be the same, and that on any run, any 
item collected for entropy does make a difference.


Does your unit test check your entropy collection?
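
(A sketch of such a test; collect_entropy_items() here is a stand-in returning
a few sample items, to be replaced with one's own collection hook.)

    import hashlib, os, time

    def collect_entropy_items():
        # stand-in for the real collector: the list of raw items one run
        # would hash into the pool
        return [
            time.perf_counter_ns().to_bytes(8, "little"),
            os.urandom(8),
            str(os.getpid()).encode(),   # expected to repeat within one process
        ]

    def pool_from(items):
        h = hashlib.sha256()
        for item in items:
            h.update(item)
        return h.digest()

    def test_entropy_collection():
        run1, run2 = collect_entropy_items(), collect_entropy_items()
        # most items should differ across runs, except those expected to match
        assert sum(a != b for a, b in zip(run1, run2)) >= len(run1) // 2
        # and dropping any single item should change the pool, i.e. each item counts
        full = pool_from(run1)
        for i in range(len(run1)):
            assert pool_from(run1[:i] + run1[i + 1:]) != full

    test_entropy_collection()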

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Impossible trapdoor systems (was Re: Opening Discussion: Speculation on BULLRUN)

2013-09-08 Thread James A. Donald

On 2013-09-09 4:49 AM, Perry E. Metzger wrote:

Your magic key must then take any block of N bits and magically
produce the corresponding plaintext when any given ciphertext
might correspond to many, many different plaintexts depending
on the key. That's clearly not something you can do.


Suppose that the mappings from 2^N plaintexts to 2^N ciphertexts are not 
random, but rather orderly, so that given one element of the map, one 
can predict all the other elements of the map.


Suppose, for example, the effect of encryption was to map a 128-bit block 
to a group, map the key to the group, add the key to the block, and map 
back.  To someone who knows the group and the mapping, merely a heavily 
obfuscated 128 bit Caesar cipher.


No magic key.
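
(A stripped-down sketch of that shape of backdoor, with single bytes standing
in for 128-bit blocks and a random byte permutation standing in for the
obfuscating map; illustrative only.)

    import random

    rng = random.Random(1234)                    # fixed seed so the example is repeatable
    perm = list(range(256)); rng.shuffle(perm)   # the secret "mapping" into the group
    inv = [0] * 256
    for i, p in enumerate(perm):
        inv[p] = i

    def encrypt(key_byte, pt_byte):
        # to outsiders, an opaque table-driven cipher; to whoever knows perm,
        # just addition mod 256, a Caesar cipher in disguise
        return inv[(perm[pt_byte] + perm[key_byte]) % 256]

    # One known plaintext/ciphertext pair hands the designer the "offset"...
    k, pt = 173, 42
    offset = (perm[encrypt(k, pt)] - perm[pt]) % 256
    # ...which decrypts every other block under that key, with no magic key anywhere.
    ct2 = encrypt(k, 99)
    assert inv[(perm[ct2] - offset) % 256] == 99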


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Market demands for security (was Re: Opening Discussion: Speculation on BULLRUN)

2013-09-08 Thread James A. Donald

On 2013-09-09 6:08 AM, John Kelsey wrote:

a.  Things that just barely work, like standards groups, must in general be 
easier to sabotage in subtle ways than things that click along with great 
efficiency.  But they are also things that often fail with no help at all from 
anyone, so it's hard to tell.

b.  There really are tradeoffs between security and almost everything else.  If 
you start suspecting conspiracy every time someone is reluctant to make that 
tradeoff in the direction you prefer, you are going to spend your career 
suspecting everyone everywhere of being anti-security.  This is likely to be 
about as productive as going around suspecting everyone of being a secret 
communist or racist or something.

Poor analogy.

Everyone is a racist, and most people lie about it.

Everyone is a communist in the sense of being unduly influenced by 
Marxist ideas, and those few of us that know it have to make a conscious 
effort to see the world straight, to recollect that some of our supposed 
knowledge of the world has been contaminated by widespread falsehood.


The Climategate files revealed that official science /is/ in large part 
a big conspiracy against the truth.


And Snowden's files seem to indicate that all relevant groups are 
infiltrated by people hostile to security.



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] In the face of cooperative end-points, PFS doesn't help

2013-09-08 Thread Anne Lynn Wheeler

note when the router hughes references was 1st introduced in an IETF gateway 
committee meeting as VPN it caused lots of turmoil in the IPSEC camp as well as 
with the other router vendors. The other router vendors went into standards 
stall mode ... their problem was none of them had a product with processors 
capable of handling the crypto processing. A month after the IETF meeting one 
of the vendors announced what was supposedly an equivalent product ... but was 
actually their standard product (w/o crypto) packaged with hardware link 
encryptors (needed dedicated links instead of being able to tunnel thru the 
internet).

The IPSEC camp whined a lot but eventually settled for referring to it as 
lightweight IPSEC (possibly trying to imply it didn't have equivalent crypto).

As to DNSSEC ... the simple scenario is requiring domain owners to register a 
public key and then all future communication is digitally signed and 
authenticated with the onfile, registered public key (as a countermeasure to 
domain name take-over which affects the integrity of the domain name 
infrastructure and propogates to SSL CA vendors if they can't trust who the 
true owner is). Then the SSL CA vendors can also start requiring that SSL 
certificate requests also be digitally signed ... which can also be 
authenticated by retrieving the onfile public key (turning an expensive, 
error-prone and time-consuming identification process into a reliable and 
simple authentication process). The catch22 is once public keys can be 
retrieved in realtime ... others can start doing it also ... going a long way 
towards eliminating need for SSL certificates. Have an option piggy-back public 
key in the same response with the ip-address. Then do SSL-lite ... XTP had 
reliable communication minim
um 3-pack
et exchange ... compared to TCP requiring minimum 7-packet exchange.

In the key escrow meetings, I lobbied hard that divulging/sharing authentication keys was 
a violation of fundamental security principles. Other parties at the key escrow meetings 
whined that people could cheat and use authentication keys for encryption. However, there 
was a commercial "no single point of failure" business case for replicating keys 
used in encrypting data-at-rest corporate assets.

One might hypothesize that some of the current DNSSEC complexity is FUD ... 
unable to kill it ... make it as unusable as possible.

disclaimer: person responsible for original DNS worked at the science center in 
the early 70s when he was at MIT.

--
virtualization experience starting Jan1968, online at home since Mar1970
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] In the face of cooperative end-points, PFS doesn't help

2013-09-08 Thread Jerry Leichter
On Sep 8, 2013, at 7:16 PM, james hughes wrote:
 Let me suggest the following. 
 
 With RSA, a single quiet donation by the site and it's done. The situation 
 becomes totally passive and there is no possibility knowing what has been
 read.  The system administrator could even do this without the executives 
 knowing.
An additional helper:  Re-keying.  Suppose you send out a new public key, 
signed with your old one, once a week.  Keep the chain of replacements posted 
publicly so that someone who hasn't connected to you in a while can confirm the 
entire sequence from the last public key he knew to the current one.  If 
someone sends you a message with an invalid key (whether it was ever actually 
valid or not - it makes no difference), you just send them an update.

An attacker *could* send out a fake update with your signature, but that would 
be detected almost immediately.  So a one-time donation is now good for a 
week.  Sure, the leaker can keep leaking - but the cost is now considerably
greater, and ongoing.
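
(A sketch of such a chain; Ed25519 via the pyca/cryptography package is only an
example choice.)

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.hazmat.primitives import serialization

    def raw(pub):
        return pub.public_bytes(serialization.Encoding.Raw,
                                serialization.PublicFormat.Raw)

    current = Ed25519PrivateKey.generate()
    root_pub = current.public_key()      # the last key a correspondent saw
    chain = []                           # published: (new public key, sig by previous key)

    for week in range(3):                # three weekly re-keyings
        new = Ed25519PrivateKey.generate()
        chain.append((raw(new.public_key()), current.sign(raw(new.public_key()))))
        current = new

    # A correspondent who only knows root_pub catches up by checking each link;
    # verify() raises InvalidSignature if any link in the posted chain is forged.
    trusted = root_pub
    for pub_bytes, sig in chain:
        trusted.verify(sig, pub_bytes)
        trusted = Ed25519PublicKey.from_public_bytes(pub_bytes)
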
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] In the face of cooperative end-points, PFS doesn't help

2013-09-08 Thread Max Kington
This space is of particular interest to me.  I implemented just one of
these and published the protocol (rather than pimp my blog here: if anyone wants
to read up on the protocol description, feel free to email me and I'll send
you a link).

The system itself was built around a fairly simple PKI which then allowed
people to build end-to-end channels.  You hit the nail on the head though:
control of the keys.  If you can game the PKI you can replace someone's
public key and execute a MITM attack.  The approach I took to this was that
the PKI publishes people's public keys but then allows other users to verify
your public key.  A MITM attack is possible, but as soon as your public key
is rotated this is detected and the client itself asks if you'd like to
verify it out of band (this was for mobile devices, so it lends itself to
having other channels to check keys via, like phoning your friend and asking
them).  The much more likely thing is where someone tries to do a MITM
attack for just a particular user, but as the channels are tunnelled end to
end they need to essentially ask the PKI to publish two duff keys, i.e. one
in each direction: Alice's key as far as Bob is concerned and Bob's key as
far as Alice is concerned.  In turn, the two people whose traffic the
attacker is trying to obtain can ask someone else to double-check
theirs.  It means that you need to publish an entirely fake PKI directory to
just two users.  The idea was that the alarm bells go off when it transpires
that every person you want to get a proxy verification of a public key via
has 'all of a sudden' changed their public key too.  It's a hybrid model: a
PKI to make life easy for the users to bootstrap, but which uses a web of
trust to detect when the PKI (or your local directory) has been attacked.
Relationships become 'public' knowledge at least in so far as you ask
others in your address book to verify people's public keys (albeit via
UUIDs; you could still find out if your mate Bill had 'John's' public key
in his address book because he's asked you to verify it for him).  So for
those who want to protect the conversational metadata it's already
orthogonal to that.

Group chat semantics are quite feasible in that all users are peers, but you
run into difficulty when it comes to signing your own messages; not that
you can't sign them, but that's computationally expensive and eats
battery life.  Again, you are right though: what do you want to achieve?

I certainly built a protocol that answered the main questions I was asking!

As for multiple devices, the trick was always usability.  How do you
securely move an identity token of some description from one node to
another?  I settled on every device having its own key pair, but you still
need an 'owning' identity and a way to 'enrol' a new key pair, because if
that got broken the attacker just enrols their own 'device'
surreptitiously.  You then get into the realms of passwords through salted
hashing algorithms but then you're back to the security of a password being
brute forced.  If you were really paranoid I proposed a smart card
mechanism but I've yet to implement that (how closed a world are smart
cards with decent protection specifications?! but that's another
conversation), the idea being that you decrypt your device key pair using
the smart card and ditch the smart card if needs be, through a typical
office shredder.
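
A rough sketch of the "owning identity enrols device keys" idea, using
Ed25519 signatures from the third-party Python 'cryptography' package; the
library choice and the enrolment message format are assumptions for
illustration, not the protocol described above:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    owner_key = Ed25519PrivateKey.generate()    # long-term 'owning' identity
    device_key = Ed25519PrivateKey.generate()   # generated on the new device

    device_pub = device_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    # The owning identity signs the device public key to enrol it.
    enrolment = owner_key.sign(b"enrol-device-v1" + device_pub)

    def device_is_enrolled(owner_pub, device_pub, enrolment):
        try:
            owner_pub.verify(enrolment, b"enrol-device-v1" + device_pub)
            return True
        except InvalidSignature:
            return False

    print(device_is_enrolled(owner_key.public_key(), device_pub, enrolment))

Everything then hinges on how well the owning key itself is protected,
which is exactly where the password-versus-smart-card question comes in.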

Silent Circle was one of the most analogous systems, but I'm an amateur
compared to those chaps.  As interesting as it was to build, it kept
boiling down to one thing: assuming I'd done a good job, all I had done
was shift the target from the protocol to the device.

If I really wanted to get the data I'd attack the onscreen software
keyboard and leave everything else alone.

Max


On Sun, Sep 8, 2013 at 7:50 PM, Jerry Leichter leich...@lrw.com wrote:

 On Sep 7, 2013, at 11:16 PM, Marcus D. Leech wrote:
  Jeff Schiller pointed out a little while ago that the crypto-engineering
 community have largely failed to make end-to-end encryption easy to use.
  There are reasons for that, some technical, some political, but it is
 absolutely true that end-to-end encryption, for those cases where end to
 end is the obvious and natural model, has not significantly materialized
 on the Internet.  Relatively speaking, a handful of crypto-nerds use
 end-to-end schemes for e-mail and chat clients, and so on, but the vast
 majority of the Internet user-space?  Not so much.
 I agree, but the situation is complicated.  Consider chat.  If it's
 one-to-one, end-to-end encryption is pretty simple and could be made simple
 to use; but people also want chat rooms, which are a much more
 complicated key management problem - unless you let the server do the
 encryption.  Do you enable it only for one-to-one conversations?  Provide
 different interfaces for one-to-one and chat room discussions?

 Even for one-to-one discussions, these days, people want transparent
 movement across their hardware. 

Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread Kent Borg

On 09/08/2013 09:15 PM, Perry E. Metzger wrote:
Perhaps you don't see the big worry, but real world experience says it 
is something everyone else should worry about anyway.


I overstated it.

Good random numbers are crucial, and like any cryptography, exact 
details matter.  Programmers are constantly making embarrassing 
mistakes.  (The recent Android RNG bug, was that Sun, Oracle, or Google?)


But there is no special reason to worry about corrupted HW RNGs, because
one should not be using them as-is; there are better ways to get good
random data, ways not obvious to a naive civilian but still well known.


Snowden reassured us when he said that good cryptography is still good 
cryptography.  If that includes both hashes and cyphers, then the 
fundamental components of sensible hybrid RNGs are sound.
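
As a toy illustration of the hybrid approach (a sketch, not a production
RNG): never hand out the hardware source's bytes directly, but mix every
source through a hash, so that one corrupted source cannot control the
output as long as at least one other source is good and the hash holds up.
The read_hw_rng() name is a placeholder for whatever hardware source you
actually have.

    import hashlib, os, time

    def read_hw_rng(n):
        # Placeholder: a real system would read the hardware device here.
        return os.urandom(n)

    def hybrid_random(n=32):
        h = hashlib.sha256()
        h.update(read_hw_rng(64))                     # possibly-suspect hardware source
        h.update(os.urandom(64))                      # OS entropy pool
        h.update(str(time.monotonic_ns()).encode())   # a little extra local state
        # In practice this would seed a proper DRBG; a single hash call is
        # only enough to show the mixing idea.
        return h.digest()[:n]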


Much more worrisome is whether Manchurian Circuits have been added to 
any hardware, no matter its admitted purpose, just waiting to be activated.


-kb

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Der Spiegel: NSA Can Spy on Smart Phone Data

2013-09-08 Thread Jerry Leichter
Apparently this was just a teaser article.  The following seems to be the 
full story:  http://cryptome.org/2013/09/nsa-smartphones.pdf  I can't tell for 
sure - it's the German original, and my German is non-existent.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-08 Thread Peter Saint-Andre

On 9/7/13 9:06 PM, Christian Huitema wrote:
 Pairwise shared secrets are just about the only thing that
 scales worse than public key distribution by way of PGP key
 fingerprints on business cards.   The equivalent of CAs in an
 all-symmetric world is KDCs.  Instead of having the power to
 enable an active attack on you today, KDCs have the power to
 enable a passive attack on you forever.  If we want secure crypto
 that can be used by everyone, with minimal trust, public key is
 the only way to do it.
 
 
 I am certainly not going to advocate Internet-scale KDC. But what
 if the application does not need to scale more than a network of 
 friends?

A thousand times yes.

One doesn't need to communicate with several billion people, and we
don't need systems that scale up that high. Most folks just want to
interact (chat, share photos, voice/video conference, etc.) with their
friends and family and colleagues -- maybe 50 - 500 people. IMHO that is
as high as we need to scale for secure communication. (I'm talking
about individual communication, not enterprise stuff.)

What about talking with someone new? Well, we can design separate
protocols that enable you to be introduced to someone you haven't
communicated with before (we already do that with things like FOAF,
LinkedIn, Facebook). Part of that introduction might involve learning
the new person's public key from someone you already trust (no need
for Internet-scale certificate authorities). You could use that public
key for bootstrapping the pairwise shared secrets.
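
One plausible way to do that bootstrapping (an assumption for
illustration, not something specified in this thread) is an X25519
exchange followed by a KDF, here via the Python 'cryptography' package:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    alice_priv = X25519PrivateKey.generate()
    bob_priv = X25519PrivateKey.generate()

    # Alice learned bob_priv.public_key() through a trusted introduction,
    # and Bob learned alice_priv.public_key() the same way.
    def pairwise_secret(my_priv, their_pub):
        shared = my_priv.exchange(their_pub)
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"friend-intro-pairwise-v1").derive(shared)

    k_alice = pairwise_secret(alice_priv, bob_priv.public_key())
    k_bob = pairwise_secret(bob_priv, alice_priv.public_key())
    assert k_alice == k_bob   # both ends now hold the same 32-byte secret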

Another attractive aspect of a network of friends is that it can be
used for mix networking (route messages through your friends) and for
things like less-than-completely-public media relays and data proxies
for voice, video, file transfer, etc. And such relays might just live
on those little home devices that Perry is talking about, separate
from the cloud.

Peter

-- 
Peter Saint-Andre
https://stpeter.im/


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Suite B after today's news

2013-09-08 Thread Peter Gutmann
Ralph Holz ralph-cryptometz...@ralphholz.de writes:

I've followed that list for a while. What I find weird is that there should
be much dissent at all. This is about increasing security based on adding
quite well-understood mechanisms. What's to be so opposed to there?

There wasn't really much dissent (there was some discussion, both on and off-
list, which I've tried to address in updates of the draft), it's just that the
WG chairs don't seem to want to move on it.

Does adding some ciphersuites really require an extension, maybe even on the
Standards Track? I shouldn't think so, looking at the RFCs that already do
this, e.g. RFC 5289 for AES-GCM. Just go for an Informational. FWIW, even
HTTPS is Informational.

I've heard from implementers at Large Organisations that having it non-
standards-track makes it hard to get it adopted there.  I guess I could go for
Informational if all else fails.

I don't think it hurts to let users and operators vote with their feet here.

That's what's already happened/happening, problem is that without an RFC to
nail down at least the extension ID it's a bit hard for commercial vendors to
commit to it.

Peter.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Usage models (was Re: In the face of cooperative end-points, PFS doesn't help)

2013-09-08 Thread Peter Saint-Andre

On 9/8/13 1:51 PM, Perry E. Metzger wrote:
 On Sun, 8 Sep 2013 14:50:07 -0400 Jerry Leichter
 leich...@lrw.com wrote:
 Even for one-to-one discussions, these days, people want 
 transparent movement across their hardware.  If I'm in a chat 
 session on my laptop and leave the house, I'd like to be able to 
 continue on my phone.  How do I hand off the conversation - and
 the keys?
 
 I wrote about this a couple of weeks ago, see:
 
 http://www.metzdowd.com/pipermail/cryptography/2013-August/016872.html

  In summary, it would appear that the most viable solution is to
 make the end-to-end encryption endpoint a piece of hardware the
 user owns (say the oft mentioned $50 Raspberry Pi class machine on
 their home net) and let the user interact with it over an encrypted
 connection (say running a normal protocol like Jabber client to
 server protocol over TLS, or IMAP over TLS, or https: and a web
 client.)

Yes, that is a possibility. Personally I'm still mulling over whether
we'd want your little home device to be a Jabber server (typically
requiring a stable IP address or an FQDN), a standard Jabber client
connected to some other server (which might be a personal server at
your VPS or a small-scale server for friends and family), or something
outside of XMPP entirely that merely advertises its reachability via
some other protocol over Jabber (in its vCard or presence information).

 It is a compromise, but one that fits with the usage pattern
 almost everyone has gotten used to. It cannot be done with the
 existing cloud model, though -- the user needs to own the box or we
 can't simultaneously maintain current protocols (and thus current
 clients) and current usage patterns.

I very much agree.

Peter

-- 
Peter Saint-Andre
https://stpeter.im/


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread Jerry Leichter
On Sep 8, 2013, at 9:15 PM, Perry E. Metzger wrote:
 I don't see the big worry about how hard it is to generate random 
 numbers unless:
 
 Lenstra, Heninger and others have shown mass breaks of keys based
 on random number generator flaws in the field. Random number
 generators have been the source of a huge number of breaks over time.
 
 Perhaps you don't see the big worry, but real world experience says
 it is something everyone else should worry about anyway.
Which brings to light the question:  Just *why* have so many random 
number generators proved to be so weak?  If we knew the past trouble spots, we 
could try to avoid them, or at least pay special attention to them during 
reviews, in the future.

I'm going entirely off of memory here, and a better, more data-driven 
approach might be worth doing, but I can think of three broad classes of root 
causes of past breaks:

1.  The designers just plain didn't understand the problem and used some 
obvious - and, in retrospect, obviously wrong - technique.  (For example, they 
didn't understand the concept of entropy and simply fed a low-entropy source 
into a whitener of some kind - often MD5 or SHA-1.  The result can *look* 
impressively random, but is cryptographically worthless.)
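
A quick illustration of class 1, with hypothetical names: here the
"entropy source" is a process ID, so at most a few tens of thousands of
values.  The output looks random, but an attacker simply hashes the same
small seed space.

    import hashlib

    def bad_keygen(pid):
        # 32 bytes that pass eyeball and statistical tests, yet carry only
        # about 15 bits of entropy.
        return hashlib.sha256(b"seed:" + str(pid).encode()).digest()

    secret = bad_keygen(4242)   # the victim's "random" key

    # The attacker just walks the seed space:
    recovered = next(pid for pid in range(1, 65536) if bad_keygen(pid) == secret)
    print(recovered)            # 4242, recovered in well under a second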

2.  The entropy available from the sources used was much less, at least in some 
circumstances (e.g., at startup) than the designers assumed.

3.  The code used in good random sources can look strange to programmers not 
familiar with it, and may even look buggy.  Sometimes good generators get 
ruined by later programmers who clean up the code.

-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Der Spiegel: NSA Can Spy on Smart Phone Data

2013-09-08 Thread Christian Huitema

 Apparently this was just a teaser article.  The following is apparently the 
 full story:  http://cryptome.org/2013/09/nsa-smartphones.pdf  I can't tell  
 for sure - it's the German original, and my German is non-existent.

The high-level summary is that phones contain a great deal of interesting 
information, that they can target iPhone and Android phones, and that after 
some pretty long efforts they can hack the BlackBerry too.  Bottom line: get a 
Windows Phone...

-- Christian Huitema

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography