Re: [Cryptography] Killing two IV related birds with one stone

2013-09-11 Thread Perry E. Metzger
On Wed, 11 Sep 2013 20:01:28 -0400 Jerry Leichter wrote:
> > ...Note that if you still transmit the IVs, a misimplemented
> > client could still interoperate with a malicious counterparty
> > that did not use the enforced method for IV calculation. If you
> > don't transmit the IVs at all but calculate them, the system will
> > not interoperate if the implicit IVs aren't calculated the same
> > way by both sides, thus ensuring that the covert channel is
> > closed.

> Ah, but where did the session and IV-generating keys come from?
> The same random generator you now don't trust to directly give you
> an IV?

Certainly, but if you remove most or all covert channels, you've
narrowed the problem down to auditing the RNG instead of having to
audit much more of the system. It is all a question of small steps
towards better assurance. No one measure will fix everything.

-- 
Perry E. Metzger  pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Killing two IV related birds with one stone

2013-09-11 Thread Jerry Leichter
On Sep 11, 2013, at 6:51 PM, Perry E. Metzger wrote:
> > It occurs to me that specifying that IVs for CBC mode in protocols
> > like IPsec, TLS, etc. be generated by using a block cipher in counter
> > mode, and that the IVs be implicit rather than transmitted, kills two
> > birds with one stone.
Of course, now you're going to need to agree on two keys - one for the main 
cipher, one for the IV-generating cipher.  Seems like a great deal of trouble 
to go to in order to rescue a mode with few advantages.  (Perry and I exchanged some 
private mail on this subject.  He claims CBC has an advantage over CTR because 
CTR allows you to deterministically modify the plaintext "under" the 
encryption.  I used to favor CBC for that reason as well, though in fact you 
can modify the text anyway by replaying a previous block - it's just harder to 
control.  I've become convinced, though, that CBC without authentication is way 
too insecure to use.  Once you require authentication, CBC has no advantages I 
can see over CTR.)
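Jerry's malleability point is easy to demonstrate with any CTR-style stream: flipping ciphertext bits flips the same plaintext bits. A minimal sketch, using a SHA-256-based toy keystream as a stand-in for AES-CTR (the message, key, and field layout are invented purely for illustration):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy CTR-style keystream: hash(key || nonce || counter) per 32-byte
    # block. Stands in for AES-CTR only to illustrate malleability.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"k" * 16, b"n" * 8
pt = b"PAY $0000100 TO ALICE"
ct = xor(pt, keystream(key, nonce, len(pt)))

# The attacker knows the plaintext layout but never the key:
# flip the amount field "0000100" (bytes 5..11) to "9999999".
delta = xor(b"0000100", b"9999999")
forged = ct[:5] + xor(ct[5:12], delta) + ct[12:]

recovered = xor(forged, keystream(key, nonce, len(pt)))
assert recovered == b"PAY $9999999 TO ALICE"
```

Without a MAC the receiver has no way to notice the change, which is exactly why "once you require authentication" the argument for CBC evaporates.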

But if you insist on CBC ... it's not clear to me whether the attack in 
Rogaway's paper goes through once authentication is added.  If it doesn't, E(0) 
does just fine (and of course doesn't have to be transmitted).

> ...Note that if you still transmit the IVs, a misimplemented client
> could still interoperate with a malicious counterparty that did not
> use the enforced method for IV calculation. If you don't transmit
> the IVs at all but calculate them, the system will not interoperate if
> the implicit IVs aren't calculated the same way by both sides, thus
> ensuring that the covert channel is closed.
Ah, but where did the session and IV-generating keys come from?  The same 
random generator you now don't trust to directly give you an IV?

-- Jerry




Re: [Cryptography] Availability of plaintext/ciphertext pairs (was Re: In the face of "cooperative" end-points, PFS doesn't help)

2013-09-11 Thread Nemo
Jerry Leichter  writes:

> The real problem is that "unpredictable" has no definition.

Rogaway provides the definition in the paragraph we are discussing...

> Rogaway specifically says that if what you mean by "unpredictable" is
> "random but biased" (very informally), then you lose some security in
> proportion to the degree of bias: "A quantitative statement of such
> results would 'give up' in the ind$ advantage an amount proportional
> to the e(q, t) value defined above."

That "e(q,t) value defined above" is the probability that the attacker
can predict the IV after q samples given time t. That appears to be a
very precise definition of "predictability", and the smaller it gets,
the closer you get to random-IV security.

But enough of this particular rat hole.

> I actually have no problem with your rephrased statement.  My concern
> was the apparently flippant dismissal of all "academic" work as
> "assuming a can opener".

Fair enough; I apologize for my flippancy. Of course the assumption of a
"strong block cipher" is justified by massive amounts of painstaking
effort expended in attempts to crack them.

Nonetheless, I think it would be wise to build in additional margin
anywhere we can get it cheaply.

> Do I wish we had a way to prove something secure without assumptions
> beyond basic mathematics?  Absolutely; everyone would love to see
> that.  But we have no idea how to do it.

I doubt we will have provable complexity lower bounds for useful
cryptographic algorithms until well after P vs. NP is resolved.  That
is, not soon.

Until then, provable security is purely about reductions. There is
nothing wrong with that. And as I said before, I believe we should worry
greatly about theoretical attacks that invalidate those reductions,
regardless of how "purely academic" they may seem to an engineer.

> On the matter of a secret IV: It can't actually help much.  Any suffix
> of a CBC encryption (treated as a sequence of blocks, not bytes) is
> itself a valid CBC encryption.

Yes, obviously... which is why I wrote "I am particularly thinking of
CTR mode and its relatives".

It's a pity OCB mode is patented.

 - Nemo


Re: [Cryptography] Radioactive random numbers

2013-09-11 Thread Perry E. Metzger
On Thu, 12 Sep 2013 08:47:16 +1000 (EST) Dave Horsfall wrote:
> Another whacky idea...
> 
> Given that there is One True Source of randomness to wit
> radioactive emission, has anyone considered playing with old smoke
> detectors?

People have experimented with all sorts of stuff, and you can make
any of hundreds of methods from cameras+lava lamp+hash function to
sound cards to radioactive sources work if you have budget and time.

The issue is not finding ways to generate entropy. The issue is that
you need something that's cheap and ubiquitous.

User endpoints like cell phones have users to help them generate
entropy, but the world's routers, servers, etc. do not have good
sources, especially at first boot time, and for customer NAT boxes and
the like the price points are vicious.

The attraction of methods that use nothing but a handful of
transistors is that they can be fabricated on chip and thus have
nearly zero marginal cost. The huge disadvantage is that if your
opponent can convince chip manufacturers to introduce small changes
into their design, you're in trouble.
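The cameras-plus-lava-lamp-plus-hash approach mentioned above boils down to a conditioning step: feed raw, biased samples through a hash to distill a seed. A sketch, where SHA-256 stands in for a vetted conditioning function and the noise sources are placeholders (a hash cannot create entropy, only distill what the sources actually contain):

```python
import hashlib, os

def condition(*noise_sources: bytes, out_len: int = 32) -> bytes:
    # Compress raw, possibly biased noise (camera frames, sound-card
    # hiss, decay timings) into a short uniform-looking seed.
    h = hashlib.sha256()
    for src in noise_sources:
        h.update(len(src).to_bytes(4, "big"))  # length-prefix each source
        h.update(src)
    return h.digest()[:out_len]

frame = os.urandom(100_000)   # placeholder for a lava-lamp camera frame
audio = os.urandom(8_000)     # placeholder for sound-card noise
seed = condition(frame, audio)
assert len(seed) == 32
```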

Perry
-- 
Perry E. Metzger  pe...@piermont.com


Re: [Cryptography] Summary of the discussion so far

2013-09-11 Thread Nemo
Phillip Hallam-Baker  writes:

> I have attempted to produce a summary of the discussion so far for use
> as a requirements document for the PRISM-PROOF email scheme. This is
> now available as an Internet draft.
>
> http://www.ietf.org/id/draft-hallambaker-prismproof-req-00.txt

First, I suggest removing all remotely political commentary and sticking
to technical facts.  Phrases like "questionable constitutional validity"
have no place in an Internet draft and harm the document, in my opinion.

Second, your section on Perfect Forward Secrecy ignores the purpose of
PFS, which has nothing to do with defense against cryptanalytic attacks.
The purpose of PFS is this: Should an attacker compel you to disclose
your private key, or should they compromise or confiscate the system
where your private key is stored, they could then decrypt all of your
earlier communications...  unless you used PFS.
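That property is concrete with ephemeral Diffie-Hellman, which is how PFS ciphersuites obtain it: the session key depends only on throwaway exponents, not on any long-term key. A toy sketch (the Mersenne-prime group here is far too small for real use; deployments use standardized groups):

```python
import hashlib, secrets

P = 2**127 - 1   # a Mersenne prime; toy group, nowhere near a safe size
G = 5

def ephemeral():
    # Throwaway secret exponent and its public value.
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

ax, aX = ephemeral()   # Alice
bx, bX = ephemeral()   # Bob

# Each side derives the session key from the *ephemeral* shared secret.
ka = hashlib.sha256(str(pow(bX, ax, P)).encode()).digest()
kb = hashlib.sha256(str(pow(aX, bx, P)).encode()).digest()
assert ka == kb

# After the session, ax and bx are erased. Seizing a party's long-term
# (signing) key later cannot recover ka -- that is forward secrecy.
```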

 - Nemo


[Cryptography] Radioactive random numbers

2013-09-11 Thread Dave Horsfall
Another whacky idea...

Given that there is One True Source of randomness to wit radioactive 
emission, has anyone considered playing with old smoke detectors?

The ionising types are being phased out in favour of optical (at least in 
Australia) so there must be heaps of them lying around.

I know - legislative requirements, HAZMAT etc, but it ought to make for a 
good thought experiment.

-- Dave


[Cryptography] Killing two IV related birds with one stone

2013-09-11 Thread Perry E. Metzger
It occurs to me that specifying that IVs for CBC mode in protocols
like IPsec, TLS, etc. be generated by using a block cipher in counter
mode, and that the IVs be implicit rather than transmitted, kills two
birds with one stone.

The first bird is the obvious one: we now know IVs are unpredictable
and will not repeat.

The second bird is less obvious: we've just gotten rid of a covert
channel for malicious hardware to leak information.

Note that if you still transmit the IVs, a misimplemented client
could still interoperate with a malicious counterparty that did not
use the enforced method for IV calculation. If you don't transmit
the IVs at all but calculate them, the system will not interoperate if
the implicit IVs aren't calculated the same way by both sides, thus
ensuring that the covert channel is closed.
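A minimal sketch of the derivation half of this proposal, with HMAC-SHA256 standing in for the separately keyed block cipher run in counter mode (the names and sizes are illustrative assumptions, not a spec):

```python
import hmac, hashlib

def implicit_iv(iv_key: bytes, seq: int, iv_len: int = 16) -> bytes:
    # Perry's scheme in spirit: IV_n = E_Kiv(n), a separately keyed
    # block cipher run in counter mode. HMAC-SHA256 stands in for the
    # block cipher; both sides compute the IV, neither transmits it.
    return hmac.new(iv_key, seq.to_bytes(8, "big"),
                    hashlib.sha256).digest()[:iv_len]

# Both endpoints derive record 7's IV independently; a peer that
# chooses its own IVs simply fails to interoperate, closing the channel.
assert implicit_iv(b"K" * 32, 7) == implicit_iv(b"K" * 32, 7)
assert implicit_iv(b"K" * 32, 7) != implicit_iv(b"K" * 32, 8)
```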

Perry
-- 
Perry E. Metzger  pe...@piermont.com


Re: [Cryptography] Random number generation influenced, HW RNG

2013-09-11 Thread Jerry Leichter
On Sep 11, 2013, at 1:22 PM, Perry E. Metzger  wrote:
>> Let us consider that source of colored noise with which we are most 
>> familiar:  The human voice.  Efforts to realistically simulate a
>> human voice have not been very successful.  The most successful
>> approach has been the ransom note approach
> I don't think this is true
It isn't.  See 
http://www.kth.se/en/csc/forskning/small-visionary-projects/tidigare-svp/fa-en-konstgjord-rost-att-lata-som-en-riktig-manniska-1.379755

On the underlying issue of whether a software model of a hardware RNG could be 
accurate enough for ... some not-quite-specified purpose:  Gate-level 
simulation of circuits is a simple off-the-shelf technology.  If the randomness 
is coming from below that, you need more accurate simulations, but *no one* 
builds a chip these days without building a detailed physical model running in 
a simulator first.  The cost of getting it wrong would be way too large.  Some 
levels of the simulation use public information; at some depth, you probably 
get into process details that would be closely held.

Since it's not clear exactly how you would use this detailed model to, say, 
audit a real hardware generator, it's not clear just how detailed a model you 
would need.
  
-- Jerry



Re: [Cryptography] Availability of plaintext/ciphertext pairs (was Re: In the face of "cooperative" end-points, PFS doesn't help)

2013-09-11 Thread Nemo
Jerry Leichter  writes:

> The older literature requires that the IV be "unpredictable" (an
> ill-defined term), but in fact if you want any kind of security proofs
> for CBC, it must actually be random.

Wrong, according to the Rogaway paper you cited.  Pull up
http://www.cs.ucdavis.edu/~rogaway/papers/modes.pdf and read the last
paragraph of section I.6 (pages 20-21).  Excerpt:

We concur, without trying to formally show theorems, that all of the
SP 800-38A modes that are secure as probabilistic encryption schemes
-- namely, CBC, CFB, and OFB -- will remain secure if the IV is not
perfectly random, but only unguessable.

Thank you for the reference, by the way; it is an excellent paper.

>> Back to CBC mode and secret IVs. I do not think we will find much
>> guidance from the academic side on this, because they tend to "assume
>> a can opener"... Er, I mean a "secure block cipher"... And given that
>> assumption, all of the usual modes are provably secure with cleartext
>> IVs.

> Incorrect on multiple levels.  See the paper I mentioned in my
> response to Perry.

If you are going to call me wrong in a public forum, please have the
courtesy to be specific. My statement was, in fact, correct in every
detail.

To rephrase:

Security proofs for block cipher modes never depend on keeping the IV
confidential from the attacker. Standard practice (e.g. TLS, SSH) is to
send it in the clear, and this is fine as far as "provable security" is
concerned.

Rogaway's paper does point out, among other things, that naive handling
of the IV can break the security proofs; e.g., for the scheme you
described earlier in this thread and incorrectly attributed to Rogaway.

My point is that if the IV can be kept confidential cheaply, why not? (I
am particularly thinking of CTR mode and its relatives.)

 - Nemo


Re: [Cryptography] Squaring Zooko's triangle

2013-09-11 Thread Guido Witmond
On 09/11/13 13:23, Paul Crowley wrote:
> From the title it sounds like you're talking about my 2007 proposal:
> 
> http://www.lshift.net/blog/2007/11/10/squaring-zookos-triangle
> http://www.lshift.net/blog/2007/11/21/squaring-zookos-triangle-part-two
> 
> This uses key stretching to increase the work of generating a colliding
> identifier from 2^64 to 2^88 steps.

Hi Paul,

Reading your blog, you've come up with a way to encode a public key into
a much more memorable string of words, although the user is not free to
choose the name. I go a bit further in that direction.

In Eccentric Authentication, the usernames (nicknames) are composed of a
domain name and an account name. Just like email addresses. The domain
name is given by the site; the account name is your choice, as long as
it is unique at that site. (There can be a foo at google, a foo at
gmail, and a foo at yahoo.) That mirrors how people expect email
addresses to be unique.

To create a full name, the user chooses a site and opens an account
there. The account name is free for the user to choose (subject to
availability and site rules). If the requested account name is not yet
taken, the site's local CA signs the name (and the user's public key)
into a client certificate.

You can use this certificate to log in at the site, but also to encrypt
and sign messages.

To make names Zooko-proof, you need to make sure that once a name is
given (bound to a value), it cannot be changed anymore.

For that I use a form of Certificate Registry for logging. Once you've
acquired a client certificate, you send it to the registry. It stores
the certificate keyed by its full name, i.e., anyone can look up the name
at the registry and retrieve your certificate.

This registry protects against man in the middle attacks. When you
encounter a signed message somewhere, you look up the certificate in the
registry. You should expect a single answer, namely, the certificate
that matches the signature on the message.

If you receive the matching certificate, it is proof that the full name
is unique and that the public key in the certificate can be used.

If you receive a single answer with a different certificate, you know
that someone is trying a mitm between you and the other party.
You submit the one that you've discovered to the registry so it will be
there for everyone to see.

If there are multiple certificates (bearing that same full name) signed
by the same CA, it is the CA that has become dishonest. The protocol
explicitly calls the site Dishonest.

If there are multiple entries bearing the same name but from different
CAs, there has been a DNSSEC registry hack. The site should change
DNSSEC-registrar. And the key is useless.
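The case analysis above can be sketched as a small hypothetical checker (the function, record shape, and return labels are my invention, purely to restate the logic):

```python
def classify(records, expected_cert):
    # records: (ca_name, certificate) pairs the registry returns for one
    # full name; expected_cert: the certificate seen on the message.
    certs = [cert for _, cert in records]
    cas = {ca for ca, _ in records}
    if len(records) == 1:
        return "ok" if certs[0] == expected_cert else "mitm"
    if len(cas) == 1:
        return "dishonest site"       # one CA signed the same name twice
    return "dnssec registry hack"     # same name signed by different CAs

assert classify([("ca.shop.example", b"cert-A")], b"cert-A") == "ok"
assert classify([("ca.shop.example", b"cert-B")], b"cert-A") == "mitm"
assert classify([("ca.shop.example", b"cert-A"),
                 ("ca.shop.example", b"cert-B")],
                b"cert-A") == "dishonest site"
assert classify([("ca.shop.example", b"cert-A"),
                 ("ca.other.example", b"cert-A")],
                b"cert-A") == "dnssec registry hack"
```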

In general, every once in a while you check that your name is still
unique, just to make sure that the site keeps its promise to hand
out each name only once.

You also check out the names of new communication partners, just before
and for a while after first contact. When you still find only your and
their names with the expected nicknames, there has been no mitm and you
have validated that person's public key. (As described in my blog
"The Holy Grail of Cryptography" [0].) You can keep using this person's
public key, even if the site gets compromised later. Just add it to your
address book.


I hope it has become clear how I square the triangle. Feel free to point
out omissions or request clarifications.

With kind regards, Guido Witmond

0: http://eccentric-authentication.org





Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-11 Thread Jerry Leichter
On Sep 11, 2013, at 1:53 AM, zooko  wrote:
> DJB's Ed25519 takes [using message context as part of random number 
> generation] one step further, and makes the nonce determined *solely* by the 
> message and the secret key, avoiding the PRNG part altogether:
This is not *necessarily* safe.  In another thread, we discussed whether 
choosing the IV for CBC mode by encrypting 0 with the session key was 
sufficient to meet the randomness requirements.  It turns out it is not.  I 
won't repeat the link to Rogaway's paper on the subject, where he shows that 
using this technique is strictly weaker than using a true random IV.

That doesn't mean the way it's done in Ed25519 is unsafe, just that you cannot 
generically assume that computing a random value from existing private 
information is safe.
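For reference, the derivation Jerry is discussing looks roughly like the sketch below, with HMAC-SHA512 standing in for Ed25519's actual construction (Ed25519 hashes a secret prefix together with the message; this is the same idea, not the real algorithm):

```python
import hashlib, hmac

def det_nonce(secret_key: bytes, message: bytes) -> bytes:
    # Ed25519-style in spirit: the nonce is a keyed hash of the message,
    # so no RNG is consulted at signing time.
    return hmac.new(secret_key, message, hashlib.sha512).digest()

# Same message -> same nonce (safe only because the message is hashed
# in); different messages -> independent-looking nonces.
n1 = det_nonce(b"s" * 32, b"msg-1")
assert n1 == det_nonce(b"s" * 32, b"msg-1")
assert n1 != det_nonce(b"s" * 32, b"msg-2")
```

Jerry's caveat applies unchanged: this pattern is sound for Ed25519's nonce but cannot be assumed safe generically, as the E(0)-as-IV example shows.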
-- Jerry



Re: [Cryptography] Availability of plaintext/ciphertext pairs (was Re: In the face of "cooperative" end-points, PFS doesn't help)

2013-09-11 Thread Jerry Leichter
On Sep 11, 2013, at 5:57 PM, Nemo  wrote:
>> The older literature requires that the IV be "unpredictable" (an
>> ill-defined term), but in fact if you want any kind of security proofs
>> for CBC, it must actually be random.
> 
> Wrong, according to the Rogaway paper you cited.  Pull up
> http://www.cs.ucdavis.edu/~rogaway/papers/modes.pdf and read the last
> paragraph of section I.6 (pages 20-21).  Excerpt:
> 
>We concur, without trying to formally show theorems, that all of the
>SP 800-38A modes that are secure as probabilistic encryption schemes
>-- namely, CBC, CFB, and OFB -- will remain secure if the IV is not
>perfectly random, but only unguessable.
The real problem is that "unpredictable" has no definition.  E(0) with the 
session key is "unpredictable" to an attacker, but as the paper shows, it 
cannot safely be used for the IV.  Rogaway specifically says that if what you 
mean by "unpredictable" is "random but biased" (very informally), then you lose 
some security in proportion to the degree of bias:  "A quantitative statement 
of such results would “give up” in the ind$ advantage an amount proportional to 
the ε(q, t) value defined above."

>>> I do not think we will find much guidance from the academic side on 
>>> [secret IV's], because they tend to "assume a can opener"... Er, I mean a 
>>> "secure block cipher"... And given that assumption, all of the usual modes 
>>> are provably secure with cleartext IVs.
> 
>> Incorrect on multiple levels.  See the paper I mentioned in my
>> response to Perry.
> 
> If you are going to call me wrong in a public forum, please have the
> courtesy to be specific. My statement was, in fact, correct in every
> detail.
> 
> To rephrase:
I actually have no problem with your rephrased statement.  My concern was the 
apparently flippant dismissal of all "academic" work as "assuming a can 
opener".  Yes, there's some like that.  There's also some that shows how given 
weaker assumptions you can create a provably secure block cipher (though in 
practice it's not clear to me that any real block cipher is really created that 
way).  Beyond that, "provably secure" is slippery - there are many, many 
notions of security.  Rogaway's paper gives a particular definition for 
"secure" and does indeed show that if you have a random IV, CBC attains it.  
But he also points out that that's a very weak definition of "secure" - but 
without authentication, you can't get any more.

Do I wish we had a way to prove something secure without assumptions beyond 
basic mathematics?  Absolutely; everyone would love to see that.  But we have 
no idea how to do it.  All we can do is follow the traditional path of 
mathematics and (a) make the assumptions as clear, simple, limited, and 
"obvious" as possible; (b) show what happens as the assumptions are relaxed or, 
sometimes, strengthened.  That's what you find in the good cryptographic work.  
(BTW, if you think I'm defending my own work here - far from it.  I left 
academia and theoretical work behind a very long time ago - I've been a 
nuts-and-bolts systems guy for decades.)

On the matter of a secret IV:  It can't actually help much.  Any suffix of a 
CBC encryption (treated as a sequence of blocks, not bytes) is itself a valid 
CBC encryption.  Considered on its own, it has a secret IV; considered in the 
context of the immediately preceding block, it has a non-secret IV.  So a 
secret IV *at most* protects the very first block of the message.  I doubt 
anyone has tried to formalize just how much it might help simply because it's 
so small. 
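The suffix property is easy to check mechanically. The sketch below uses a toy "block cipher" (XOR with a fixed key) -- utterly insecure, but invertible, which is all the structural argument needs; the same identity holds for any real block cipher:

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

K = b"0123456789abcdef"              # toy block "cipher": E(b) = b XOR K
E = D = lambda block: xor(block, K)  # XOR is its own inverse

def cbc_encrypt(iv, blocks):
    out, prev = [], iv
    for p in blocks:
        prev = E(xor(p, prev))
        out.append(prev)
    return out

def cbc_decrypt(iv, blocks):
    out, prev = [], iv
    for c in blocks:
        out.append(xor(D(c), prev))
        prev = c
    return out

iv = b"\x00" * 16
pt = [b"A" * 16, b"B" * 16, b"C" * 16]
ct = cbc_encrypt(iv, pt)

# Any suffix of the ciphertext is a valid CBC encryption whose "IV" is
# the preceding ciphertext block -- so a secret IV shields only block 1.
assert cbc_decrypt(ct[0], ct[1:]) == pt[1:]
```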

-- Jerry


Re: [Cryptography] Evaluating draft-agl-tls-chacha20poly1305

2013-09-11 Thread Adam Langley
On Wed, Sep 11, 2013 at 12:43 PM, William Allen Simpson wrote:
> Thanks, this part I knew, although it would be good explanatory text to
> add to the draft.

Done.

> My old formulation from CBCS was developed during the old IPsec
> discussions.  It's just simpler and faster to xor the per-packet counter
> with the MAC-key than using the ChaCha cipher itself to generate
> per-packet key expansion.

XORing a per-session secret with the sequence number would not be
sufficient for Poly1305. The mask part (the final 16 bytes), at least,
needs to be uniformly distributed. Having different values be related
would be very bad. Off the cuff I'm not sure whether the evaluation
point also has the same issue, but it's not something that I'd like to
find out.

> Anyway, good explanation!  Please add it to the draft.

Done.

> No, we should design with
> the expectation that there's something wrong with every cipher (and
> every implementation), and strengthen it as best we know how.

Keep in mind that something similar to this line of thinking has been
very costly in the past:

* It held back the use of counter modes (because the input to the
cipher was mostly zeros) and encouraged the use of CBC mode instead.
* It encouraged MAC-then-Encrypt because the encryption could help
"protect" the MAC.

Both cases were rather a mistake! (The latter certainly, and I dislike
CBC mode so I'm lumping it in there too.)

This ChaCha case is very similar to running a block cipher in counter
mode, something that's solidly established now. It's also exactly as
was intended in the Salsa/ChaCha design. If ChaCha has insufficient
diffusion to cope with it then ChaCha is bust and needs to be
replaced.

I know we differ on the meaning of "conservative" in this case, but
I'm pretty comfortable with my spin on it by using ChaCha as designed,
rather than missing something important when trying for a more complex
design.


Cheers

AGL


[Cryptography] History and implementation status of Opportunistic Encryption for IPsec

2013-09-11 Thread Paul Wouters


History and implementation status of Opportunistic Encryption for IPsec


NOTE:   On September 28, there will be a memorial service in Ann Arbor
for Hugh Daniel, manager of the old IPsec FreeS/WAN Project.
Various crypto people will attend, including a bunch of us
from freeswan. Hugh would have loved nothing better than his
memorial service being used as a focal point to talk about
"new OE", so that's what we will do on Saturday and Sunday.
If you are interested in attending, feel free to contact me.


In light of the NSA achievements, a few people asked about the FreeS/WAN
IPsec OE efforts and whatever happened to it.

The short answer is, we failed and got distracted. The long answer follows
below. At the end I will talk about the current plans that have lingered in
the last two years to revive this initiative. Below I will use the word "we" a
lot. Its meaning changes based on the context as various communities touched,
merged, intersected and drifted apart.

OE in a nutshell

For those not familiar with IPsec OE as per the FreeS/WAN implementation:
when activated, a host would install a blocking policy for 0.0.0.0/0. Every
packet to an IP address would trigger the kernel to hold the packet and
signal the IKE daemon to go find an IPsec policy for that destination. If
one was found, an IPsec tunnel to the remote IP
would be established and packets would flow. If no policy was found,
would be established, and packets would flow. If no policy was found,
a "pass" hole was poked so packets would go out unencrypted. Public
keys for IP addresses were looked up in the reverse DNS by the IKE
daemon based on the destination address. To help with roaming clients
(roadwarriors), initiators could store their public key in their FQDN,
and convey their FQDN as ID when performing IKE so the remote peer could
look up their public key in the forward DNS. This came at the price of
two dynamic clients not being able to do OE to each other. (It turns
out they couldn't anyway, because of NAT.)


What were the reasons for failing to encrypt the internet with OE IPsec
(in no particular order)

1) Fragmentation of IPsec kernel stacks

In part due to the early history of FreeS/WAN combined with the export
restrictions at the time. Instead of spending more time on IKE and key
management for large scale enduser IPsec, we ended up wasting a lot of
time fixing the FreeS/WAN KLIPS IPsec stack module for each Linux release.
Another IPsec stack, which we dubbed XFRM/NETKEY, appeared around 2.6.9 and
was backported to 2.4.x. It was terribly incomplete and severely broken.
With KLIPS not being within the kernel tree, it was never taken into
account.  XFRM/NETKEY remained totally unsuitable for OE for a decade.
XFRM/NETKEY now has almost all functionality needed - I found out today
it should finally have first+last packet caching for dynamic tunnels,
which are essential for OE. Since the application's first packet triggered
the IKE mechanism, the application would start retransmitting before IKE
was completed.  Even when the tunnel finally came up, the application
was usually still waiting on that TCP retransmit.  David McCullough and
I still spend a lot of time fixing up KLIPS to work with the current
Linux kernel. Look at ipsec_kversion.h just to see what a nightmare
it has been to support Linux 2.0 to 2.6 (libreswan removed support for
anything lower than recent 2.4.x kernels).

Linux IPsec Crypto hardware acceleration in practice is only possible
with KLIPS + OCF, as the mainstream async crypto is lacking in hardware
driver support. If you want to build OE into people's router/modem/set-top
box, this is important, though admittedly less so as time has moved on
and even embedded hardware and phones are multicore or have special crypto
CPU instructions.

An effort to make the kernel the sole provider of crypto algorithms that
everyone could use also failed, and the idea was abandoned when CPU crypto
instructions appeared directly accessible from userland.

2) US citizens could not contribute code or patches to FreeS/WAN

This was John Gilmore's policy to ensure the software remained free for
US citizens. If no US citizen touched the code, it would be immune to any
presidential National Security Letter. I believe this was actually the
main reason for KLIPS not going in mainstream kernel, although personal
egos of kernel people seemed to have played a role here as well. Freeswan
people really tried had in 2000/2001 to hook KLIPS into the kernel
just the way the kernel people wanted. (Ironically, the XFRM/NETKEY
hook so bad, it even confuses tcpdump and with it every sysadmin trying
to see whether or not their traffic is encrypted) I still don't fully
understand why it was never merged, as the code was GPL, and it should
have just been merged in, even against John's wishes. Someone would
have stepped in as maintainer - after all the initial brunt of the work
had been done and we had a functional IPsec stack.

In the summer of 2003, I talked to John

Re: [Cryptography] Squaring Zooko's triangle

2013-09-11 Thread Peter Fairbrother

On 11/09/13 12:23, Paul Crowley wrote:

 From the title it sounds like you're talking about my 2007 proposal:

http://www.lshift.net/blog/2007/11/10/squaring-zookos-triangle
http://www.lshift.net/blog/2007/11/21/squaring-zookos-triangle-part-two

This uses key stretching to increase the work of generating a colliding
identifier from 2^64 to 2^88 steps.



That part is similar, though I go from 80 bits (actually 79.3 bits) to 
100 bits; and a GPG key fingerprint is similar too, though my mashes 
are shorter than either, in order to make them easy to input.


There is another difference: mashes are easy to write and input without 
error - the mash alphabet only has 31 characters: A-Z plus 0-9, but 0=O, 
1=I=J=L, 2=Z, 5=S. If one of those is misread as another in its subset 
it doesn't matter when the mash is input. Capitalisation is also irrelevant.
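A sketch of that conflation as a normalization table (the exact folding rules here are my reading of the description above, not Peter's specification):

```python
# Fold easily-confused characters onto a single representative, and
# ignore case, so any plausible misreading yields the same mash.
FOLD = str.maketrans({"O": "0", "I": "1", "J": "1", "L": "1",
                      "Z": "2", "S": "5"})

def normalize(mash: str) -> str:
    return mash.upper().translate(FOLD)

# Misreadings of the same mash all normalize identically:
assert normalize("OIL2") == normalize("011z") == "0112"
```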





However the main, big, huge difference is that a mash isn't just a hash 
of a public key - in fact as far as Alice, who doesn't understand public 
keys, is concerned:


It's just a secure VOIP number.

Maybe she needs an app to use the number on her iphone or googlephone. 
And another app to use it on her laptop or desktop - but the mash is 
your secure VOIP number.


Or it's a secure email address.

Or it's both.

Alice need not ever see the "real" voip IP address, or the real email 
address - and unless she's a cryptographer and hacker she simply won't 
be able to contact you without using strong authenticated end-to-end 
encryption - if the only address she has for you is your mash.





Contrast this with your proposal, or a PGP finger print. In order to use 
one of these, Alice has to have an email address or telephone number to 
begin with. She also has to find the key and compare it with the hash, 
in order to use it securely - but she can use the email address or 
telephone number without ever thinking about downloading or checking the 
public key.


That's just not possible if all you give out is mashes.



It's looking at the mash as an address, not as a public key or an 
adjunct to a public key service - which is why I think it's kind-of 
turning Zooko's Triangle on its head (I had never heard of ZT before :( 
- but I know Zooko though, hi Zooko!).


Or maybe not, looking at the web I see ZT in several slightly different 
forms.


But it probably is turning the OP's problem - the napkin scribble - on 
its head. You don't write your email and fingerprint on the napkin - 
just the mash.




-- Peter Fairbrother



Re: [Cryptography] What TLS ciphersuites are still OK?

2013-09-11 Thread Alan Braggins

On 10/09/13 15:58, james hughes wrote:

On Sep 9, 2013, at 9:10 PM, Tony Arcieri <basc...@gmail.com> wrote:

On Mon, Sep 9, 2013 at 9:29 AM, Ben Laurie <b...@links.org> wrote:

And the brief summary is: there's only one ciphersuite left that's
good, and unfortunately its only available in TLS 1.2:

TLS_DHE_RSA_WITH_AES_128_GCM_SHA256

A lot of people don't like GCM either ;)


Yes, GCM does have implementation sensitivities particularly around the
IV generation. That being said, the algorithm is better than most and
the implementation sensitivity obvious (don't ever reuse an IV).


I think the difficulty of getting a fast constant time implementation on
platforms without AES-NI type hardware support is more of a concern.
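The "don't ever reuse an IV" sensitivity mentioned above comes from GCM's CTR core: two messages encrypted under the same key and nonce share a keystream, so XORing the two ciphertexts cancels the keystream and leaks the XOR of the plaintexts. A toy stream cipher (SHA-256 keystream as a stand-in; not real AES-GCM) shows the leak:

```python
import hashlib

def toy_keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy CTR-style keystream (SHA-256 stand-in for the AES-CTR core
    of GCM). Illustration only, not a real cipher."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, nonce: bytes, pt: bytes) -> bytes:
    ks = toy_keystream(key, nonce, len(pt))
    return bytes(a ^ b for a, b in zip(pt, ks))

key, nonce = b"k" * 32, b"n" * 12
p1, p2 = b"attack at dawn", b"retreat at ten"
c1 = encrypt(key, nonce, p1)
c2 = encrypt(key, nonce, p2)  # same nonce reused -- the fatal mistake

# The keystream cancels: XOR of ciphertexts equals XOR of plaintexts.
xored = bytes(a ^ b for a, b in zip(c1, c2))
assert xored == bytes(a ^ b for a, b in zip(p1, p2))
```

With real GCM, nonce reuse additionally leaks the GHASH authentication key, which is why the failure mode is considered catastrophic rather than merely unfortunate.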



Re: [Cryptography] Availability of plaintext/ciphertext pairs (was Re: In the face of "cooperative" end-points, PFS doesn't help)

2013-09-11 Thread Perry E. Metzger
On Wed, 11 Sep 2013 06:49:45 +0200 Raphael Jacquot
 wrote:
> according to http://en.wikipedia.org/wiki/Padding_(cryptography) ,
> most protocols only talk about padding at the end of the cleartext
> before encryption. now, how about adding some random at the
> beginning of the cleartext, say, 2.5 times the block size, that is
> 40 bytes for the example above, of random stuff before the
> interesting text appears ?

The padding at the end is to make sure that you have a full block of
data for a block cipher, since your actual message will usually be
shorter than a full block. In symmetric systems, it is not per se a
security feature. (Asymmetric systems are another matter.)

Adding padding at the front to prevent cryptanalysts from using cribs
(that is, known plaintext) seems useless to me. Even if the padding
was of random length, it is of necessity going to be short. If you
have a technique that depends on known plaintext, crib dragging (that
is, trying all of the small number of possibilities) is easy.
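Perry's point in numbers: if the random front pad is at most ~40 bytes, a known-plaintext attacker just slides the crib across each candidate offset, multiplying the work by at most 41. A toy sketch (the XOR "cipher" is only a placeholder for whatever attack uses the crib):

```python
MAX_PAD = 40  # the 2.5-blocks-of-16 figure from the proposal above

def find_crib(decrypt_at, ciphertext: bytes, crib: bytes):
    """Crib dragging: try the known plaintext at every pad offset."""
    for offset in range(MAX_PAD + 1):
        if decrypt_at(ciphertext, offset).startswith(crib):
            return offset
    return None

# Toy "cipher": front-pad with junk bytes, then XOR with a fixed key byte.
def toy_encrypt(pt: bytes, pad_len: int, key: int = 0x5A) -> bytes:
    padded = bytes([0xEE] * pad_len) + pt
    return bytes(b ^ key for b in padded)

def toy_decrypt_at(ct: bytes, offset: int, key: int = 0x5A) -> bytes:
    return bytes(b ^ key for b in ct[offset:])

ct = toy_encrypt(b"Dear General,", 17)
assert find_crib(toy_decrypt_at, ct, b"Dear General") == 17
# At most 41 tries: front padding adds essentially no security margin.
```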


Perry
-- 
Perry E. Metzger  pe...@piermont.com


Re: [Cryptography] About those fingerprints ...

2013-09-11 Thread Tim Dierks
On Wed, Sep 11, 2013 at 1:13 PM, Jerry Leichter  wrote:

> On Sep 11, 2013, at 9:16 AM, "Andrew W. Donoho"  wrote:
> > Yesterday, Apple made the bold, unaudited claim that it will never save
> the fingerprint data outside of the A7 chip.
> By announcing it publicly, they put themselves on the line for lawsuits
> and regulatory actions all over the world if they've lied.
>
> Realistically, what would you audit?  All the hardware?  All the software,
> including all subsequent versions?
>
> This is about as strong an assurance as you could get from anything short
> of hardware and software you build yourself from very simple parts.


When it comes to litigation or actual examination, it's been demonstrated
again and again that people can hide behind their own definitions of terms
that you thought were self-evident. For example, the NSA's definition of
"target", "collect", etc., which fly in the fact of common understanding,
and exploit the loopholes in English discourse. People can lie to you
without actually uttering a demonstrable falsehood or exposing themselves
to liability, unless you have the ability to cross-examine the assertions.

I don't have a precise cite for the Apple claim, but let's take two
summaries: first, from Andrew "Apple made the bold, unaudited claim that it
will never save the fingerprint data outside of the A7 chip". Initial
questions: does this mean they won't send the data to third parties? How
about give third parties the ability to extract the data themselves? Does
the phrase "fingerprint data" include all data derived from the
fingerprint, such as minutiae?

second, from Macworld:
"the fingerprint data is encrypted and locked in the device’s new A7 chip,
that it’s never directly accessible to software and that it’s not stored on
Apple’s servers or backed up to iCloud". Similar questions: is the data
indirectly accessible? Is it stored on non-Apple servers? Etc.

Unless you can cross-examine the assertions with some kind of penalty for
dissembling, you can't be sure that an assertion means what you think or
hope it means, regardless of how straightforward and direct it sounds.

 - Tim

Re: [Cryptography] Usage models (was Re: In the face of "cooperative" end-points, PFS doesn't help)

2013-09-11 Thread James A. Donald

On 08/09/2013 21:51, Perry E. Metzger wrote:
> I wrote about this a couple of weeks ago, see:
>
> http://www.metzdowd.com/pipermail/cryptography/2013-August/016872.html

In short, https to a server that you /do/ trust.

Problem is, joe average is not going to set up his own server. Making 
setting up your own server user friendly is the same problem as making 
OTR user friendly, with knobs on.



Re: [Cryptography] People should turn on PFS in TLS (was Re: Fwd: NYTimes.com: N.S.A. Foils Much Internet Encryption)

2013-09-11 Thread Phillip Hallam-Baker
On Tue, Sep 10, 2013 at 3:56 PM, Bill Stewart wrote:

> At 11:33 AM 9/6/2013, Peter Fairbrother wrote:
>
>> However, while the case for forward secrecy is easy to make, implementing
>> it may be a little dangerous - if NSA have broken ECDH then
>> using it only gives them plaintext they maybe didn't have before.
>>
>
> I thought the normal operating mode for PFS is that there's an initial
> session key exchange (typically RSA) and authentication,
> which is used to set up an encrypted session, and within that session
> there's a DH or ECDH key exchange to set up an ephemeral session key,
> and then that session key is used for the rest of the session.
> If so, even if the NSA has broken ECDH, they presumably need to see both
> Alice and Bob's keyparts to use their break,
> which they can only do if they've cracked the outer session (possibly
> after the fact.)
> So you're not going to leak any additional plaintext by doing ECDH
> compared to sending the same plaintext without it.



One advantage of this approach is that we could use RSA for one and ECC for
the other and thus avoid most consequences of an RSA2048 break (if that is
possible).

The problem I see reviewing the list is that ECC has suddenly become
suspect and we still have doubts about the long term use of RSA.


It also has the effect of pushing the ECC IPR concerns off the CA and onto
the browser/server providers. I understand that many have already got
licenses that allow them to do what they need in that respect.

Perfect Forward Secrecy is not perfect. In fact it is no better than
regular public key. The only difference is that if the public key system is
cracked, then with PFS the attacker has to break every single key exchange
and not just the keys in the certificates; and if you use an RSA outer with
an ECC inner, then you double the cryptanalytic cost of the attack (theory
as well as computation).


I think this is the way forward.

-- 
Website: http://hallambaker.com/

[Cryptography] Summary of the discussion so far

2013-09-11 Thread Phillip Hallam-Baker
I have attempted to produce a summary of the discussion so far for use as a
requirements document for the PRISM-PROOF email scheme. This is now
available as an Internet draft.

http://www.ietf.org/id/draft-hallambaker-prismproof-req-00.txt

I have left out acknowledgements and references at the moment. That is
likely to take a whole day going back through the list and I wanted to get
this out.

If anyone wants to claim responsibility for any part of the doc then drop
me a line and I will have the black helicopter sent round.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Evaluating draft-agl-tls-chacha20poly1305

2013-09-11 Thread William Allen Simpson

On 9/11/13 10:37 AM, Adam Langley wrote:

On Tue, Sep 10, 2013 at 10:59 PM, William Allen Simpson
 wrote:

Or you could use 16 bytes, and cover all the input fields.  There's no
reason the counter part has to start at 1.


It is the case that most of the bottom row bits will be zero. However,
ChaCha20 is assumed to be secure at a 256-bit security level when used
as designed, with the bottom row being counters. If ChaCha/Salsa were
not secure in this formulation then I think they would have to be
abandoned completely.


I kinda covered this in a previous message.  No, we should design with
the expectation that there's something wrong with every cipher (and
every implementation), and strengthen it as best we know how.

It's the same principle we learned (often the hard way) in school:
 * Software designers, assume the hardware has intermittent failures.
 * Hardware designers, assume the software has intermittent failures.



Taking 8 bytes from the initial block and using it as the nonce for
the plaintext encryption would mean that there would be a ~50% chance
of a collision after 2^32 blocks. This issue affects AES-GCM, which is
why the sequence number is used here.


Sorry, you're correct there -- my mind is often still thinking of DES
with its unicity distance of 2**32, so you had to re-key anyway.
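The ~50% collision figure above follows from the birthday bound: drawing n random values from a space of size N gives collision probability roughly 1 - exp(-n^2 / 2N). For 8-byte (64-bit) nonces that works out to about 39% at exactly 2^32 blocks, crossing 50% near 1.18 * 2^32:

```python
import math

def birthday_collision_prob(n_samples: float, space_bits: int) -> float:
    """Approximate probability of at least one collision when drawing
    n_samples uniform values from a 2**space_bits space:
    p ~= 1 - exp(-n^2 / (2 * 2**space_bits))."""
    return 1.0 - math.exp(-(n_samples ** 2) / (2.0 * 2.0 ** space_bits))

# Random 64-bit nonces, as in the 8-bytes-from-the-initial-block scheme:
p = birthday_collision_prob(2.0 ** 32, 64)
assert 0.35 < p < 0.45  # ~39% at 2^32 blocks
assert birthday_collision_prob(1.177 * 2 ** 32, 64) > 0.49  # ~50% point
```

This is exactly why a counter (the TLS sequence number) is used instead of a random value: a counter cannot collide until it wraps.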



Using 16 bytes from the initial block as the full bottom row would
work, but it still assumes that we're working around a broken cipher
and it prohibits implementations which pipeline all the ChaCha blocks,
including the initial one. That may be usefully faster, although it's
not the implementation path that I've taken so far.


OK.  I see the pipeline stall.  But does poly1305 pipeline anyway?



There is an alternative formulation of Salsa/ChaCha that is designed
for random nonces, rather than counters: XSalsa/XChaCha. However,
since we have a sequence number already in TLS I've not used it.


Aha, I hadn't found this (XSalsa, there doesn't seem to be an XChaCha).
Good reading, and some of the same points I was trying to make here.



[Cryptography] Laws and cryptography

2013-09-11 Thread Grégory Alvarez
Hello,

Over the past year I was in contact with different cryptographers (I was 
designing a new symmetric algorithm) and they all told me that no governmental 
authorization was needed in order to publish it. They also told me that they 
publish papers all the time without authorization.

However, there is the Wassenaar Arrangement between the US, Europe and other 
countries, which regulates the export and use of cryptography 
(http://www.wassenaar.org/introduction/index.html).

Article 3 of Chapter 2 of the European regulation says: "An authorisation 
shall be required for the export of the dual-use items listed in Annex I" 
(http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2009:134:0001:0269:en:PDF).

Among the items they consider dual-use is a "symmetric algorithm" employing a key 
length in excess of 56 bits 
(http://www.wassenaar.org/controllists/2012/WA-LIST%20%2812%29%201/08%20-%20WA-LIST%20%2812%29%201%20-%20Cat%205P2.doc).

The department of the ministry of defense that handles this regulation can't 
answer whether publishing a cryptographic algorithm needs an authorization. However, 
the Wassenaar Arrangement clearly says that material, software and technology 
need an authorization to be exported or published.

What is actually the status of the law on cryptography and publishing new 
algorithms? Is a cryptographer who publishes a paper without governmental 
authorization an outlaw?

Re: [Cryptography] Suite B after today's news

2013-09-11 Thread Peter Gutmann
Ben Laurie  writes:

>Feel free to argue the toss with IANA:
>http://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml

Hmm, I talked to  earlier this year to let him 
know that 0x10 was already being used, I'd assumed it'd be marked as RFU, but 
I guess not.  Maybe we could use 36, which follows on from the SessionTicket 
one at 35 and is far enough up that it won't be overrun for a while.

Annoyingly, though, there are already deployed implementations that use 0x10.

Peter.


[Cryptography] Defenses against pervasive versus targeted intercept

2013-09-11 Thread Phillip Hallam-Baker
I have spent most of yesterday writing up much of the traffic on the list
so far in the form of an Internet Draft.

I am now at the section on controls and it occurs to me that the controls
relevant to preventing PRISM-like pervasive intercept capabilities are not
necessarily restricted to controls that protect against targeted intercept.

The problem I have with PRISM is that it is a group of people whose
politics I probably find repellent performing a dragnet search that may
later be used for McCarthyite/Hooverite inquisitions. So I am much more
concerned about the pervasive part than the ability to perform targeted
attacks on a few individuals who have come to notice. If the NSA wanted my
help intercepting Al Zawahiri's private emails then sign me up. My problem
is that they are intercepting far too much and lying about what they are
doing.


Let us imagine for the sake of argument that the NSA has cracked 1024 bit
RSA using some behemoth computer at a cost of roughly $1 million per key
and taking a day to do so. Given such a capability it would be logical for
them to attack high traffic/high priority 1024 bit keys. I have not looked
into the dates when the 2048 bit roll-out began (it seems to me we have been
talking about it for ten years) but that might be consistent with that 2010
date.

If people are using plain TLS without perfect forward secrecy, that crack
gives the NSA access to potentially millions of messages an hour. If the
web browsers are all using PFS then the best they can do is one message a
day.

PFS provides security even when the public keys used in the conversation
are compromised before the conversation takes place. It does not prevent
attack but it reduces the capacity of the attacker.
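The capacity argument above can be put in back-of-the-envelope numbers, using the thread's purely hypothetical figures (the $1M-per-key, one-key-per-day behemoth; the sessions-per-day rate is likewise an assumption for illustration):

```python
# Hypothetical attacker throughput from the thread: one 1024-bit key
# cracked per day at ~$1M each.
CRACKS_PER_DAY = 1
SESSIONS_PER_DAY = 10_000_000  # assumed traffic under one busy server key

# Plain TLS (static RSA key exchange): one cracked long-term server key
# exposes every recorded session encrypted under it.
exposed_static = CRACKS_PER_DAY * SESSIONS_PER_DAY

# PFS (ephemeral DH/ECDH per session): each session costs its own crack.
exposed_pfs = CRACKS_PER_DAY * 1

assert exposed_static // exposed_pfs == SESSIONS_PER_DAY
# PFS doesn't stop the attack; it divides the attacker's yield by the
# number of sessions per long-term key.
```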


Similar arguments can be made for other less-than-perfect key exchange
schemes. It is not necessary for a key exchange scheme to be absolutely
secure against all possible attack for it to be considered PRISM-Proof.

So the key distribution scheme I am looking at does have potential points
of compromise because I want it to be something millions could use rather
than just a few thousand geeks who will install but never use. But the
objective is to make those points of compromise uneconomic to exploit on
the scale of PRISM.


The NSA should have accepted court oversight of their activities. If they
had strictly limited their use of the cryptanalytic capabilities then the
existence would not have been known to low level grunts like Snowden and we
probably would not have found out.

Use of techniques like PFS restores balance.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Books on modern cryptanalysis

2013-09-11 Thread Andrew Righter
> as amazing.  I remember reading about 
> attacks that involved running chips at lower voltage than they were 
> supposed to have and that somehow allowed them to be compromised, etc.

Voltage Glitching?

A neighborly paper:
 
http://events.ccc.de/congress/2008/Fahrplan/attachments/1191_goodspeed_25c3_bslc.pdf


> 
> Anyhow, are there any (not *too* technical) books on the modern 
> techniques for attacking cryptosystems?
> 
>  Thanks.   /bernie\
> 
> -- 
> Bernie Cosell Fantasy Farm Fibers
> mailto:ber...@fantasyfarm.com Pearisburg, VA
>-->  Too many people, too few sheep  <--   
> 
> 
> 

Re: [Cryptography] People should turn on PFS in TLS (was Re: Fwd: NYTimes.com: N.S.A. Foils Much Internet Encryption)

2013-09-11 Thread Phillip Hallam-Baker
On Wed, Sep 11, 2013 at 2:40 PM, Bill Stewart wrote:

> At 10:39 AM 9/11/2013, Phillip Hallam-Baker wrote:
>
>> Perfect Forward Secrecy is not perfect. In fact it is no better than
>> regular public key. The only difference is that if the public key system is
>> cracked then with PFS the attacker has to break every single key exchange
>> and not just the keys in the certificates and if you use an RSA outer with
>> an ECC inner then you double the cryptanalytic cost of the attack (theory
>> as well as computation).
>>
>
> I wouldn't mind if it had been called Pretty Good Forward Secrecy instead,
> but it really is a lot better than regular public key.
>

My point was that the name is misleading and causes people to look for more
than is there. It took me a long time to work out how PFS worked till I
suddenly realized that it does not deliver what is advertised.



> The main difference is that cracking PFS requires breaking every single
> key exchange before the attack using cryptanalysis, while cracking the RSA
> or ECC outer layer can be done by compromising the stored private key,
> which is far easier to do using subpoenas or malware or rubber hoses than
> cryptanalysis.
>

That is my point precisely.

Though the way you put it, I have to ask if PFS deserves higher priority
than Certificate Transparency. As in something we can deploy in weeks
rather than years.

I have no problem with Certificate Transparency. What I do have trouble
with is Ben L.'s notion of Certificate Transparency and Automatic Audit in
the End Client, which imposes a lot more in the way of costs than just
transparency; moreover, he wants to push the costs out to the CAs so he
can hyper-tune the performance of his browser.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Laws and cryptography

2013-09-11 Thread John Gilmore
> ... the Wassenaar Arrangement clearly says that
> material, software and technology need an authorization to be exported /
> published.
> 
> What is actually the status of the law about cryptography and publishing
> new algorithms ? Is the cryptographer that publish a paper without
> governmental authorization an outlaw

There is a tension between fundamental freedoms and crypto controls.
Often fundamental freedoms win (as they should).  The Wassenaar
Arrangement is a private agreement among a bunch of governments -- it
is not a treaty -- and has no legal force at all.  What matters are
the statutes in your own country, and how they are interpreted.

I don't know of any cryptographers who have been punished under crypto
export controls, anywhere in the world, for publishing papers about
encryption.  So invent your own cryptosystem if you want, write about
it, and publish!

Human-written software was considered to be different from
human-written papers for a while; in the US it took three court cases
(Bernstein v. US being the first winner) to sort this out.  In the
1990s, Europe did not control freely published ("mass-market and
public-domain") software, and by 2000 that was true in the US also.

Unless you want to find and pay a lawyer with relevant expertise, the
best way to get a more-or-less definitive answer for your particular
country is to look in Bert-Jaap Koops' "Crypto Law Survey".  He has
been maintaining it for decades, and actually did his PhD thesis on
global regulations about encryption.  See:

  http://cryptolaw.org/

> The department of the ministry of defense that handle this regulation
> can't answer if publishing a cryptographic algorithm needs an
> authorization.

Can't answer, or won't?  In the United States, both the NSA and the
agencies responsible for the export controls (State Department and
Commerce Department) have been known to lie to the public,
unofficially, about what is actually allowed.  Their tendency is to
talk you into assuming that you have no rights, even if the law is
clear that you do.  Or they will tie you up in knots over how you
might be able to comply with finicky regulations, without ever telling
you that you are exempt from those regulations.  We even caught them
lying officially once or twice (e.g. refusing export of Kerberos
authentication software on the bogus theory that someone, someday,
might adapt it to do encryption).

John



Re: [Cryptography] Random number generation influenced, HW RNG

2013-09-11 Thread James A. Donald

On 2013-09-10 4:30 PM, ianG wrote:
The question of whether one could simulate a raw physical source is 
tantalising.  I see diverse opinions as to whether it is plausible, 
and thinking about it, I'm on the fence.


Let us consider that source of colored noise with which we are most 
familiar:  The human voice.  Efforts to realistically simulate a human 
voice have not been very successful.  The most successful approach has 
been the ransom note approach, merging together a lot of small clips of 
an actual human voice.


A software-simulated raw physical noise source would have to run 
hundreds of thousands of times faster.



Re: [Cryptography] About those fingerprints ...

2013-09-11 Thread Ramsey Dow
On Sep 11, 2013, at 6:16 AM, Andrew W. Donoho  wrote:
>   Yesterday, Apple made the bold, unaudited claim that it will never save 
> the fingerprint data outside of the A7 chip.

If you watch the video at http://www.apple.com/apple-events/september-2013/, 
Dan Riccio says at 61:08 that all fingerprint data is encrypted and stored in a 
"secure enclave" in the A7 SoC. The data is said to be accessable only by the 
TouchID sensor. He states that it is never available to other software, it's 
not stored on Apple servers, or backed up to iCloud. Although technical details 
are lacking at the moment, this "secure enclave" sounds a lot like a TPM to me. 
How will this be any different than storing a BitLocker key in TPM?

While it is true that NSA TAO has the capability of penetrating individual 
iPhones to potentially retrieve this data, it would be much easier to collect 
those fingerprints from other sources, like your house, or if you drive, the 
DMV database.

-- Ramsey


Re: [Cryptography] About those fingerprints ...

2013-09-11 Thread Jerry Leichter
On Sep 11, 2013, at 1:44 PM, Tim Dierks  wrote:
> When it comes to litigation or actual examination, it's been demonstrated 
> again and again that people can hide behind their own definitions of terms 
> that you thought were self-evident. For example, the NSA's definition of 
> "target", "collect", etc., which fly in the fact of common understanding, and 
> exploit the loopholes in English discourse. People can lie to you without 
> actually uttering a demonstrable falsehood or exposing themselves to 
> liability, unless you have the ability to cross-examine the assertions.
I wouldn't take it quite that far.  Government agencies always claim broader 
leeway than is granted to private actors - and even for NSA and friends, 
exactly how the courts will react to that language parsing isn't clear.  Even 
their pet FISA court, we now know from declassified documents, has angrily 
rejected some of this game playing - a second report of this is in today's New 
York Times.  Not that it did much good in terms of changing behavior.  And 
Congress, of course, can interpret stuff as it likes.

The standard for civil lawsuits and even more so for regulatory actions is 
quite a bit lower.  If Apple says "no fingerprint information leaves the 
phone", it's going to be interpreted that way.  Another article in today's 
Times reports that Google has had its argument that WiFi is "radio" hence not 
subject to wiretap laws soundly rejected by an appeals court.

> I don't have a precise cite for the Apple claim...
http://www.youtube.com/watch?v=TJkmc8-eyvE, starting at about 2:20, is one 
statement.  I won't try to transcribe it here, but short of a technical paper, 
it's about as strong and direct a statement as you're going to get.  People have 
a queasy feeling about fingerprint recognition.  If Apple wants to get them to 
use it, they have to reassure them.  It's a basic result of game theory that 
the only way you can get people to believe you is to put yourself in harm's way 
if you lie.

> ...[I]s the data indirectly accessible?
What counts as indirect access?  In one sense, the answer to this question is 
yes:  You can use your fingerprint to authorize purchases - at least from the 
iTunes/App store.  It's completely unclear - and Apple really should explain - 
how the authentication flows work.  Since they promise to keep your fingerprint 
information on the phone, they can't be sending it off to their store servers.  
On the other hand, if it's a simple "go ahead and authorize a charge for user 
U" message, what's to prevent someone from faking such a message?  (In other 
words:  If the decision is made on the phone, how do you authenticate the 
phone?)

> ...Unless you can cross-examine the assertions with some kind of penalty for 
> dissembling, you can't be sure that an assertion means what you think or hope 
> it means, regardless of how straightforward and direct it sounds.
Courts are not nearly as willing to let those before them hide behind 
complicated and unusual word constructions as you think - especially not when 
dealing with consumers.  It is, in fact, a standard rule of contract law that 
any ambiguity is to be interpreted contrary to the interests of the drafter.

None of this is specific to Apple.  Commerce depends on trust, enforced 
ultimately by courts of law.  The ability to govern depends on the consent of 
the governed - yes, even in dictatorships; they survive as long as enough of 
the population grudgingly accedes, and get toppled when they lose too much of 
the consent.  And consent ultimately also requires trust.

The games the intelligence community have been playing are extremely corrosive 
to that trust, which is why they are so dangerous.  But we have to get beyond 
that and not see attackers behind every door.  The guys who need to be stopped 
run the NSA and friends, not Apple or Facebook or Google - or even the Telco's.

-- Jerry




Re: [Cryptography] Books on modern cryptanalysis

2013-09-11 Thread Jonathan Katz

On Wed, 11 Sep 2013, Bernie Cosell wrote:


Anyhow, are there any (not *too* technical) books on the modern
techniques for attacking cryptosystems?


Really depends what you mean by "attacking"; there are attacks at the 
protocol level (e.g., padding-oracle attacks), at the crypto level (e.g., 
differential cryptanalysis), and at the physical level (e.g., side-channel 
attacks).


As a general introduction to modern crypto that covers the first two 
categories a bit, I recommend "Introduction to Modern Cryptography" by 
myself and Y. Lindell (soon to come out with a 2nd edition containing even 
more attacks!).


For block-cipher cryptanalysis, I have been very impressed by the material 
in "The Block Cipher Companion" by Knudsen and Robshaw.



Re: [Cryptography] People should turn on PFS in TLS (was Re: Fwd: NYTimes.com: N.S.A. Foils Much Internet Encryption)

2013-09-11 Thread Bill Stewart

At 10:39 AM 9/11/2013, Phillip Hallam-Baker wrote:
Perfect Forward Secrecy is not perfect. In fact it is no better than 
regular public key. The only difference is that if the public key 
system is cracked then with PFS the attacker has to break every 
single key exchange and not just the keys in the certificates and if 
you use an RSA outer with an ECC inner then you double the 
cryptanalytic cost of the attack (theory as well as computation).


I wouldn't mind if it had been called Pretty Good Forward Secrecy 
instead, but it really is a lot better than regular public key.
The main difference is that cracking PFS requires breaking every 
single key exchange before the attack using cryptanalysis, while 
cracking the RSA or ECC outer layer can be done by compromising the 
stored private key, which is far easier to do using subpoenas or 
malware or rubber hoses than cryptanalysis.


(Of course, any messages that were saved by the sender or recipient 
can still be cracked by non-cryptanalytic techniques as well, but 
that's a separate problem.)




Re: [Cryptography] What TLS ciphersuites are still OK?

2013-09-11 Thread Yaron Sheffer

On 09/11/2013 12:54 PM, Alan Braggins wrote:

On 10/09/13 15:58, james hughes wrote:

On Sep 9, 2013, at 9:10 PM, Tony Arcieri <basc...@gmail.com> wrote:

On Mon, Sep 9, 2013 at 9:29 AM, Ben Laurie <b...@links.org> wrote:

And the brief summary is: there's only one ciphersuite left that's
good, and unfortunately its only available in TLS 1.2:

TLS_DHE_RSA_WITH_AES_128_GCM_SHA256

A lot of people don't like GCM either ;)


Yes, GCM does have implementation sensitivities particularly around the
IV generation. That being said, the algorithm is better than most and
the implementation sensitivity obvious (don't ever reuse an IV).


I think the difficulty of getting a fast constant time implementation on
platforms without AES-NI type hardware support is more of a concern.


Is this any different from plain old AES-CBC?


[Cryptography] NIST reopens RNG public comment period

2013-09-11 Thread Eugen Leitl

http://csrc.nist.gov/publications/PubsDrafts.html

Sep. 9, 2013

SP 800-90 A Rev 1 B and C

DRAFT Draft SP 800-90 Series: Random Bit Generators 
800-90 A Rev. 1: Recommendation for Random Number Generation Using 
Deterministic Random Bit Generators 
800-90 B: Recommendation for the Entropy Sources Used for Random Bit Generation 
800-90 C: Recommendation for Random Bit Generator (RBG) Constructions

In light of recent reports, NIST is reopening the public comment period for 
Special Publication 800-90A and draft Special Publications 800-90B and 800-90C.
NIST is interested in public review and comment to ensure that the 
recommendations are accurate and provide the strongest cryptographic 
recommendations possible.
The public comments will close on November 6, 2013. Comments should be sent to 
rbg_comme...@nist.gov. 
 
In addition, the Computer Security Division has released a supplemental ITL 
Security Bulletin titled "NIST Opens Draft Special Publication 800-90A, 
Recommendation for Random Number Generation Using Deterministic Random Bit 
Generators, For Review and Comment (Supplemental ITL Bulletin for September 
2013)" to support the draft revision effort.

Draft SP 800-90 A Rev. 1 (721 KB) 
Draft SP 800-90 B (800 KB) 
Draft SP 800-90 C (1.1 MB)



Re: [Cryptography] Books on modern cryptanalysis

2013-09-11 Thread Max Kington
On 11 Sep 2013 18:37, "Bernie Cosell"  wrote:
>
> The recent flood of discussions has touched on many modern attacks on
> cryptosystems.   I'm long out of the crypto world [I last had a crypto
> clearance *before* differential cryptanalysys was public info!].  Attacks
> that leak a bit at a time strike me as amazing.  I remember reading about
> attacks that involved running chips at lower voltage than they were
> supposed to have and that somehow allowed them to be compromised, etc.
>
> Anyhow, are there any (not *too* technical) books on the modern
> techniques for attacking cryptosystems?

How modern is modern? :-)

I have Modern Cryptanalysis by Christopher Swenson (or at least did have
before it was loaned and I moved) and it was an excellent book and,
crucially, very accessible. Also available in Kindle format now. It is 5
years old now, though.

Regards
Max

>
>   Thanks.   /bernie\
>
> --
> Bernie Cosell Fantasy Farm Fibers
> mailto:ber...@fantasyfarm.com Pearisburg, VA
> -->  Too many people, too few sheep  <--
>
>
>

Re: [Cryptography] People should turn on PFS in TLS (was Re: Fwd: NYTimes.com: N.S.A. Foils Much Internet Encryption)

2013-09-11 Thread Viktor Dukhovni
On Tue, Sep 10, 2013 at 12:56:16PM -0700, Bill Stewart wrote:

> I thought the normal operating mode for PFS is that there's an
> initial session key exchange (typically RSA) and authentication,
> which is used to set up an encrypted session, and within that
> session there's a DH or ECDH key exchange to set up an ephemeral
> session key, and then that session key is used for the rest of the
> session.

This is not the case in TLS.  The EDH or EECDH key exchange is
performed in the clear.  The server EDH parameters are signed with
the server's private key.

https://tools.ietf.org/html/rfc2246#section-7.4.3

In TLS with EDH (aka PFS) breaking the public key algorithm of the
server certificate enables active attackers to impersonate the
server (including MITM attacks).  Breaking the Diffie-Hellman or
EC Diffie-Hellman algorithm used allows a passive attacker to
recover the session keys (the break must be repeated for each target
session); this holds even if the certificate public-key algorithm
remains secure.
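The forward-secrecy property Viktor describes can be sketched with a toy ephemeral DH exchange. This is an illustrative sketch only: the prime and generator below are far too small for real use, where a standardized group of 2048 bits or more would be negotiated.

```python
import random, hashlib

# Toy ephemeral Diffie-Hellman (illustrative parameters; NOT for real use).
P = 2**61 - 1          # a Mersenne prime, far too small for security
G = 3

def ephemeral_keypair():
    x = random.randrange(2, P - 1)   # fresh secret exponent per session
    return x, pow(G, x, P)

def session_key(own_secret, peer_public):
    shared = pow(peer_public, own_secret, P)
    return hashlib.sha256(str(shared).encode()).digest()

# Two sessions between the same parties yield unrelated session keys, so
# recording traffic and later stealing the server's long-term *signing*
# key recovers nothing; a passive attacker must break DH per session.
a1, A1 = ephemeral_keypair(); b1, B1 = ephemeral_keypair()
a2, A2 = ephemeral_keypair(); b2, B2 = ephemeral_keypair()
k1 = session_key(a1, B1)
k2 = session_key(a2, B2)
assert k1 == session_key(b1, A1)     # both ends agree within a session
assert k1 != k2                      # sessions are independent
```

The signature over the server's DH parameters (not shown) only prevents active impersonation; it plays no part in deriving the session key.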

-- 
Viktor.


[Cryptography] Books on modern cryptanalysis

2013-09-11 Thread Bernie Cosell
The recent flood of discussions has touched on many modern attacks on 
cryptosystems.   I'm long out of the crypto world [I last had a crypto 
clearance *before* differential cryptanalysis was public info!].  Attacks 
that leak a bit at a time strike me as amazing.  I remember reading about 
attacks that involved running chips at lower voltage than they were 
supposed to have and that somehow allowed them to be compromised, etc.

Anyhow, are there any (not *too* technical) books on the modern 
techniques for attacking cryptosystems?

  Thanks.   /bernie\

-- 
Bernie Cosell Fantasy Farm Fibers
mailto:ber...@fantasyfarm.com Pearisburg, VA
-->  Too many people, too few sheep  <--   





Re: [Cryptography] soft chewy center

2013-09-11 Thread bmanning
On Tue, Sep 10, 2013 at 07:05:40PM -0400, Perry E. Metzger wrote:
> On Tue, 10 Sep 2013 21:58:28 + bmann...@vacation.karoshi.com
> wrote:
> > some years back, i was part of a debate on the relative value of
> > crypto - and it was pointed out that for some sectors,  crypto
> > ensured _failure_ simply because processing the bits introduced
> > latency.  for these sectors, speed was paramount.
> > 
> > think HFT or any sort of "Flash Mob" event where you want in/out as
> > quickly as possible.  
> 
> The latency cost of a stream cipher implemented in hardware can be as
> little as the time it takes a single XOR gate to operate -- which is
> to say, low even by the standards of my friends who do high frequency
> trading (many of whom do, in fact, claim to encrypt most of their
> communications).

latency effect should, as you state, be a factor in which 
tool gets used.  for the HFT crowd, i'm fairly confident they
are talking about channel protection - they have a fairly simple
and easily scoped topology.  

> Certainly crypto is not the only (or even most important) way to make
> systems secure. In breaking in to a system, implementation bugs are
> where you look, not cracking cipher keys. However, latency qua
> latency seems like a poor reason to avoid encrypting your traffic. It
> might, of course, be a reason to avoid certain architectural
> decisions in how you use the crypto -- a public key operation per
> packet would clearly add unacceptable latency in many
> applications.

agreed.

> 
> 
> Perry
> -- 
> Perry E. Metzger  pe...@piermont.com


[Cryptography] SPDZ, a practical protocol for Multi-Party Computation

2013-09-11 Thread Eugen Leitl

http://www.mathbulletin.com/research/Breakthrough_in_cryptography_could_result_in_more_secure_computing.asp

Breakthrough in cryptography could result in more secure computing
(9/10/2013)

Tags: computer science, research, security, cryptography

Nigel Smart, Professor of Cryptology 

New research to be presented at the 18th European Symposium on Research in
Computer Security (ESORICS 2013) this week could result in a sea change in
how to secure computations.

The collaborative work between the University of Bristol and Aarhus
University (Denmark) will be presented by Bristol PhD student Peter Scholl
from the Department of Computer Science.

The paper, entitled 'Practical covertly secure MPC for dishonest majority -
or: Breaking the SPDZ limits', builds upon earlier joint work between Bristol
and Aarhus and fills in the missing pieces of the jigsaw from the group's
prior work that was presented at the CRYPTO conference in Santa Barbara last
year.

The SPDZ protocol (pronounced "Speedz") is a co-development between Bristol
and Aarhus and provides the fastest protocol known to implement a theoretical
idea called "Multi-Party Computation".

The idea behind Multi-Party Computation is that it should enable two or more
people to compute any function of their choosing on their secret inputs,
without revealing their inputs to either party. One example is an election:
voters want their vote to be counted, but they do not want their vote made
public.
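The idea can be illustrated with the simplest MPC building block, additive secret sharing. This is a toy sketch of the general concept behind protocols like SPDZ, not SPDZ itself, and the election setup below is hypothetical.

```python
import random

# Additive secret sharing over Z_p: each voter splits a 0/1 vote into
# random shares, one per tally server.  No single server learns any vote;
# only the combined server totals reveal the final tally.
P = 2**31 - 1  # modulus; any prime larger than the maximum tally works

def share(vote, n_servers):
    shares = [random.randrange(P) for _ in range(n_servers - 1)]
    shares.append((vote - sum(shares)) % P)   # shares sum to the vote mod P
    return shares

votes = [1, 0, 1, 1, 0]
n = 3
server_totals = [0] * n
for v in votes:
    for s, sh in enumerate(share(v, n)):
        server_totals[s] = (server_totals[s] + sh) % P

tally = sum(server_totals) % P
assert tally == sum(votes)
```

Real MPC protocols add secure multiplication and (in SPDZ's case) MACs to catch cheating servers, but the privacy mechanism for addition is exactly this.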

The protocol developed by the universities turns Multi-Party Computation from
a theoretical tool into a practical reality. Using the SPDZ protocol the team
can now compute complex functions in a secure manner, enabling possible
applications in the finance, drugs and chemical industries where computation
often needs to be performed on secret data.

Nigel Smart, Professor of Cryptology in the University of Bristol's
Department of Computer Science and leader on the project, said: "We have
demonstrated our protocol to various groups and organisations across the
world, and everyone is impressed by how fast we can actually perform secure
computations.

"Only a few years ago such a theoretical idea becoming reality was considered
Alice in Wonderland style over ambitious hope. However, we in Bristol
realised around five years ago that a number of advances in different areas
would enable the pipe dream to be achieved. It is great that we have been
able to demonstrate our foresight was correct."

The University of Bristol is now starting to consider commercialising the
protocol via a company Dyadic Security Limited, co-founded by Professor Smart
and Professor Yehuda Lindell from Bar-Ilan University in Israel.

Note: This story has been adapted from a news release issued by the
University of Bristol



Re: [Cryptography] Squaring Zooko's triangle

2013-09-11 Thread Paul Crowley
>From the title it sounds like you're talking about my 2007 proposal:

http://www.lshift.net/blog/2007/11/10/squaring-zookos-triangle
http://www.lshift.net/blog/2007/11/21/squaring-zookos-triangle-part-two

This uses key stretching to increase the work of generating a colliding
identifier from 2^64 to 2^88 steps.
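The key-stretching trick can be sketched in a few lines. This is an illustrative reconstruction of the idea, not the exact construction from the blog posts; the iteration count and truncation length here are placeholders (the posts describe work factors around 2^24).

```python
import hashlib

# Key stretching: forcing 2**t hash iterations per candidate identifier
# multiplies the cost of a brute-force/collision search by 2**t.
# t is kept tiny here so the demo runs quickly.
def stretched_id(pubkey_bytes, nickname, t=10):
    h = hashlib.sha256(pubkey_bytes + nickname.encode()).digest()
    for _ in range(2 ** t):          # the attacker must pay this per guess
        h = hashlib.sha256(h + nickname.encode()).digest()
    return h[:10]                    # truncated, human-shareable identifier

fp = stretched_id(b"\x04" + b"\x00" * 64, "alice")
assert len(fp) == 10
```

A legitimate user computes the stretch once per identifier; an attacker searching for a colliding identifier pays it on every attempt, which is where the 2^64 to 2^88 gap comes from.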

Re: [Cryptography] Availability of plaintext/ciphertext pairs (was Re: In the face of "cooperative" end-points, PFS doesn't help)

2013-09-11 Thread Jerry Leichter
On Sep 10, 2013, at 10:57 PM, ianG wrote:
> In a protocol I wrote with Zooko's help, we generate a random IV0 which is 
> shared in the key exchange.
> 
> http://www.webfunds.org/guide/sdp/sdp1.html
> 
> Then, we also move the padding from the end to the beginning, fill it with a 
> non-repeating length-determined value, and expand it to a size of 16-31 
> bytes.  This creates what is in effect an IV1 or second transmitted IV.
> 
> http://www.webfunds.org/guide/sdp/pad.html
You should probably look at the Rogaway paper I found after Perry pushed me to 
give a reference.  Yes, CBC with a true random IV is secure, though the 
security guarantee you can get if you don't also do authentication is rather 
weak.  The additional padding almost certainly doesn't help or hurt.  (I won't 
say that any more strongly because I haven't looked at the proofs.)

-- Jerry



Re: [Cryptography] Availability of plaintext/ciphertext pairs (was Re: In the face of "cooperative" end-points, PFS doesn't help)

2013-09-11 Thread Nemo
Jerry Leichter  writes:

> Phil Rogaway has a paper somewhere discussing the right way to
> implement cryptographic modes and API's.  In particular, he recommends
> changing the definition of CBC from:
>
> E_0 = IV # Not transmitted
> E_{i+1} = E(E_i XOR P_{i+1})

Not sure what "not transmitted" means here. In typical CBC
implementations, the IV is certainly transmitted...

> to
>
> E_0 = E(IV)  # Not transmitted
> E_{i+1} = E(E_i XOR P_{i+1})

As written, this does nothing to deny plaintext/ciphertext pairs further
along in the stream. Typical encrypted streams have lots of
mostly-predictable data (think headers), not just the first 16 bytes.

I agree with Perry; a reference to a paper would be nice.

> the known attack (whose name escapes me - it was based on an attacker
> being able to insert a prefix to the next segment because he knows the
> IV it will use before it gets sent)

I think you mean BEAST.

The security proof of CBC against Chosen Plaintext Attack requires that
the IV be unpredictable to the attacker. (I am working my way through
Dan Boneh's lectures on Coursera. Great stuff.) This was a "purely
academic" consideration, until BEAST came along.

Which leads to a personal pet peeve... If NSA is your adversary, then
**there is no such thing as a "purely academic" attack**. Any weakness,
no matter how theoretical, is worth avoiding if feasible. Implementors
keep making this mistake again and again -- "it's a purely academic
attack because blah blah blah so relax" -- and then something bad
happens years later. It would be nice if we could all finally learn this
lesson.

Back to CBC mode and secret IVs. I do not think we will find much
guidance from the academic side on this, because they tend to "assume a
can opener"... Er, I mean a "secure block cipher"... And given that
assumption, all of the usual modes are provably secure with cleartext
IVs. Nonetheless, there is no danger in keeping IVs secret, so why not?
Negotiating 512 bits of secret costs little more than 256. So just
negotiate the IVs. Or, more plausibly, negotiate a second key to encrypt
the IVs. (Since you never reuse an IV anyway, ECB mode for the IVs is
fine.)
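For concreteness, here is a toy sketch of CBC chained from E(IV) rather than the raw IV, per the Rogaway-style definition quoted earlier in the thread. The four-round Feistel "cipher" is an invertible stand-in permutation for demonstration only, not a secure block cipher.

```python
import hmac, hashlib, os

BLOCK = 16  # bytes per block

def _round(key, i, half):
    # HMAC-SHA256 as the Feistel round function, truncated to a half-block.
    return hmac.new(key, bytes([i]) + half, hashlib.sha256).digest()[:BLOCK // 2]

def toy_encrypt_block(key, block):
    # Four-round Feistel network: invertible, but NOT a secure cipher.
    l, r = block[:BLOCK // 2], block[BLOCK // 2:]
    for i in range(4):
        l, r = r, bytes(a ^ b for a, b in zip(l, _round(key, i, r)))
    return l + r

def toy_decrypt_block(key, block):
    l, r = block[:BLOCK // 2], block[BLOCK // 2:]
    for i in reversed(range(4)):
        l, r = bytes(a ^ b for a, b in zip(r, _round(key, i, l))), l
    return l + r

def cbc_encrypt(key, iv, plaintext):
    assert len(plaintext) % BLOCK == 0
    prev = toy_encrypt_block(key, iv)   # chain from E(IV), not the raw IV
    out = b""
    for i in range(0, len(plaintext), BLOCK):
        prev = toy_encrypt_block(key, bytes(
            a ^ b for a, b in zip(plaintext[i:i + BLOCK], prev)))
        out += prev
    return out

def cbc_decrypt(key, iv, ciphertext):
    prev = toy_encrypt_block(key, iv)
    out = b""
    for i in range(0, len(ciphertext), BLOCK):
        cur = ciphertext[i:i + BLOCK]
        out += bytes(a ^ b for a, b in zip(toy_decrypt_block(key, cur), prev))
        prev = cur
    return out

key, iv = os.urandom(16), os.urandom(16)
msg = b"GET / HTTP/1.1\r\n" * 2
assert cbc_decrypt(key, iv, cbc_encrypt(key, iv, msg)) == msg
```

With the IV never chained in raw, the first ciphertext block is E(P_1 XOR E(IV)), so a predictable first plaintext block no longer hands an eavesdropper a free plaintext/ciphertext pair.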

All of this is secondary to securing the key exchange, of course. That
part is much more scary because NSA's math skills are scary. In my
opinion, it is virtually certain NSA knows something about integer
factoring and/or integer discrete log and/or elliptic curves that we do
not. So I would build in some margin.  I would start with 3072 bits for
RSA/DH and 384 bits for ECC and only go up from there...

 - Nemo


Re: [Cryptography] Why prefer symmetric crypto over public key crypto?

2013-09-11 Thread zooko
I agree that randomness-reuse is a major issue. Recently about 55 Bitcoin were
stolen by exploiting this, for example:

http://emboss.github.io/blog/2013/08/21/openssl-prng-is-not-really-fork-safe/

However, it is quite straightforward to make yourself safe from re-used nonces
in (EC)DSA, like this:

https://github.com/trezor/python-ecdsa/commit/8efb52fad5025ae87b649ff78faa9f8076768065

Whenever the public-key crypto spec says that you have to come up with a random
number, don't do it! Instead of just pulling a random number from your PRNG,
mix the message into your PRNG to generate a random number which will therefore
be unique to this message.

Note that you don't have to get anyone else's cooperation in order to do this
-- interoperating implementations can't tell how you chose your "random"
number, so they can't complain if you do it this way.

Wei Dai's Crypto++ library has done this for ages, for *all* nonces generated
in the course of public-key operations.

DJB's Ed25519 takes this one step further, and makes the nonce determined
*solely* by the message and the secret key, avoiding the PRNG part altogether:

http://ed25519.cr.yp.to/papers.html

In my opinion, that's the way to go. It applies equally well to (EC)DSA, and
still enjoys the above-mentioned interoperability.

There is now a standard for this fully-deterministic approach in the works,
edited by Thomas Pornin: https://tools.ietf.org/html/rfc6979 .
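The core of the deterministic approach fits in a few lines. This is a simplified sketch of the idea, not the bit-exact RFC 6979 procedure; q below is intended to be the P-256 group order, given here as an assumed constant.

```python
import hmac, hashlib

# Deterministic nonce derivation: the nonce is a function of the secret
# key and the message, so it can never repeat across distinct messages
# and never depends on a possibly-broken PRNG at signing time.
def deterministic_nonce(secret_key: bytes, message: bytes, q: int) -> int:
    h = hashlib.sha256(message).digest()
    k = hmac.new(secret_key, h, hashlib.sha256).digest()
    return (int.from_bytes(k, "big") % (q - 1)) + 1   # force 1 <= k < q

# Assumed: the order of the P-256 group.
q = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

k1 = deterministic_nonce(b"\x01" * 32, b"message one", q)
k2 = deterministic_nonce(b"\x01" * 32, b"message one", q)
k3 = deterministic_nonce(b"\x01" * 32, b"message two", q)
assert k1 == k2            # repeatable: no RNG to fail at signing time
assert k1 != k3            # but unique per message
```

As Zooko notes, a verifier cannot tell how k was chosen, so this is a drop-in hardening with no interoperability cost.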

Therefore, Ed25519 or RFC-6979-enhanced (EC)DSA is actually safer than RSA-PSS
is with regard to this issue.

Regards,

Zooko


Re: [Cryptography] About those fingerprints ...

2013-09-11 Thread Salz, Rich
> Yesterday, Apple made the bold, unaudited claim that it will never save the 
> fingerprint data outside of the A7 chip.
> Why should we trust Cook & Co.?

I'm not sure it matters.  If I want your fingerprint, I'll lift it off your 
phone.

--  
Principal Security Engineer
Akamai Technology
Cambridge, MA





Re: [Cryptography] About those fingerprints ...

2013-09-11 Thread Jerry Leichter
On Sep 11, 2013, at 9:16 AM, "Andrew W. Donoho"  wrote:
> Yesterday, Apple made the bold, unaudited claim that it will never save the 
> fingerprint data outside of the A7 chip.
By announcing it publicly, they put themselves on the line for lawsuits and 
regulatory actions all over the world if they've lied.

Realistically, what would you audit?  All the hardware?  All the software, 
including all subsequent versions?

This is about as strong an assurance as you could get from anything short of 
hardware and software you build yourself from very simple parts.

> Why should we trust Cook & Co.? They are subject to the laws of the land and 
> will properly respond to lawful subpoenas. What are they doing to ensure the 
> user's confidence that they cannot spread my fingerprint data to the cloud?
Apparently not enough to give *you* confidence.  But concerned as I am with 
recent revelations, it doesn't particularly concern *me* nearly as much as many 
other attack modalities.

> These questions also apply to things like keychain storage. Who has audited 
> in a public fashion that Apple actually keeps keychains secure?
There's been some very limited auditing by outsiders.  I found one paper a 
while back that teased apart the format of the file and figured out how the 
encryption worked.  It appeared to be secure (if perhaps overly complicated), 
but damned if I can find the paper again.  (Searching these days turns up tons 
of articles that center on the fact that when a keychain is unlocked, you 
can read its contents.  The vulnerability issues are subtle, but they only 
apply at all if you're on the same machine as the unlocked keychain.)

It would be a nice thing if Apple described the algorithms used to encrypt 
keychains.  Perhaps this is the time to push them - and others - to be much 
more open about their security technologies.  Apple seems to be making a point 
of *selling* on the basis of those technologies, so may be particularly 
willing/vulnerable on this front.

> How do we know whether Apple has perverted under secret court order the 
> common crypto and other libraries in every phone and iPad?...
You don't.

Then again, you don't know if Intel has been forced to include something in its 
chips that allows someone with appropriate knowledge to download and run 
privileged code on your machine.  All modern Intel server chips include a 
special management mode exactly to allow remote control over servers in a large 
datacenter, regardless of how screwed up the software, including the OS 
software, on them gets.  Who's to say there isn't some other way to get into 
that code?

Who you choose to trust and how much is ultimately your call.  There are no 
answers to your questions.
-- Jerry



[Cryptography] ADMIN: Please pick appropriate Subject lines...

2013-09-11 Thread Perry E. Metzger
A quick note: many recent postings with very useful content have gone
out with entirely inappropriate Subject: lines because of threads
shifting topics. Always look at your Subject: line and ask yourself
if it should be updated.

(And thank all of you for not top posting. It is appreciated.)

Perry
-- 
Perry E. Metzger  pe...@piermont.com


Re: [Cryptography] Evaluating draft-agl-tls-chacha20poly1305

2013-09-11 Thread William Allen Simpson

On 9/11/13 10:27 AM, Adam Langley wrote:

[attempt two, because I bounced off the mailing list the first time.]

On Tue, Sep 10, 2013 at 9:35 PM, William Allen Simpson
 wrote:

Why generate the ICV key this way, instead of using a longer key blob
from TLS and dividing it?  Is there a related-key attack?


The keying material from the TLS handshake is per-session information.
However, for a polynomial MAC, a unique key is needed per-record and
must be secret.


Thanks, this part I knew, although it would be good explanatory text to
add to the draft.

I meant a related-key attack against the MAC-key generated by TLS?

Thereby causing you to discard it and not key the ICV with it?


Using stream cipher output as MAC key material is a
trick taken from [1], although it is likely to have more history than
that. (As another example, UMAC/VMAC runs AES-CTR with a separate key
to generate the per-record keys, as did Poly1305 in its original
paper.)


Oh sure.  We used hashes long ago.  Using AES is insane, but then
UMAC is -- to be kind -- not very efficient.

My old formulation from CBCS was developed during the old IPsec
discussions.  It's just simpler and faster to xor the per-packet counter
with the MAC-key than using the ChaCha cipher itself to generate
per-packet key expansion.

I was simply wondering about the rationale for doing it yourself.  And
worrying a little about the extra overhead on back-to-back packets.



If AEAD, aren't the ICV and cipher text generated in parallel?  So how do
you check the ICV first, then decipher?


The Poly1305 key (ICV in your terms?) is taken from a prefix of the
ChaCha20 stream output. Thus the decryption proceeds as:

1) Generate one block of ChaCha20 keystream and use the first 32 bytes
as a Poly1305 key.
2) Feed Poly1305 the additional data and ciphertext, with the length
prefixing as described in the draft.
3) Verify that the Poly1305 authenticator matches the value in the
received record. If not, the record can be rejected immediately.
4) Run ChaCha20, starting with a counter value of one, to decrypt the
ciphertext.
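The four steps can be mirrored in a runnable toy. SHA-256 in counter mode and HMAC stand in for ChaCha20 and Poly1305; this demonstrates only the structure (keystream block zero keys the MAC, the tag is checked before any decryption, and payload encryption starts at counter one), not the real draft's ciphers or encodings.

```python
import hmac, hashlib

def keystream_block(key, nonce, counter):
    # Toy 32-byte keystream block; a real AEAD would use ChaCha20 here.
    return hashlib.sha256(key + nonce + counter.to_bytes(8, "little")).digest()

def xor_keystream(key, nonce, data, first_counter):
    out = bytearray()
    for i, byte in enumerate(data):
        blk = keystream_block(key, nonce, first_counter + i // 32)
        out.append(byte ^ blk[i % 32])
    return bytes(out)

def seal(key, nonce, aad, plaintext):
    mac_key = keystream_block(key, nonce, 0)          # step 1: block 0 keys MAC
    ct = xor_keystream(key, nonce, plaintext, 1)      # encrypt from counter 1
    tag = hmac.new(mac_key, aad + ct, hashlib.sha256).digest()
    return ct, tag

def open_(key, nonce, aad, ct, tag):
    mac_key = keystream_block(key, nonce, 0)          # step 1
    expect = hmac.new(mac_key, aad + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expect, tag):          # steps 2-3: check first
        return None                                   # forgery never decrypted
    return xor_keystream(key, nonce, ct, 1)           # step 4

key, nonce = b"k" * 32, b"n" * 8
ct, tag = seal(key, nonce, b"header", b"attack at dawn")
assert open_(key, nonce, b"header", ct, tag) == b"attack at dawn"
```

Note the property William likes: a forged record is rejected after one MAC pass, before any decryption work is done.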


ICV = Integrity Check Value at the end of the packet.  So ICV-key.
Sometimes MAC-key.

Anyway, good explanation!  Please add it to the draft.



An alternative implementation is possible where ChaCha20 is run in one
go on a buffer that consists of 64 zeros followed by the ciphertext.
The advantage of this is that it may be faster because the ChaCha20
blocks can be pipelined. The disadvantage is that it may need memory
copies to setup the input buffer correctly. A moot advantage, in the
case of TLS, of the steps that I outlined is that forgeries are
rejected faster.


Depends on how swamped the processor.  I'm a big fan of rejecting
forgeries (and replay attacks) before decrypting.  Not everybody is
Google with unlimited processing power. ;)



Needs a bit more implementation details.  I assume there's an
implementation in the works.  (Always helps define things with
something concrete.)


I currently have Chrome talking to OpenSSL, although the code needs
cleanup of course.


Excellent



Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-11 Thread Chris Palmer
On Tue, Sep 10, 2013 at 2:04 PM, Joe Abley  wrote:

> As an aside, I see CAs with Chinese organisation names in my browser list.

I wouldn't pick on/fear/call out the Chinese specifically.

Also, be aware that browsers must transitively trust all the issuers
that the known trust anchors have issued issuing certificates for.
That's a much bigger set, and is not currently fully knowable.
(Another reason to crave Certificate Transparency.)


Re: [Cryptography] Evaluating draft-agl-tls-chacha20poly1305

2013-09-11 Thread William Allen Simpson

On 9/11/13 6:00 AM, Alexandre Anzala-Yamajako wrote:

ChaCha20 being a stream cipher, the only requirement we have on the ICV is
that it doesn't repeat, isn't it?


You mean IV, the Initialization Vector.  ICV is the Integrity Check Value,
usually 32-64 bits appended to the packet.  Each is separately keyed.



This means that if there's a problem with setting a 'mostly zeroed out' ICV for
ChaCha20, we shouldn't use it at all, period.


I strongly disagree.  In my network protocol security designs, I always
try to think about weaknesses in the implementation and potential future
attacks on the algorithm -- and try to strengthen the security margin.

For example, IP-MAC fills every available zero space with randomness,
while H-MAC (defined more than a year later) uses constants instead.
IP-MAC was proven stronger than H-MAC.

Sadly, in the usual standards committee-itis, "newer" is often assumed to
be "improved" and "better".  So H-MAC was adopted instead.  Of course, we
know that H-MAC was chosen by an NSA mole in the IETF, so I don't trust it.

Also, there's a certain silliness in formal cryptology that assumes we
shouldn't have longer randomness keying than the formal "strength" of the
algorithm.  That might have been true in the days of silk and cyanide,
where keying was a hard problem, but modern computing can generate lots of
longer nonces without much effort.

In reality, adding longer nonces may not improve the "strength" of the
algorithm itself, but it improves the margin against attack.  A nearly
practical attack of order 2**80 could be converted to an impractical
attack of order 2**96.



As far as your proposition is concerned, the performance penalty seems to 
largely depend on the target platform. Wouldn't using the same set of 
operations as Chacha prevent an unexpected performance drop in case of lots of 
short messages ?


I don't understand this part of your message.  My ancient CBCS
formulation that I'll probably use for PPP (Xor'ing a per-session key
with a per-packet unique value) is demonstrably much faster than using
ChaCha itself to do that same thing.

We've been using stream ciphers and pseudo-stream ciphers (made by
chaining MACs or chaining block ciphers) to create per-packet nonces
for as long as I can remember (over 20 years).  You'll see that in CHAP
and Photuris and CBCS.

So I'm not arguing with Adam's use of ChaCha for it.  It just bugs me
that we aren't filling in as much randomness as we could!



Re: [Cryptography] Availability of plaintext/ciphertext pairs (was Re: In the face of "cooperative" end-points, PFS doesn't help)

2013-09-11 Thread Raphael Jacquot

On Sep 10, 2013, at 6:43 PM, Nemo  wrote:
> 
> "GET / HTTP/1.1\r\n" is exactly 16 bytes, or one AES block. If the IV is
> sent in the clear -- which it is -- that is one plaintext-ciphertext
> pair right there for every HTTPS connection.
> 
> In fact, _any_ aligned 16 bytes of plaintext in the conversation that
> are known, or that are in a guessable range, represent a
> plaintext/ciphertext pair if either of the following are true:
> 
>1) You sent the IV in the clear
>2) You used CBC mode
> 
> Of the modes I know (CBC, CTR, GCM, et. al.), the only one that does not
> freely give up such plaintext/ciphertext pairs is OCB.

According to http://en.wikipedia.org/wiki/Padding_(cryptography) , most protocols
only talk about padding at the end of the cleartext before encryption.
Now, how about adding some random data at the beginning of the cleartext,
say 2.5 times the block size (40 bytes for the example above), before the
interesting text appears?

- Raphael


Re: [Cryptography] Introducing strangers. Was: Thoughts about keys

2013-09-11 Thread Guido Witmond
On 09/11/13 10:43, Eugen Leitl wrote:
> On Tue, Sep 10, 2013 at 09:01:49PM +0200, Guido Witmond wrote:
> 
>> My scheme does the opposite. It allows *total strangers* to
>> exchange keys securely over the internet.
> 
> With a FOAF routing scheme with just 3 degrees of separation there
> are not that many strangers left.

How do you meet people outside your circle of friends?

How do you stay anonymous? With FOAF, you need a single identity for it
to work. I offer people many different identities, but all of them are
protected, and all communication is encrypted.

That's what my protocol addresses. To introduce new people to one
another, securely. You might not know the person but you are sure that
your private message is encrypted and can only be read by that person.

Of course, as it's a stranger, you don't trust them with your secrets.

For example, to let people from this mailing list send encrypted mail to
each other, without worrying about the keys. The protocol has already
taken care of that. No fingerprint checking. No web of trust validation.


> If you add opportunistic encryption at a low transport layer, plus
> additional layers on top of you've protected the bulk of traffic.

I don't just want to encrypt the bulk, I want to encrypt everything, all
the time. It makes Tor traffic much more hidden.


There is more

The local CA (one for each website) signs both the server and client
certificates. The client only identifies itself to the server after it
has recognized the server certificate. This blocks phishing attempts to
web sites (only a small TOFU risk remains). And that can be mitigated
with a proper dose of Certificate Transparency.

Kind regards, Guido Witmond,


Please see the site for more details:
http://eccentric-authentication.org/





Re: [Cryptography] Random number generation influenced, HW RNG

2013-09-11 Thread Perry E. Metzger
On Wed, 11 Sep 2013 09:04:56 +1000 "James A. Donald"
 wrote:
> On 2013-09-10 4:30 PM, ianG wrote:
> > The question of whether one could simulate a raw physical source
> > is tantalising.  I see diverse opinions as to whether it is
> > plausible, and thinking about it, I'm on the fence.
> 
> Let us consider that source of colored noise with which we are most 
> familiar:  The human voice.  Efforts to realistically simulate a
> human voice have not been very successful.  The most successful
> approach has been the ransom note approach, merging together a lot
> of small clips of an actual human voice.
> 
> A software simulated raw physical noise source would have to run 
> hundreds of thousands of times faster.

I don't think this is true. Typically, the noise sources being used
in hardware RNGs are very simple physical processes like shot noise.
I think simulations of those are vastly simpler than simulations of
human voices. The mechanics of the vocal tract are extremely
complicated, while the equations describing the distribution of shot
noise and the like are dead simple.
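Perry's point is easy to demonstrate: a shot-noise-like source is just Poisson event counting, which a few lines of stdlib Python reproduce. This is a sketch; an attack-grade simulation would also model detector bandwidth and bias.

```python
import random

# Shot noise as Poisson counting: draw exponential inter-arrival times
# and count how many events land in each sampling window.
def shot_noise_samples(rate, window, n):
    samples = []
    for _ in range(n):
        t, count = 0.0, 0
        while True:
            t += random.expovariate(rate)   # exponential inter-arrival times
            if t > window:
                break
            count += 1
        samples.append(count)
    return samples

s = shot_noise_samples(rate=1000.0, window=0.01, n=2000)
mean = sum(s) / len(s)
# For Poisson counts, mean ~= rate * window (= 10 here), variance ~= mean.
assert 8.5 < mean < 11.5
```

A few lines like these, seeded from a short secret, could mimic the statistics a hardware RNG's health tests check for, which is why teardowns and raw-source access matter.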

That said, I think the obvious defense against this is in any case
hardware teardowns. My fear is that not enough of those happen, but
recent events may convince people that they are necessary.

Perry
-- 
Perry E. Metzger  pe...@piermont.com


Re: [Cryptography] SPDZ, a practical protocol for Multi-Party Computation

2013-09-11 Thread Max Kington
On 11 Sep 2013 18:01, "Eugen Leitl"  wrote:
>
>
>
http://www.mathbulletin.com/research/Breakthrough_in_cryptography_could_result_in_more_secure_computing.asp
>
> Breakthrough in cryptography could result in more secure computing
> (9/10/2013)
>
> Tags: computer science, research, security, cryptography
>
> Nigel Smart, Professor of Cryptology
>
> New research to be presented at the 18th European Symposium on Research in
> Computer Security (ESORICS 2013) this week could result in a sea change in
> how to secure computations.
>
> The collaborative work between the University of Bristol and Aarhus
> University (Denmark) will be presented by Bristol PhD student Peter Scholl
> from the Department of Computer Science.
>
> The paper, entitled 'Practical covertly secure MPC for dishonest majority -
> or: Breaking the SPDZ limits', builds upon earlier joint work between
> Bristol and Aarhus and fills in the missing pieces of the jigsaw from the
> group's prior work that was presented at the CRYPTO conference in Santa
> Barbara last year.
>
> The SPDZ protocol (pronounced "Speedz") is a co-development between Bristol
> and Aarhus and provides the fastest protocol known to implement a
> theoretical idea called "Multi-Party Computation".
>
> The idea behind Multi-Party Computation is that it should enable two or
> more people to compute any function of their choosing on their secret
> inputs, without revealing their inputs to the other party. One example is
> an election: voters want their vote to be counted but they do not want
> their vote made public.
>
> The protocol developed by the universities turns Multi-Party Computation
> from a theoretical tool into a practical reality. Using the SPDZ protocol
> the team can now compute complex functions in a secure manner, enabling
> possible applications in the finance, drugs and chemical industries where
> computation often needs to be performed on secret data.
>
> Nigel Smart, Professor of Cryptology in the University of Bristol's
> Department of Computer Science and leader on the project, said: "We have
> demonstrated our protocol to various groups and organisations across the
> world, and everyone is impressed by how fast we can actually perform
> secure computations.
>
> "Only a few years ago such a theoretical idea becoming reality was
> considered Alice in Wonderland style over-ambitious hope. However, we in
> Bristol realised around five years ago that a number of advances in
> different areas would enable the pipe dream to be achieved. It is great
> that we have been able to demonstrate our foresight was correct."
>
> The University of Bristol is now starting to consider commercialising the
> protocol via a company, Dyadic Security Limited, co-founded by Professor
> Smart and Professor Yehuda Lindell from Bar-Ilan University in Israel.

A colleague is looking into this venture. I gave him a synopsis of their
additions to SPDZ. There is a white paper describing their technology on
their website, which discusses two other related protocols, Yao and
Tiny-OT.

One interesting use that occurred to me was the ability to split the two
nodes in their implementation across jurisdictions, especially ones that
are unlikely to ever collaborate. That gives you an advantage over a
typical HSM, which lives in a single jurisdiction and could be seized.

The white paper and associated bibliography are available at
http://www.dyadicsec.com/SiteAssets/resources1/DyadicWhitePaper.pdf

Max

>
> Note: This story has been adapted from a news release issued by the
> University of Bristol
>

Re: [Cryptography] Evaluating draft-agl-tls-chacha20poly1305

2013-09-11 Thread Alexandre Anzala-Yamajako
2013/9/11 William Allen Simpson 

> It bugs me that so many of the input words are mostly zero.  Using the
> TLS Sequence Number for the nonce is certainly going to be mostly zero
> bits.  And the block counter is almost all zero bits, as you note,
>
>(In the case of the TLS, limits on the plaintext size mean that the
>first counter word will never overflow in practice.)
>
> [...]
>


> In my PPP ChaCha variant of this that I started several months ago, the
> nonce input words were replaced with my usual CBCS formulation.  That is,
>invert the lower 32-bits of the sequence number,
>xor with the upper 32-bits,
>add (mod 2**64) both with a 64-bit secret IV,
>count the bits, and
>variably rotate.
> [...]
>

ChaCha20 being a stream cipher, the only requirement we have on the ICV is
that it doesn't repeat, isn't it?
This means that if there's a problem with setting a 'mostly zeroed out' ICV
for ChaCha20, we shouldn't use it at all, period.
As far as your proposal is concerned, the performance penalty seems to
largely depend on the target platform. Wouldn't using the same set of
operations as ChaCha prevent an unexpected performance drop in the case of
lots of short messages?
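The point that a *repeated* ICV/nonce is the one fatal condition for any stream cipher can be shown in a few lines. Here SHAKE-128 stands in for the keystream generator purely for illustration (it is not ChaCha20; any stream cipher keyed the same way behaves identically):

```python
import hashlib

def keystream(key, nonce, length):
    # SHAKE-128 as a stand-in keystream generator (illustrative only)
    return hashlib.shake_128(key + nonce).digest(length)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"k" * 32, b"n" * 8
p1, p2 = b"attack at dawn!!", b"retreat at dusk!"
c1 = xor(p1, keystream(key, nonce, len(p1)))
c2 = xor(p2, keystream(key, nonce, len(p2)))   # same nonce reused: fatal

# Without knowing the key, an eavesdropper recovers the XOR of the two
# plaintexts, since the identical keystream cancels out:
assert xor(c1, c2) == xor(p1, p2)
```

This is why the *structure* of the nonce (mostly zero bits or not) is irrelevant so long as uniqueness under a given key is guaranteed, which is Adam's argument for using the TLS sequence number.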

Cheers

Re: [Cryptography] Thoughts about keys

2013-09-11 Thread Eugen Leitl
On Tue, Sep 10, 2013 at 09:01:49PM +0200, Guido Witmond wrote:

> My scheme does the opposite. It allows *total strangers* to exchange
> keys securely over the internet.

With a FOAF routing scheme with just 3 degrees of separation,
there are not that many strangers left.

If you add opportunistic encryption at a low transport
layer, plus additional layers on top of it, you've protected
the bulk of traffic.



Re: [Cryptography] People should turn on PFS in TLS (was Re: Fwd: NYTimes.com: N.S.A. Foils Much Internet Encryption)

2013-09-11 Thread Bill Stewart

At 11:33 AM 9/6/2013, Peter Fairbrother wrote:
However, while the case for forward secrecy is easy to make,
implementing it may be a little dangerous - if NSA have broken ECDH then
using it only gives them plaintext they maybe didn't have before.


I thought the normal operating mode for PFS is that there's an initial
session key exchange (typically RSA) and authentication, which is used
to set up an encrypted session, and within that session there's a DH or
ECDH key exchange to set up an ephemeral session key, and then that
session key is used for the rest of the session.
If so, even if the NSA has broken ECDH, they presumably need to see
both Alice and Bob's keyparts to use their break, which they can only
do if they've cracked the outer session (possibly after the fact).
So you're not going to leak any additional plaintext by doing ECDH
compared to sending the same plaintext without it.
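The layering Bill describes, an ephemeral key agreement run inside an already-authenticated session, rests on both sides contributing fresh secrets that are discarded afterwards. A toy finite-field Diffie-Hellman sketch of that inner exchange follows; the group parameters are deliberately small stand-ins for illustration, not a secure choice (real deployments use vetted groups or elliptic curves):

```python
import secrets

# Toy group: NOT secure.  2**127 - 1 is a Mersenne prime and generator 3 is
# arbitrary; both are illustrative assumptions only.
P = 2 ** 127 - 1
G = 3

a = secrets.randbelow(P - 2) + 2        # Alice's ephemeral secret
b = secrets.randbelow(P - 2) + 2        # Bob's ephemeral secret
A = pow(G, a, P)                        # Alice's keypart, sent inside the session
B = pow(G, b, P)                        # Bob's keypart

# Both sides derive the same ephemeral session key.  An eavesdropper on the
# outer session sees A and B, but (absent a DH break) not the shared secret,
# and once a and b are erased the session key cannot be reconstructed later.
shared_alice = pow(B, a, P)
shared_bob = pow(A, b, P)
assert shared_alice == shared_bob
```

This also illustrates Bill's point: an attacker who has broken the exchange still needs to observe both keyparts, which here travel only inside the outer encrypted session.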


One point which has been mentioned, but perhaps not emphasised
enough - if NSA have a secret backdoor into the main NIST ECC
curves, then even if the fact of the backdoor was exposed - the
method is pretty well known - without the secret constants no-one
_else_ could break ECC.
So NSA could advocate the widespread use of ECC while still
fulfilling their mission of protecting US gubbmint communications
from enemies foreign and domestic. Just not from themselves.


Yep.  It's definitely the fun kind of backdoor to use.



[Cryptography] About those fingerprints ...

2013-09-11 Thread Andrew W. Donoho
Gentlefolk,



Fingerprint scanners have shipped on laptops and phones for years.

Yesterday, Apple made the bold, unaudited claim that it will never save 
the fingerprint data outside of the A7 chip.

Why should we trust Cook & Co.? They are subject to the laws of the 
land and will properly respond to lawful subpoenas. What are they doing to 
ensure the user's confidence that they cannot spread my fingerprint data to the 
cloud? (POI frequently have fingerprints on file. Finding out which phone is 
used by whom when you have fingerprint data is a Big Data query away.)

These questions also apply to things like keychain storage. Who has 
audited in a public fashion that Apple actually keeps keychains secure? How do 
we know whether Apple has perverted under secret court order the common crypto 
and other libraries in every phone and iPad? iOS 7 supports keychain storage in 
iCloud. Why should we trust Apple to keep our keys safe there? Where is the 
audit of their claims?

Why should we trust Cook & Co. without verifying their claims? 

IOW, where is the culture of public audit around security? Why did we 
ever trust the Canadian company RIM with our email without a public audit? Why 
do we trust Apple, MS, Google and others?

The culture of secrecy around the security stack inside popular OSes 
needs to stop. (I am proposing "after the fact" audits of shipping OSes. They 
should never be an impediment to any organization shipping software in a timely 
fashion.) Sunlight on the libraries being used is the best disinfectant for 
security concerns.

President Reagan had it right: "Trust but verify." Why should we trust 
Apple? Because their executives said so in a video? We need something stronger.



Anon,
Andrew

P.S. All you Android fanboys know how to globally replace Apple above with
Google/Samsung.


Andrew W. Donoho
Donoho Design Group, L.L.C.
a...@ddg.com, +1 (512) 750-7596, twitter.com/adonoho

Download Retweever here: 

No risk, no art.
No art, no reward.
-- Seth Godin




Re: [Cryptography] Evaluating draft-agl-tls-chacha20poly1305

2013-09-11 Thread Adam Langley
[attempt two, because I bounced off the mailing list the first time.]

On Tue, Sep 10, 2013 at 9:35 PM, William Allen Simpson wrote:
>ChaCha20 is run with the given key and nonce and with the two counter
>words set to zero.  The first 32 bytes of the 64 byte output are
>saved to become the one-time key for Poly1305.  The remainder of the
>output is discarded.
>
> Why generate the ICV key this way, instead of using a longer key blob
> from TLS and dividing it?  Is there a related-key attack?

The keying material from the TLS handshake is per-session information.
However, for a polynomial MAC, a unique key is needed per-record and
must be secret. Using stream cipher output as MAC key material is a
trick taken from [1], although it is likely to have more history than
that. (As another example, UMAC/VMAC runs AES-CTR with a separate key
to generate the per-record keys, as did Poly1305 in its original
paper.)

>Authenticated decryption is largely the reverse of the encryption
>process: the Poly1305 key is generated and the authentication tag
>calculated.  The calculated tag is compared against the final 16
>bytes of the authenticated ciphertext in constant time.  If they
>match, the remaining ciphertext is decrypted to produce the
>plaintext.
>
> If AEAD, aren't the ICV and cipher text generated in parallel?  So how do
> you check the ICV first, then decipher?

The Poly1305 key (ICV in your terms?) is taken from a prefix of the
ChaCha20 stream output. Thus the decryption proceeds as:

1) Generate one block of ChaCha20 keystream and use the first 32 bytes
as a Poly1305 key.
2) Feed Poly1305 the additional data and ciphertext, with the length
prefixing as described in the draft.
3) Verify that the Poly1305 authenticator matches the value in the
received record. If not, the record can be rejected immediately.
4) Run ChaCha20, starting with a counter value of one, to decrypt the
ciphertext.
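The four steps above can be sketched with a self-contained implementation of the original (64-bit nonce, 64-bit counter) ChaCha20 block function. This is an illustrative sketch of the key-derivation ordering, not the draft's normative code; a real implementation would feed `poly1305_key` into Poly1305 and compare tags in constant time (e.g. `hmac.compare_digest`):

```python
import struct

def _rotl32(v, c):
    return ((v << c) & 0xffffffff) | (v >> (32 - c))

def _quarter(s, a, b, c, d):
    # The ChaCha quarter round: rotation distances 16, 12, 8, 7.
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] = _rotl32(s[d] ^ s[a], 16)
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] = _rotl32(s[b] ^ s[c], 12)
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] = _rotl32(s[d] ^ s[a], 8)
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] = _rotl32(s[b] ^ s[c], 7)

def chacha20_block(key, counter, nonce):
    """One 64-byte keystream block: 32-byte key, 64-bit counter, 8-byte nonce."""
    state = (list(struct.unpack("<4I", b"expand 32-byte k"))
             + list(struct.unpack("<8I", key))
             + [counter & 0xffffffff, (counter >> 32) & 0xffffffff]
             + list(struct.unpack("<2I", nonce)))
    w = state[:]
    for _ in range(10):  # 20 rounds = 10 double rounds (column + diagonal)
        _quarter(w, 0, 4, 8, 12); _quarter(w, 1, 5, 9, 13)
        _quarter(w, 2, 6, 10, 14); _quarter(w, 3, 7, 11, 15)
        _quarter(w, 0, 5, 10, 15); _quarter(w, 1, 6, 11, 12)
        _quarter(w, 2, 7, 8, 13); _quarter(w, 3, 4, 9, 14)
    return struct.pack("<16I", *((x + y) & 0xffffffff for x, y in zip(w, state)))

key = bytes(32)            # per-session key from the TLS handshake (zeros here)
nonce = bytes(8)           # the TLS sequence number in the draft

block0 = chacha20_block(key, 0, nonce)
poly1305_key = block0[:32]             # step 1: per-record one-time MAC key
keystream = chacha20_block(key, 1, nonce)  # step 4: decryption starts at counter 1
```

Note how the Poly1305 key comes from counter 0 and the plaintext keystream starts at counter 1, so the MAC can be checked (step 3) before any keystream for the payload is generated.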

An alternative implementation is possible where ChaCha20 is run in one
go on a buffer that consists of 64 zeros followed by the ciphertext.
The advantage of this is that it may be faster because the ChaCha20
blocks can be pipelined. The disadvantage is that it may need memory
copies to setup the input buffer correctly. A moot advantage, in the
case of TLS, of the steps that I outlined is that forgeries are
rejected faster.

> Needs a bit more implementation details.  I assume there's an
> implementation in the works.  (Always helps define things with
> something concrete.)

I currently have Chrome talking to OpenSSL, although the code needs
cleanup of course.

[1] http://cr.yp.to/highspeed/naclcrypto-20090310.pdf


Cheers

AGL


Re: [Cryptography] Evaluating draft-agl-tls-chacha20poly1305

2013-09-11 Thread Adam Langley
On Tue, Sep 10, 2013 at 10:59 PM, William Allen Simpson wrote:
> I suggest:
>
>ChaCha20 is run with the given key and sequence number nonce and with
>
>the two counter words set to zero.  The first 32 bytes of the 64 byte
>output are saved to become the one-time key for Poly1305.  The next 8
>bytes of the output are saved to become the per-record input nonce
>for this ChaCha20 TLS record.
>
> Or you could use 16 bytes, and cover all the input fields  There's no
> reason the counter part has to start at 1.
>
> Of course, this depends on not having a related-key attack, as mentioned
> in my previous messages

It is the case that most of the bottom row bits will be zero. However,
ChaCha20 is assumed to be secure at a 256-bit security level when used
as designed, with the bottom row being counters. If ChaCha/Salsa were
not secure in this formulation then I think they would have to be
abandoned completely.

Nobody worries that AES-CTR is weak when the counter starts at zero, right?

Taking 8 bytes from the initial block and using it as the nonce for
the plaintext encryption would mean that there would be a ~50% chance
of a collision after 2^32 blocks. This issue affects AES-GCM, which is
why the sequence number is used here.
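The "~50% after 2^32 blocks" figure is the birthday bound for random 64-bit nonces; the standard approximation puts the exact value at about 0.39, i.e. roughly even odds. A quick check:

```python
import math

def birthday_collision_prob(n, bits):
    # Approximate probability that n uniformly random `bits`-bit values
    # contain at least one repeat: 1 - exp(-n(n-1) / 2^(bits+1)).
    return 1.0 - math.exp(-n * (n - 1) / 2.0 ** (bits + 1))

# 2^32 records, each drawing a random 64-bit nonce from the initial block:
p = birthday_collision_prob(2 ** 32, 64)   # about 0.39
```

A deterministic counter (the TLS sequence number) sidesteps this entirely, since it never repeats within a connection.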

Using 16 bytes from the initial block as the full bottom row would
work, but it still assumes that we're working around a broken cipher
and it prohibits implementations which pipeline all the ChaCha blocks,
including the initial one. That may be usefully faster, although it's
not the implementation path that I've taken so far.

There is an alternative formulation of Salsa/ChaCha that is designed
for random nonces, rather than counters: XSalsa/XChaCha. However,
since we have a sequence number already in TLS I've not used it.


Cheers

AGL