Re: [Cryptography] funding Tor development

2013-10-17 Thread Dave Howe
On 14/10/2013 14:36, Eugen Leitl wrote:
> Guys, in order to minimize the Tor Project's dependence on
> federal funding and/or increase what they can do, it
> would be great to have some additional funding, ~10 kUSD/month.
I would say what is needed is not one source at $10K/month but 10K
sources at $1/month.

A single source of funding is *always* a single source of control.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] please dont weaken pre-image resistance of SHA3 (Re: NIST about to weaken SHA3?)

2013-10-14 Thread Adam Back

On Tue, Oct 01, 2013 at 12:47:56PM -0400, John Kelsey wrote:

> The actual technical question is whether an across the board 128 bit
> security level is sufficient for a hash function with a 256 bit output.
> This weakens the proposed SHA3-256 relative to SHA256 in preimage
> resistance, where SHA256 is expected to provide 256 bits of preimage
> resistance.  If you think that 256 bit hash functions (which are normally
> used to achieve a 128 bit security level) should guarantee 256 bits of
> preimage resistance, then you should oppose the plan to reduce the
> capacity to 256 bits.


I think hash functions clearly should try to offer full (256-bit) preimage
security, not dumb it down to match 128-bit birthday collision resistance.

All other common hash functions have aimed for full preimage security, so
varying an otherwise standard assumption will lead to design confusion.  It
will probably interact badly with many existing KDF, MAC, and Merkle-tree
designs, combined cipher+integrity modes, and hashcash (partial preimage as
used in bitcoin as a proof of work): generic designs that use a hash as a
building block and assume it has full-length pre-image protection.  Maybe
some of those generic designs survive because they compose multiple
iterations, e.g. HMAC, but why create the work and risk of having to analyse
them all, remove them from implementations, or mark them as safe for all
hashes except SHA3 as an exception.

If MD5 had 64-bit preimage resistance, we'd be looking at preimages right now
being expensive but computable.  Bitcoin is pushing a 60-bit hashcash-sha256
partial preimage every 10 mins (1.7 petahash/sec network hashrate).
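The hashcash construction mentioned here (and reused in bitcoin) treats a k-bit partial preimage as proof of roughly 2^k hashing work.  A minimal sketch, with a tiny difficulty so it runs instantly:

```python
import hashlib

def mint(data: bytes, bits: int) -> int:
    """Search for a nonce such that SHA-256(data || nonce) has `bits`
    leading zero bits -- expected cost ~2**bits hash evaluations."""
    threshold = 1 << (256 - bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < threshold:
            return nonce
        nonce += 1

def check(data: bytes, nonce: int, bits: int) -> bool:
    digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

nonce = mint(b"example block header", 12)   # ~4096 tries on average
assert check(b"example block header", nonce, 12)
```

The 60-bit figure above checks out: 1.7e15 hashes/sec for 600 seconds is about 1.02e18 ≈ 2^59.8 evaluations per 10-minute block.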

Now obviously 128 bits is another scale, but MD5 is old and broken, and there
may be partial weakenings along the way.  E.g. say the design aim of 128
slips towards 80 (in another couple of decades of computing progress).  Why
design in a problem for the future when we KNOW, and just spent a huge thread
on this list discussing, that it's very hard to remove or upgrade algorithms
once deployed.  Even MD5 is still in the field.

Is there a clear work-around proposed for when you do need 256?  (Some
composition mode or parameter tweak as part of the spec?)  And generally,
where does one go to add one's vote to the protest against weakening the
2nd-preimage property?

Adam


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-14 Thread Nicolas Rachinsky
* John Denker j...@av8n.com [2013-10-10 17:13 -0700]:
> *) Each server should publish a public key for /dev/null so that
>    users can send cover traffic upstream to the server, without
>    worrying that it might waste downstream bandwidth.
>
>    This is crucial for deniability:  If the rubber-hose guy accuses
>    me of replying to ABC during the XYZ crisis, I can just shrug and
>    say it was cover traffic.

If the server deletes cover traffic, the NSA just needs to subscribe to the
list.  Then any message you sent that was not delivered via the list is
revealed as cover traffic.

Nicolas

-- 
http://www.rachinsky.de/nicolas


Re: [Cryptography] please dont weaken pre-image resistance of SHA3 (Re: NIST about to weaken SHA3?)

2013-10-14 Thread John Kelsey
Adam,

I guess I should preface this by saying I am speaking only for myself.  That's 
always true here--it's why I'm using my personal email address.  But in 
particular, right now, I'm not *allowed* to work.  But just speaking my own 
personal take on things

We got pretty *overwhelming* feedback in this direction in the last three
weeks.  (For the previous several months, we got almost no feedback about it
at all, despite giving presentations and posting stuff on the hash forum
about our plans.)  But since we're shut down right now, we can't actually
make any decisions or changes.  This is really frustrating on all kinds of
levels.

Personally, I have looked at the technical arguments against the change and I 
don't really find any of them very convincing, for reasons I described at some 
length on the hash forum list, and that the Keccak designers also laid out in 
their post.  The core of that is that an attacker who can't do 2^{128} work 
can't do anything at all to SHA3 with a 256 bit capacity that he couldn't also 
do to SHA3 with a 512 bit capacity, including finding preimages.  

But there's pretty much zero chance that we're going to put a standard out that 
most of the crypto community is uncomfortable with.  The normal process for a 
FIPS is that we would put out a draft and get 60 or 90 days of public comments. 
 As long as this issue is on the table, it's pretty obvious what the public 
comments would all be about.  

The place to go for current comments, if you think more are necessary, is the 
hash forum list.  The mailing list is still working, but I think both the 
archives and the process of being added to the list are frozen thanks to the 
shutdown.  I haven't looked at the hash forum since we shut down, so when we 
get back there will be a flood of comments there.  The last I saw, the Keccak 
designers had their own proposal for changing what we put into the FIPS, but I 
don't know what people think about their proposal. 

--John, definitely speaking only for myself


Re: [Cryptography] Broken RNG renders gov't-issued smartcards easily hackable.

2013-10-14 Thread Jerry Leichter
On Oct 13, 2013, at 1:04 PM, Ray Dillinger wrote:
> This is despite meeting (for some inscrutable definition of meeting)
> FIPS 140-2 Level 2 and Common Criteria standards.  These standards
> require steps that were clearly not done here.  Yet, validation
> certificates were issued.
>
> This is a misunderstanding of the CC certification and FIPS validation
> processes:
>
> the certificates were issued *under the condition* that the software/system
> built on it uses/implements the RNG tests mandated. The software didn't,
> invalidating the results of the certifications.
>
> Either way, it boils down to tests were supposed to be done or conditions
> were supposed to be met, and producing the darn cards with those
> certifications asserted amounts to stating outright that they were, and yet
> they were not.
>
> All you're saying here is that the certifying agencies are not the ones
> stating outright that the tests were done.
How could they?  The certification has to stop at some point; it can't trace
the systems all the way to end users.  What was certified was a box that
would work a certain way given certain conditions.  The box was used in a
different way.  Why is it surprising that the certification was useless?
Let's consider a simple encryption box:  Key goes in top, cleartext goes in
left, ciphertext comes out right.  There's an implicit assumption that you
don't simply discard the ciphertext and send the plaintext on to the next
subsystem in line.  No certification can possibly check that; or that, say,
you don't post all your keys on your website immediately after generating
them.

> I can accept that, but it does
> not change the situation or result, except perhaps in terms of the placement
> of blame. I *still* hope they bill the people responsible for doing the tests
> on the first generation of cards for the cost of their replacement.
That depends on what they were supposed to test, and whether they did test
that correctly.  A FIPS/Common Criteria certification is handed a box
implementing the protocol and a whole bunch of paperwork describing how it's
designed, how it works internally, and how it's intended to be used.  If it
passes, what passes is the exact design certified, used as described.  There
are way too many possible systems built out of certified modules for it to be
reasonable to expect the certification to encompass them all.

I will remark that, having been involved in one certification effort, I think 
they offer little, especially for software - they get at some reasonable issues 
for hardware designs.  Still, we don't currently have much of anything better.  
Hundreds of eyeballs may have been on the Linux code, but we still ended up 
fielding a system with a completely crippled RNG and not noticing for months.  
Still, if you expect the impossible from a process, you make any improvement 
impossible.  Formal verification, where possible, can be very powerful - but it 
will also have to focus on some well-defined subsystem, and all the effort will 
be wasted if the subsystem is used in a way that doesn't meet the necessary 
constraints.
-- Jerry



Re: [Cryptography] please dont weaken pre-image resistance of SHA3 (Re: NIST about to weaken SHA3?)

2013-10-14 Thread ianG

On 14/10/13 17:51 PM, Adam Back wrote:

> On Tue, Oct 01, 2013 at 12:47:56PM -0400, John Kelsey wrote:
>> The actual technical question is whether an across the board 128 bit
>> security level is sufficient for a hash function with a 256 bit
>> output. This weakens the proposed SHA3-256 relative to SHA256 in preimage
>> resistance, where SHA256 is expected to provide 256 bits of preimage
>> resistance.  If you think that 256 bit hash functions (which are normally
>> used to achieve a 128 bit security level) should guarantee 256 bits of
>> preimage resistance, then you should oppose the plan to reduce the
>> capacity to 256 bits.
>
> I think hash functions clearly should try to offer full (256-bit) preimage
> security, not dumb it down to match 128-bit birthday collision resistance.
>
> All other common hash functions have aimed for full preimage security, so
> varying an otherwise standard assumption will lead to design confusion.
> It will probably interact badly with many existing KDF, MAC, and
> Merkle-tree designs, combined cipher+integrity modes, and hashcash
> (partial preimage as used in bitcoin as a proof of work): generic designs
> that use a hash as a building block and assume it has full-length
> pre-image protection.  Maybe some of those generic designs survive
> because they compose multiple iterations, e.g. HMAC, but why create the
> work and risk of having to analyse them all, remove them from
> implementations, or mark them as safe for all hashes except SHA3 as an
> exception.



I tend to look at it differently.  There are ephemeral uses and there
are long-term uses.  For ephemeral uses (like HMACs), 128-bit
protection is fine.

For long-term uses, one should not sign (hash) what the other side
presents (put in a nonce), and one should always keep what is signed
around (or otherwise neuter a hash failure).  Etc.  Either way, one
wants a bit longer protection here for the long-term hash.


That 'time' axis is how I look at it.  Simplistic or simple?

Alternatively, there is the hash cryptographer's outlook, which tends to 
differentiate collisions, preimages, 2nd preimages and lookbacks.


From my perspective the simpler statement of SHA3-256 having 128 bit 
protection across the board is interesting, perhaps it is OK?




> If MD5 had 64-bit preimage, we'd be looking at preimages right now being
> expensive but computable.  Bitcoin is pushing 60bit hashcash-sha256 preimage
> every 10mins (1.7petaHash/sec network hashrate).



I might be able to differentiate the preimage / collision / 2nd-preimage
stuff here if I thought about it for a long time ... but even if I could, I
would have no confidence that I'd got it right.  Or, more importantly, that
my design would get it right in the future.

And as we're dealing with money, I'd *want to get it right*.  I'd
actually be somewhat happier if the hash had a clear number of 128.




> Now obviously 128-bits is another scale, but MD5 is old, broken, and there
> maybe partial weakenings along the way.  eg say design aim of 128 slips
> towards 80 (in another couple of decades of computing progress).  Why design
> in a problem for the future when we KNOW and just spent a huge thread on
> this list discussing that its very hard to remove upgrade algorithms from
> deployment.  Even MD5 is still in the field.


Um.  Seems like this argument only works if people drop in SHA3 without 
being aware of the subtle switch in preimage protection, *and* they 
designed for it earlier on.  For my money, let 'em hang.



> Is there a clear work-around proposed for when you do need 256?  (Some
> composition mode or parameter tweak part of the spec?)



Use SHA3-512 or SHA3-384?

What is the preimage protection of SHA3-512 when truncated to 256?  It 
seems that SHA3-384 still gets 256.
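The work-around ianG suggests can be tried directly with the standard library: take SHA3-512 (the largest capacity) and keep the first 256 bits.  Whether the truncated output actually retains full 256-bit preimage resistance is exactly the question he raises; this only shows the mechanics:

```python
import hashlib

def sha3_512_trunc256(data: bytes) -> bytes:
    # First 32 bytes (256 bits) of the 64-byte SHA3-512 digest.
    return hashlib.sha3_512(data).digest()[:32]

d = sha3_512_trunc256(b"hello")
assert len(d) * 8 == 256
```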


> And generally where
> does one go to add ones vote to the protest for not weakening the
> 2nd-preimage property?



For now, refer to Congress of the USA, it's in Washington DC. 
Hopefully, it'll be closed soon too...




iang


Re: [Cryptography] /dev/random is not robust

2013-10-14 Thread Dan McDonald
On Tue, Oct 15, 2013 at 12:35:13AM -, d...@deadhat.com wrote:
> http://eprint.iacr.org/2013/338.pdf

*LINUX* /dev/random is not robust, so claims the paper.

I wonder how various *BSDs or the Solarish family (Illumos, Oracle Solaris)
hold up under similar scrutiny?

Linux is big, but it is not everything.

Dan


Re: [Cryptography] /dev/random is not robust

2013-10-14 Thread John Gilmore
> http://eprint.iacr.org/2013/338.pdf

I'll be the first to admit that I don't understand this paper.  I'm
just an engineer, not a mathematician.  But it looks to me like the
authors are academics, who create an imaginary construction method for
a random number generator, then prove that /dev/random is not the same
as their method, and then suggest that /dev/random be revised to use
their method, and then show how much faster their method is.  All in
all it seems to be a pitch for their method, not a serious critique of
/dev/random.

They labeled one of their construction methods "robustness", but it
doesn't mean what you think the word means.  It's defined by a mess of
Greek letters like this:

  Theorem 2. Let n > m and let ℓ, γ* be integers.  Assume that G :
  {0,1}^m → {0,1}^{n+ℓ} is a deterministic (t, ε_prg)-pseudorandom
  generator.  Let G = (setup, refresh, next) be defined as above.  Then
  G is a ((t', q_D, q_R, q_S), γ*, ε)-robust PRNG with input, where
  t' ≈ t and ε = q_R (2 ε_prg + q_D ε_ext + 2^(−n+1)),
  as long as γ* ≥ m + 2 log(1/ε_ext) + 1 and
  n ≥ m + 2 log(1/ε_ext) + log(q_D) + 1.

Yeah, what he said!

Nowhere do they seem to show that /dev/random is actually insecure.
What they seem to show is that it does not meet the robustness
criterion that they arbitrarily picked for their own construction.

Their key test is on pages 23-24, and begins with "After a state
compromise, A (the adversary) knows all parameters."  The comparison
STARTS with the idea that the enemy has figured out all of the hidden
internal state of /dev/random.  Then the weakness they point out seems
to be that in some cases of new, incoming randomness with
mis-estimated entropy, /dev/random doesn't necessarily recover over
time from having had its entire internal state somehow compromised.

This is not very close to what "/dev/random is not robust" means in
English.  Nor is it close to what others might assume the paper
claims, e.g. "/dev/random is not safe to use."

John

PS: After attending a few crypto conferences, I realized that
academic pressures tend to encourage people to write incomprehensible
papers, apparently because if nobody reading their paper can
understand it, then they look like geniuses.  But when presenting at
a conference, if nobody in the crowd can understand their slides, then
they look like idiots.  So the key to understanding somebody's
incomprehensible paper is to read their slides and watch their talk,
80% of which is often explanations of the background needed to
understand the gibberish notations they invented in the paper.  I
haven't seen either the slides or the talk relating to this paper.

Re: [Cryptography] /dev/random is not robust

2013-10-14 Thread James A. Donald

On 2013-10-15 10:35, d...@deadhat.com wrote:

> http://eprint.iacr.org/2013/338.pdf


No kidding.



Re: [Cryptography] Key stretching

2013-10-13 Thread Ray Dillinger
On 10/11/2013 11:22 AM, Jerry Leichter wrote:

> 1.  Brute force.  No public key-stretching algorithm can help, since the
>     attacker will brute-force the k's, computing the corresponding K's as
>     he goes.

There is a completely impractical solution for this which is applicable
in a very few ridiculously constrained situations.  Brute force can
be countered, in very limited circumstances, by brute bandwidth.

You have to use random salt sufficient to ensure that all possible
decryptions of messages transmitted using the insufficient key or
insecure cipher are equally valid.

Unfortunately, this requirement is cumulative for *ALL* messages that
you encrypt using the key, and becomes flatly impossible if the total
amount of ciphertext you're trying to protect with that key is greater
than a very few bits.

So, if you have a codebook that allows you to transmit one of 128 pre-
selected messages (7 bits each) you could use a very short key or an
insecure cipher about five times, attaching (2^35)/5 bits of salt to
each message, to achieve security against brute-force attacks.  At
that point your opponent sees all possible decryptions as equally
likely with at least one possible key that gives each of the possible
total combinations of decryptions (approximately; about 1/(2^k) of the
total number of possible decryptions will be left out, where k is the
size of your actual too-short key).

The bandwidth required is utterly ridiculous, but you can get
security on a few very short messages, assuming there's no identifiable
pattern in your salt.

Unfortunately, you cannot use this to leverage secure transmission of
keys: for whatever longer key you transmit using this scheme, once
your opponent has ciphertext encrypted under that longer key, the
brute-force attack over the possibilities for your initial short key
becomes applicable to that ciphertext.

Bear



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-13 Thread Christian Huitema
> Without doing any key management or requiring some kind of reliable
> identity or memory of previous sessions, the best we can do in the inner
> protocol is an ephemeral Diffie-Hellman, so suppose we do this:
>
> a.  Generate random a and send aG on curve P256
>
> b.  Generate random b and send bG on curve P256
>
> c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to
> generate an AES key for messages in each direction.
>
> d.  Each side keeps a sequence number to use as a nonce.  Both sides use
> AES-CCM with their sequence number and their sending key, and keep
> track of the sequence number of the most recent message received from the
> other side.
>
> ...
>
> Thoughts?

We should get Stev Knowles to explain the skeeter and bubba TCP options.
From private conversations I understand that the options were doing
pretty much what you describe:  use Diffie-Hellman in the TCP exchange to
negotiate an encryption key for the TCP session.

That would actually be a very neat thing. I don't believe using TCP options
would be practical today; too many firewalls would filter them. But the same
result could be achieved with a zero-knowledge version of TLS. That would
make sessions encrypted by default.

Of course, any zero-knowledge protocol can be vulnerable to
man-in-the-middle attacks. But the applications can protect against that
with an end to end exchange. For example, if there is a shared secret, even
a lowly password, the application protocol can embed verification of the
zero-knowledge session key in the password verification, by combining the
session key with either the challenge or the response in a basic
challenge-response protocol. 

That would be pretty neat, zero-knowledge TLS, then use the password
exchange to mutually authenticate server and client while protecting against
MITM. Pretty much any site could deploy that.
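A hypothetical sketch of the binding Christian describes (the names and message layout are illustrative, not from any standard): the client's challenge-response MACs the challenge together with the session key, so a man-in-the-middle who terminated two separate anonymous sessions holds a different key on each leg and cannot produce a response the server will accept:

```python
import hashlib, hmac, os

def response(password: bytes, challenge: bytes, session_key: bytes) -> bytes:
    # Combine the session key with the challenge inside an ordinary HMAC
    # challenge-response; only a party that knows the password AND shares
    # this exact session key can compute it.
    return hmac.new(password, challenge + session_key, hashlib.sha256).digest()

password = b"lowly password"
challenge = os.urandom(16)
session_key = os.urandom(32)     # key from the zero-knowledge session

# Direct connection: both ends hold the same session key, so it verifies.
assert hmac.compare_digest(response(password, challenge, session_key),
                           response(password, challenge, session_key))

# MITM relay: the attacker's session with the server has a different key,
# so the relayed response fails verification.
mitm_key = os.urandom(32)
assert response(password, challenge, session_key) != \
       response(password, challenge, mitm_key)
```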

-- Christian Huitema




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-12 Thread James A. Donald

On 2013-10-11 15:48, ianG wrote:
> Right now we've got a TCP startup, and a TLS startup.  It's pretty
> messy.  Adding another startup inside isn't likely to gain popularity.


The problem is that layering creates round trips, and as cpus get ever
faster, and pipes ever fatter, round trips become a bigger and bigger
problem.  Legend has it that each additional round trip decreases usage
of your web site by twenty percent, though I am unaware of any evidence
on this.





> (Which was one thing that suggests a redesign of TLS -- to integrate
> back into IP layer and replace/augment TCP directly. Back in those
> days we -- they -- didn't know enough to do an integrated security
> protocol.  But these days we do, I'd suggest, or we know enough to
> give it a try.)


TCP provides eight bits of protocol negotiation, which results in 
multiple layers of protocol negotiation on top.


Ideally, we should extend the protocol negotiation and do crypto 
negotiation at the same time.


But, I would like to see some research on how evil round trips really are.

I notice that bank web pages take an unholy long time to come up,
probably because one secure web page loads another, and that then loads
a script, etc.




Re: [Cryptography] SSH small RSA public exponent

2013-10-12 Thread Peter Gutmann
Tim Hudson t...@cryptsoft.com writes:

> Does anyone recollect the history behind and the implications of the (open)
> SSH choice of 35 as a hard-wired public exponent?

/* OpenSSH versions up to 5.4 (released in 2010) hardcoded e = 35, which is
   both a suboptimal exponent (it's less efficient than a safer value like 257
   or F4) and non-prime.  The reason for this was that the original SSH used
   an e relatively prime to (p-1)(q-1), choosing odd (in both senses of the
   word) numbers > 31.  33 or 35 probably ended up being chosen frequently so
   it was hardcoded into OpenSSH for cargo-cult reasons, finally being fixed
   after more than a decade to use F4.  In order to use pre-5.4 OpenSSH keys
   that use this odd value we make a special-case exception for SSH use */

Peter.
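The coprimality requirement behind those "odd numbers > 31" is easy to demonstrate: the public exponent must be invertible mod (p-1)(q-1), and e = 35 = 5*7 fails whenever 5 or 7 divides p-1 or q-1, while F4 = 65537 (prime) almost never collides.  A toy-sized sketch (real keys use ~1024-bit primes; the arithmetic is the same):

```python
from math import gcd

# Toy primes, chosen so that 5 divides p - 1.
p, q = 11, 29                  # p - 1 = 10 = 2 * 5
phi = (p - 1) * (q - 1)        # 280

# e = 35 shares the factor 5 with phi -> no private exponent exists.
assert gcd(35, phi) != 1

# F4 = 65537 is prime, so it only fails if p or q happens to be 1 mod 65537.
assert gcd(65537, phi) == 1
d = pow(65537, -1, phi)        # modular inverse (Python 3.8+)
m = 42
assert pow(pow(m, 65537, p * q), d, p * q) == m   # encrypt/decrypt round trip
```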


Re: [Cryptography] Key stretching

2013-10-12 Thread William Allen Simpson

On 10/11/13 7:34 PM, Peter Gutmann wrote:

> Phillip Hallam-Baker hal...@gmail.com writes:
>
>> Quick question, anyone got a good scheme for key stretching?
>
> http://lmgtfy.com/?q=hkdf&l=1


Yeah, that's a weaker simplification of the method I've always
advocated: stopping the hash function before the final
MD-strengthening and repeating the input, only doing the
MD-strengthening for the last step for each key.  I used this in
many of my specifications.

In essence, the MD-strengthening counter is the same as the 0xnn
counter they used, although longer and stronger.

This assures there are no related-key attacks, as the internal
chaining variables aren't exposed.
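For reference, the HKDF that Peter's reply points at (RFC 5869) uses exactly such a counter octet inside HMAC.  A minimal stdlib sketch of its extract-then-expand structure:

```python
import hashlib, hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # Extract: concentrate the input keying material into a fixed-size PRK.
    return hmac.new(salt if salt else b"\x00" * 32, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # Expand: chain HMAC blocks, each fed the previous block, the context
    # info, and a 0x01, 0x02, ... counter octet (the "0xnn counter" above).
    okm, block = b"", b""
    for counter in range(1, -(-length // 32) + 1):
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
    return okm[:length]

prk = hkdf_extract(b"salt", b"input keying material")
key = hkdf_expand(prk, b"application context", 64)
assert len(key) == 64
```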



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-12 Thread Ben Laurie
On 10 October 2013 17:06, John Kelsey crypto@gmail.com wrote:
> Just thinking out loud
>
> The administrative complexity of a cryptosystem is overwhelmingly in key
> management and identity management and all the rest of that stuff.  So
> imagine that we have a widely-used inner-level protocol that can use strong
> crypto, but also requires no external key management.  The purpose of the
> inner protocol is to provide a fallback layer of security, so that even an
> attack on the outer protocol (which is allowed to use more complicated key
> management) is unlikely to be able to cause an actual security problem.  On
> the other hand, in case of a problem with the inner protocol, the outer
> protocol should also provide protection against everything.
>
> Without doing any key management or requiring some kind of reliable identity
> or memory of previous sessions, the best we can do in the inner protocol is
> an ephemeral Diffie-Hellman, so suppose we do this:
>
> a.  Generate random a and send aG on curve P256
>
> b.  Generate random b and send bG on curve P256
>
> c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to
> generate an AES key for messages in each direction.
>
> d.  Each side keeps a sequence number to use as a nonce.  Both sides use
> AES-CCM with their sequence number and their sending key, and keep track of
> the sequence number of the most recent message received from the other side.
>
> The point is, this is a protocol that happens *inside* the main security
> protocol.  This happens inside TLS or whatever.  An attack on TLS then leads
> to an attack on the whole application only if the TLS attack also lets you do
> man-in-the-middle attacks on the inner protocol, or if it exploits something
> about certificate/identity management done in the higher-level protocol.
> (Ideally, within the inner protocol, you do some checking of the identity
> using a password or shared secret or something, but that's application-level
> stuff the inner and outer protocols don't know about.)
>
> Thoughts?

AIUI, you're trying to make it so that only active attacks work on the
combined protocol, whereas passive attacks might work on the outer
protocol. In order to achieve this, you assume that your proposed
inner protocol is not vulnerable to passive attacks (I assume the
outer protocol also thinks this is true). Why should we believe the
inner protocol is any better than the outer one in this respect?
Particularly since you're using tainted algorithms ;-).
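For concreteness, steps (a)-(d) of the quoted proposal can be sketched end-to-end with stdlib stand-ins: a small multiplicative DH group in place of P-256 ECDH, SHA-256/HMAC in place of SHAKE512, and an HMAC-authenticated XOR stream in place of AES-CCM.  Illustrative only; the parameters are far too small for real security.

```python
import hashlib, hmac, secrets

P = 2**127 - 1   # Mersenne prime: a toy DH modulus, NOT a secure group size
G = 3

def keypair():
    # Steps (a)/(b): random exponent, send G^x mod P (standing in for xG).
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def derive_keys(shared: int):
    # Step (c): derive one key per direction from the shared secret.
    seed = hashlib.sha256(shared.to_bytes(16, "big")).digest()
    return (hmac.new(seed, b"a->b", hashlib.sha256).digest(),
            hmac.new(seed, b"b->a", hashlib.sha256).digest())

def _stream(key: bytes, nonce: bytes, n: int) -> bytes:
    s = b""
    while len(s) < n:
        s += hashlib.sha256(key + nonce + len(s).to_bytes(4, "big")).digest()
    return s[:n]

def seal(key: bytes, seq: int, msg: bytes) -> bytes:
    # Step (d): sequence number as nonce; encrypt-then-MAC stands in for CCM.
    nonce = seq.to_bytes(8, "big")
    ct = bytes(m ^ s for m, s in zip(msg, _stream(key, nonce, len(msg))))
    return nonce + ct + hmac.new(key, nonce + ct, hashlib.sha256).digest()

def open_(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:8], blob[8:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct,
                                             hashlib.sha256).digest()):
        raise ValueError("MAC failure")   # the only error this layer reports
    return bytes(c ^ s for c, s in zip(ct, _stream(key, nonce, len(ct))))

# Handshake: each side sends its public value; both derive the same keys.
a, A = keypair()
b, B = keypair()
assert derive_keys(pow(B, a, P)) == derive_keys(pow(A, b, P))
k_ab, _ = derive_keys(pow(B, a, P))
assert open_(k_ab, seal(k_ab, 1, b"hello inner layer")) == b"hello inner layer"
```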


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-12 Thread Jerry Leichter
On Oct 11, 2013, at 11:09 PM, James A. Donald wrote:
>> Right now we've got a TCP startup, and a TLS startup.  It's pretty messy.
>> Adding another startup inside isn't likely to gain popularity.
>
> The problem is that layering creates round trips, and as cpus get ever
> faster, and pipes ever fatter, round trips become a bigger and bigger
> problem.  Legend has it that each additional round trip decreases usage of
> your web site by twenty percent, though I am unaware of any evidence on this.
The research is on time delays, which you could easily enough convert to round 
trips.  The numbers are nowhere near 20%, but are significant if you have many 
users:  http://googleresearch.blogspot.com/2009/06/speed-matters.html

-- Jerry



Re: [Cryptography] PGP Key Signing parties

2013-10-12 Thread Stephen Farrell

If someone wants to try organise a pgp key signing party at
the Vancouver IETF next month let me know and I can organise a
room/time. That's tended not to happen since Ted and Jeff
don't come along but we could re-start 'em if there's interest.

S.


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-12 Thread John Kelsey
On Oct 12, 2013, at 6:51 AM, Ben Laurie b...@links.org wrote:
...
> AIUI, you're trying to make it so that only active attacks work on the
> combined protocol, whereas passive attacks might work on the outer
> protocol. In order to achieve this, you assume that your proposed
> inner protocol is not vulnerable to passive attacks (I assume the
> outer protocol also thinks this is true). Why should we believe the
> inner protocol is any better than the outer one in this respect?

The point is, we don't know how to make protocols that really are reliably 
secure against future attacks.  If we did, we'd just do that. 


My hope is that if we layer two of our best attempts at secure protocols on top 
of one another, then we will get security because the attacks will be hard to 
get through the composed protocols.  So maybe my protocol (or whatever inner 
protocol ends up being selected) isn't secure against everything, but as long 
as its weaknesses are covered up by the outer protocol, we still get a secure 
final result.  

One requirement for this is that the inner protocol must not introduce new 
weaknesses.  I think that means it must not:

a.  Leak information about its plaintexts in its timing, error messages, or 
ciphertext sizes.  

b.  Introduce ambiguities about how the plaintext is to be decrypted that could 
mess up the outer protocol's authentication.  

I think we can accomplish (a) by not compressing the plaintext before 
processing it, by using crypto primitives that don't leak plaintext data in 
their timing, and by having the only error message that can ever be generated 
from the inner protocol be essentially a MAC failure or an out-of-sequence 
error.  

I think (b) is pretty easy to accomplish with standard crypto, but maybe I'm 
missing something.  

...
> Particularly since you're using tainted algorithms ;-).

If using AES or P256 are the weak points in the protocol, that is a big win.  
Right now, we aren't getting anywhere close to that.  And there's no reason 
either AES or P256 have to be used--I'm just looking for a simple, lightweight 
way to get as much security as possible inside some other protocol.  

--John



Re: [Cryptography] PGP Key Signing parties

2013-10-12 Thread Joshua Marpet
I am one of the organizers of Security BSides Delaware, otherwise known as
BSidesDE.  We have already discussed having a key signing party, but if
there is any interest, I'd love for any of you to be there, and potentially
run it.  Check out bsidesdelaware.com for dates, locations, and such.

It's an academic environment, and we will have several hundred people
there, from college students, to business, to infosec professionals.

And we're only a couple of hours from the NSA!!  ;)

Nov 8 and 9th, Wilmington, DE.

Any interest?

Joshua Marpet


On Sat, Oct 12, 2013 at 8:00 AM, Stephen Farrell
stephen.farr...@cs.tcd.iewrote:


 If someone wants to try organise a pgp key signing party at
 the Vancouver IETF next month let me know and I can organise a
 room/time. That's tended not to happen since Ted and Jeff
 don't come along but we could re-start 'em if there's interest.

 S.




-- 
Joshua A. Marpet
Managing Principal
GuardedRisk
"Before the Breach and After The Incident!"
1-855-23G-RISK (855-234-7475)
Cell: (908) 916-7764
joshua.mar...@guardedrisk.com
http://www.GuardedRisk.com


[Cryptography] ADMIN: Re: Iran and murder

2013-10-11 Thread Tamzen Cannoy
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1


I think this thread has run its course and is sufficiently off topic for this 
list, so I am declaring it closed. 

Thank you

Tamzen




-----BEGIN PGP SIGNATURE-----
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFSWDC65/HCKu9Iqw4RAk3YAKCxoX20Ofj4FFGUDxD8x3GVgpSd2gCg38TQ
iCjYvp3O1v7rnjUFil6bDrM=
=WWIe
-----END PGP SIGNATURE-----


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread John Kelsey
On Oct 11, 2013, at 1:48 AM, ianG i...@iang.org wrote:

...
 What's your goal?  I would say you could do this if the goal was ultimate 
 security.  But for most purposes this is overkill (and I'd include online 
 banking, etc, in that).

We were talking about how hard it is to solve crypto protocol problems by 
getting the protocol right the first time, so we don't end up with fielded 
stuff that's weak but can't practically be fixed.  One approach I can see to 
this is to have multiple layers of crypto protocols that are as independent as 
possible in security terms.  The hope is that flaws in one protocol will 
usually not get through the other layer, and so they won't lead to practical 
security flaws.  

Actually getting the outer protocol right the first time would be better, but 
we haven't had great success with that so far. 

 Right now we've got a TCP startup, and a TLS startup.  It's pretty messy.  
 Adding another startup inside isn't likely to gain popularity.

Maybe not, though I think a very lightweight version of the inner protocol adds 
only a few bits to the traffic used and a few AES encryptions to the workload.  
I suspect most applications would never notice the difference.  (Even the 
version with the ECDH key agreement step would probably not add noticeable 
overhead for most applications.)  On the other hand, I have no idea if anyone 
would use this.  I'm still at the level of "what could be done to address this 
problem?", not "how would you sell this?"  

 iang

--John


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-11 Thread d.nix
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1


On 10/10/2013 6:40 PM, grarpamp wrote:
 On Thu, Oct 10, 2013 at 11:58 AM, R. Hirschfeld r...@unipay.nl wrote:
  To send a prism-proof email, encrypt it for your recipient and
  send it to irrefrangi...@mail.unipay.nl.  Don't include any
  information about
 
  To receive prism-proof email, subscribe to the irrefrangible
  mailing list at
  http://mail.unipay.nl/mailman/listinfo/irrefrangible/.  Use a
 
 This is the same as NNTP, but worse in that it's not distributed.
 

Is this not essentially alt.anonymous.messages, etc?

http://ritter.vg/blog-deanonymizing_amm.html
http://ritter.vg/blog-deanonymizing_amm_followup1.html

?

- --


-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.20 (MingW32)

iQEcBAEBAgAGBQJSV6VAAAoJEDMbeBxcUNAekEcIAIYsHOI384C4RJfNdBcpD6NR
a40C4LTQOwPJV335zUWWHjc6+6ZlUwwHimk2IQebNcEflNJn55O7k3N4CS7i4qtp
A9dxDxilCrSpwwwPnsso5bfrA2/PEVfux1yzCZ4lmf39xwl/y/0PyBO7DB8CMQcA
YatmYtzFAWktLYZSDuMIJPnzSKuaOnEQSiOXwCCTwgSIo3QRoNP+01JprroT168e
mylxsVP2R46YIIWx6uWl+oU2oflaa3/r/nLdS2OCV99uZXmu8UlJAVNq222YwELn
yhvkasfkRHtE6AhK1t5y9c4dB9cz5v2hTKNFlaRVf0PyA59ZRu8EAoZnWcJCDrM=
=gsqL
-----END PGP SIGNATURE-----


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-11 Thread Eugen Leitl
On Thu, Oct 10, 2013 at 03:54:26PM -0400, John Kelsey wrote:

 Having a public bulletin board of posted emails, plus a protocol for
 anonymously finding the ones your key can decrypt, seems like a pretty decent
 architecture for prism-proof email.  The tricky bit of crypto is in making
 access to the bulletin board both efficient and private.  

This is what Bitmessage attempts to achieve, but it has issues.
Assuming these can be solved (a rather large "if"), and glue 
like https://bitmessage.ch/ is available to be run by end users, 
it could be quite useful.


Re: [Cryptography] PGP Key Signing parties

2013-10-11 Thread Eugen Leitl
On Thu, Oct 10, 2013 at 04:24:19PM -0700, Glenn Willen wrote:

 I am going to be interested to hear what the rest of the list says about
 this, because this definitely contradicts what has been presented to me as
 'standard practice' for PGP use -- verifying identity using government issued
 ID, and completely ignoring personal knowledge.

This obviously ignores the threat model of official fake IDs.
This is not just academic for some users. 

Plus, consider e.g. linking up with known friends in RetroShare,
which implements identities via PGP keys and degrees of
trust (none/marginal/full) via signatures, and allows you to
tune your co-operative variables (anonymous routing/discovery/
forums/channels/use a direct source, if available) depending on
the degree of trust.


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread ianG

On 10/10/13 19:06 PM, John Kelsey wrote:

Just thinking out loud

The administrative complexity of a cryptosystem is overwhelmingly in key 
management and identity management and all the rest of that stuff.  So imagine 
that we have a widely-used inner-level protocol that can use strong crypto, but 
also requires no external key management.  The purpose of the inner protocol is 
to provide a fallback layer of security, so that even an attack on the outer 
protocol (which is allowed to use more complicated key management) is unlikely 
to be able to cause an actual security problem.  On the other hand, in case of 
a problem with the inner protocol, the outer protocol should also provide 
protection against everything.

Without doing any key management or requiring some kind of reliable identity or 
memory of previous sessions, the best we can do in the inner protocol is an 
ephemeral Diffie-Hellman, so suppose we do this:

a.  Generate random a and send aG on curve P256

b.  Generate random b and send bG on curve P256

c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to 
generate an AES key for messages in each direction.

d.  Each side keeps a sequence number to use as a nonce.  Both sides use 
AES-CCM with their sequence number and their sending key, and keep track of the 
sequence number of the most recent message received from the other side.
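Steps (a)-(d) above can be sketched in a few lines of Python. This is an illustrative toy under loud assumptions, not the protocol itself: the ECDH output abG from steps (a)-(b) is simulated with random bytes, and AES-CCM is stood in for by an HMAC tag (the standard library has no AES), so only the key-expansion and sequence-number/nonce structure is shown:

```python
import hashlib, hmac, os, struct

# Stand-in for the shared ECDH secret abG (a real implementation would
# run the P-256 exchange of steps (a)-(b) with a crypto library).
shared_secret = os.urandom(32)

# Step (c): expand the shared secret with SHAKE into two direction keys.
okm = hashlib.shake_256(shared_secret).digest(64)
key_a_to_b, key_b_to_a = okm[:32], okm[32:]

# Step (d): protect each message under (sending key, sequence-number nonce).
# AES-CCM is replaced here by an HMAC tag purely for illustration.
def protect(key: bytes, seq: int, plaintext: bytes) -> bytes:
    nonce = struct.pack(">Q", seq)          # 8-byte big-endian counter
    tag = hmac.new(key, nonce + plaintext, hashlib.sha256).digest()
    return nonce + plaintext + tag

def verify(key: bytes, last_seq: int, msg: bytes):
    nonce, body, tag = msg[:8], msg[8:-32], msg[-32:]
    seq = struct.unpack(">Q", nonce)[0]
    if seq <= last_seq:                     # replayed or out-of-sequence
        raise ValueError("out-of-sequence")
    good = hmac.compare_digest(tag, hmac.new(key, nonce + body,
                                             hashlib.sha256).digest())
    if not good:
        raise ValueError("MAC failure")     # the only other error emitted
    return seq, body

msg = protect(key_a_to_b, 1, b"hello")
seq, body = verify(key_a_to_b, 0, msg)
```

Note how verify() can only ever fail with a MAC failure or an out-of-sequence error, matching the constraint on inner-protocol error messages discussed earlier in the thread.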

The point is, this is a protocol that happens *inside* the main security 
protocol.  This happens inside TLS or whatever.  An attack on TLS then leads to 
an attack on the whole application only if the TLS attack also lets you do 
man-in-the-middle attacks on the inner protocol, or if it exploits something 
about certificate/identity management done in the higher-level protocol.  
(Ideally, within the inner protocol, you do some checking of the identity using 
a password or shared secret or something, but that's application-level stuff 
the inner and outer protocols don't know about.)

Thoughts?



What's your goal?  I would say you could do this if the goal was 
ultimate security.  But for most purposes this is overkill (and I'd 
include online banking, etc, in that).


Right now we've got a TCP startup, and a TLS startup.  It's pretty 
messy.  Adding another startup inside isn't likely to gain popularity.


(Which was one thing that suggests a redesign of TLS -- to integrate 
back into IP layer and replace/augment TCP directly.  Back in those days 
we -- they -- didn't know enough to do an integrated security protocol. 
 But these days we do, I'd suggest, or we know enough to give it a try.)


iang


Re: [Cryptography] PGP Key Signing parties

2013-10-11 Thread Phillip Hallam-Baker
Reply to various,

Yes, the value in a given key signing is weak, in fact every link in the
web of trust is terribly weak.

However, if you notarize and publish the links in CT fashion then I can
show that they actually become very strong. I might not have good evidence
of John Gilmore's key at RSA 2001, but I could get very strong evidence
that someone signed a JG key at RSA 2001.

Which is actually quite a high bar, since the attacker would have to buy a
badge, which is $2,000. Even if they were going to go anyway and it is a
sunk cost, they are rate limited.


The other attacks John raised are valid but I think they can be dealt with
by adequate design of the ceremony to ensure that it is transparent.

Now stack that information alongside other endorsements and we can arrive
at a pretty strong authentication mechanism.

The various mechanisms used to evaluate the trust can also be expressed in
the endorsement links.


What I am trying to solve here is the distance problem in Web o' trust. At
the moment it is pretty well impossible for me to have confidence in keys
for people who are ten degrees out. Yet I am pretty confident of the
accuracy of histories of what happened 300 years ago (within certain
limits).

It is pretty easy to fake a web of trust, I can do it on one computer, no
trouble. But if the web is grounded at just a few points to actual events
then it becomes very difficult to spoof.

Re: [Cryptography] PGP Key Signing parties

2013-10-11 Thread Richard Outerbridge
On 2013-10-10 (283), at 19:24:19, Glenn Willen gwil...@nerdnet.org wrote:

 John,
 
 On Oct 10, 2013, at 2:31 PM, John Gilmore wrote:
 
 An important user experience point is that we should be teaching GPG
 users to only sign the keys of people who they personally know.

[...]

 would be false and would undermine the strength of the web of trust.
 
 I am going to be interested to hear what the rest of the list says about 
 this, because this definitely contradicts what has been presented to me as 
 'standard practice' for PGP use -- verifying identity using government issued 
 ID, and completely ignoring personal knowledge.
 
 Do you have any insight into what proportion of PGP/GPG users mean their 
 signatures as personal knowledge (my preference and evidently yours), 
 versus government ID (my perception of the community standard best 
 practice), versus no verification in particular (my perception of the 
 actual common practice in many cases)?
 
 (In my ideal world, we'd have a machine readable way of indication what sort 
 of verification was performed. Signing policies, not being machine readable 
 or widely used, don't cover this well. There is space for key-value 
 annotations in signature packets, which could help with this if we 
 standardized on some.)
 
 Glenn Willen
 __

Surely to make it two factor it needs to be someone you know _and_ something 
they have? :-)
__outer



Re: [Cryptography] prism-proof email in the degenerate case

2013-10-11 Thread Erik de Castro Lopo
grarpamp wrote:

 On Thu, Oct 10, 2013 at 11:58 AM, R. Hirschfeld r...@unipay.nl wrote:
  To send a prism-proof email, encrypt it for your recipient and send it
  to irrefrangi...@mail.unipay.nl.  Don't include any information about
 
  To receive prism-proof email, subscribe to the irrefrangible mailing
  list at http://mail.unipay.nl/mailman/listinfo/irrefrangible/.  Use a
 
 This is the same as NNTP, but worse in that it's not distributed.

This scheme already exists on Usenet/NNTP as alt.anonymous.messages.
See the Google groups view here:

https://groups.google.com/forum/#!forum/alt.anonymous.messages

Erik
-- 
--
Erik de Castro Lopo
http://www.mega-nerd.com/


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread ianG

On 10/10/13 08:41 AM, Bill Frantz wrote:


We should try to characterize what a very long time is in years. :-)



Look at the product life cycle for known crypto products.  We have some 
experience of this now.  Skype, SSL v2/3 -> TLS 1.0/1.1/1.2, SSH 1 -> 2, 
PGP 2 -> 5+.


As a starting point, I would suggest 10 years.

iang


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread ianG

On 10/10/13 17:58 PM, Salz, Rich wrote:

TLS was designed to support multiple ciphersuites. Unfortunately this opened 
the door
to downgrade attacks, and transitioning to protocol versions that wouldn't do 
this was nontrivial.
The ciphersuites included all shared certain misfeatures, leading to the 
current situation.


On the other hand, negotiation let us deploy it in places where full-strength 
cryptography is/was regulated.



That same regulator that asked for that capability is somewhat prominent 
in the current debacle.


Feature or bug?



Sometimes half a loaf is better than nothing.



A shortage of bread has been the inspiration for a few revolutions :)

iang



Re: [Cryptography] prism-proof email in the degenerate case

2013-10-11 Thread Nico Williams
On Thu, Oct 10, 2013 at 04:22:50PM -0400, Jerry Leichter wrote:
 On Oct 10, 2013, at 11:58 AM, R. Hirschfeld r...@unipay.nl wrote:
  Very silly but trivial to implement so I went ahead and did so:
  
  To send a prism-proof email, encrypt it for your recipient and send it
  to irrefrangi...@mail.unipay.nl
 Nice!  I like it.

Me too.  I've been telling people that all PRISM will accomplish
regarding the bad guys is to get them to use dead drops, such as comment
posting on any of millions of blogs -- low bandwidth, undetectable.  The
technique in this thread makes the use of a dead drop obvious, and adds
significantly to the recipient's work load, but in exchange brings the
bandwidth up to more usable levels.

Either way the communicating peers must pre-agree on a number of things --
a traffic-analysis Achilles' heel, but it's a one-time vulnerability, and
chances are people who would communicate this way already have such
meetings.

 A couple of comments:
 
 1.  Obviously, this has scaling problems.  The interesting question is
 how to extend it while retaining the good properties.  If participants
 are willing to be identified to within 1/k of all the users of the
 system (a set which will itself remain hidden by the system), choosing
 one of k servers based on a hash of the recipient would work.  (A
 concerned recipient could, of course, check servers that he knows
 can't possibly have his mail.)  Can one do better?

Each server/list is a channel.  Pre-agree on channels or use hashes.  If
the latter then the hashes have to be of {sender, recipient}, else one
party has a lot of work to do, but then again, using just the sender or
just the recipient helps protect the other party against traffic
analysis.  Assuming there are millions of channels then maybe
something like

H({sender, truncate(H(recipient), log2(number-of-channels-to-check))})

will do just fine.  And truncate(H(recipient), log2(num-channels)) can
be used for introduction purposes.

The number of servers/lists divides the total work to do to receive a
message.
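The channel-selection scheme above can be sketched as follows; the function names and the channel count are assumptions made for illustration:

```python
import hashlib

NUM_CHANNELS = 1 << 20            # assume ~a million channels/lists

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def truncate_bits(digest: bytes, bits: int) -> int:
    """Keep only the top `bits` bits of a digest, as an integer."""
    return int.from_bytes(digest, "big") >> (len(digest) * 8 - bits)

def channel(sender: bytes, recipient: bytes) -> int:
    """H({sender, truncate(H(recipient), log2(num-channels))})."""
    bits = NUM_CHANNELS.bit_length() - 1          # log2(NUM_CHANNELS)
    r = truncate_bits(h(recipient), bits).to_bytes((bits + 7) // 8, "big")
    return truncate_bits(h(sender + r), bits)

def intro_channel(recipient: bytes) -> int:
    """truncate(H(recipient), log2(num-channels)), for introductions."""
    bits = NUM_CHANNELS.bit_length() - 1
    return truncate_bits(h(recipient), bits)
```

Because the channel depends on both sender and recipient, neither party alone reveals the pair, while the recipient only has to scan channels derived from known senders plus the introduction channel.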

 2.  The system provides complete security for recipients (all you can
 tell about a recipient is that he can potentially receive messages -
 though the design has to be careful so that a recipient doesn't, for
 example, release timing information depending on whether his
 decryption succeeded or not).  However, the protection is more limited
 for senders.  A sender can hide its activity by simply sending random
 messages, which of course no one will ever be able to decrypt.  Of
 course, that adds yet more load to the entire system.

But then the sender can't quite prove that they didn't send anything.
In a rubber hose attack this could be a problem.  This also applies to
recipients: they can be observed fetching messages, and they can be
observed expending power trying to find ones addressed to them.

Also, there's no DoS protection: flooding the lists with bogus messages
is a DoS on recipients.

Nico
-- 


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread Zooko O'Whielacronx
I like the ideas, John.

The idea, and the protocol you sketched out, are a little reminiscent
of ZRTP ¹ and of tcpcrypt ². I think you can go one step further,
however, and make it *really* strong, which is to offer the higher
or outer layer a way to hook into the crypto from your inner layer.

This could be by the inner layer exporting a crypto value which the
outer layer enforces an authorization or authenticity requirement on,
as is done in ZRTP if the a=zrtp-hash is delivered through an
integrity-protected outer layer, or in tcpcrypt if the Session ID is
verified by the outer layer.

I think this is a case where a separation of concerns between layers
with a simple interface between them can have great payoff. The
lower/inner layer enforces confidentiality (encryption),
integrity, hopefully forward-secrecy, etc., and the outer layer
decides on policy: authorization, naming (which is often but not
necessarily used for authorization), etc. The interface between them
can be a simple cryptographic interface, for example the way it is
done in the two examples above.

I think the way that SSL combined transport layer security,
authorization, and identification was a terrible idea. I (and others)
have been saying all along that it was a bad idea, and I hope that the
related security disasters during the last two years have started
persuading more people to rethink it, too. I guess the designers of
SSL were simply following the lead of the original inventors of public
key cryptography, who delegated certain critical unsolved problems to
an underspecified Trusted Third Party. What a colossal, historic
mistake.

The foolscap project ³ by Brian Warner demonstrates that it is
possible to retrofit a nice abstraction layer onto SSL. The way that
it does this is that each server automatically creates a self-signed
certificate, the secure hash of that certificate is embedded into the
identifier pointing at that server, and the client requires the
server's public key match the certificate matching that hash. The fact
that this is a useful thing to do, and inconvenient and rare thing to
do with SSL, should give security architects food for thought.
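The foolscap idea can be illustrated schematically. The identifier syntax below only approximates foolscap's actual FURLs, and a real client would extract the presented certificate from the TLS handshake; this is a sketch of the hash-in-the-identifier pattern, nothing more:

```python
import hashlib

def make_identifier(cert_der: bytes, host: str, port: int) -> str:
    """Embed the hash of the server's self-signed cert in its identifier."""
    fp = hashlib.sha256(cert_der).hexdigest()
    return f"pb://{fp}@{host}:{port}"     # schematic, foolscap-style

def check_cert(identifier: str, presented_cert_der: bytes) -> bool:
    """The client requires the presented cert to match the embedded hash."""
    expected = identifier.split("//")[1].split("@")[0]
    return hashlib.sha256(presented_cert_der).hexdigest() == expected
```

With this pattern no third party is trusted: whoever hands you the identifier is implicitly vouching for the key, and the transport layer merely enforces the match.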

So I have a few suggestions for you:

1. Go, go, go! The path your thoughts are taking seems fruitful. Just
design a really good inner layer of crypto, without worrying (for
now) about the vexing and subtle problems of authorization,
authentication, naming, Man-In-The-Middle-Attack and so on. For now.

2. Okay, but leave yourself an out, by defining a nice simple
cryptographic hook by which someone else who *has* solved those vexing
problems could extend the protection that they've gained to users of
your protocol.

3. Maybe study ZRTP and tcpcrypt for comparison. Don't try to study
foolscap, even though it is a very interesting practical approach,
because there doesn't exist documentation of the protocol at the right
level for you to learn from.

Regards,

Zooko

https://LeastAuthority.com ← verifiably end-to-end-encrypted storage

P.S. Another example that you and I should probably study is cjdns ⁴.
Despite its name, it is *not* a DNS-like thing. It is a
transport-layer thing. I know less about cjdns so I didn't cite it as
a good example above.

¹ https://en.wikipedia.org/wiki/ZRTP
² http://tcpcrypt.org/
³ http://foolscap.lothar.com/docs/using-foolscap.html
⁴ http://cjdns.info/

Re: [Cryptography] PGP Key Signing parties

2013-10-11 Thread Tony Naggs
On 10 October 2013 22:31, John Gilmore g...@toad.com wrote:
 Does PGP have any particular support for key signing parties built in or is
 this just something that has grown up as a practice of use?

 It's just a practice.  I agree that building a small amount of automation
 for key signing parties would improve the web of trust.

Do key signing parties even happen much anymore? The last time I saw
one advertised was around PGP 2.6!


 I am specifically thinking of ways that key signing parties might be made
 scalable so that it was possible for hundreds of thousands of people...

 An important user experience point is that we should be teaching GPG
 users to only sign the keys of people who they personally know.
 Having a signature that says, This person attended the RSA conference
 in October 2013 is not particularly useful.  (Such a signature could
 be generated by the conference organizers themselves, if they wanted
 to.)  Since the conference organizers -- and most other attendees --
 don't know what an attendee's real identity is, their signature on
 that identity is worthless anyway.

I can sign the public keys of people I personally know without a key
signing party. :-)

For many purposes I don't care about a person's official, legal
identity, but I do want to communicate with a particular persona.
For instance at DefCon or CCC I neither know or care whether someone
identifies themselves to me by their legal name or hacker handle, but
it is very useful to know and authenticate that they are in control of a
private PGP/GPG key in that name on a particular date.


Re: [Cryptography] Key stretching

2013-10-11 Thread John Kelsey
This is a job for a key derivation function or a cryptographic PRNG.  I would 
use CTR-DRBG from SP 800-90 with AES-256, or the extract-then-expand KDF based 
on HMAC-SHA512.
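The second option, extract-then-expand over HMAC-SHA512, is essentially HKDF (RFC 5869); a minimal stdlib sketch, with an example input key chosen purely for illustration:

```python
import hmac, hashlib

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # RFC 5869: if no salt, use HashLen zero bytes (64 for SHA-512).
    return hmac.new(salt or b"\x00" * 64, ikm, hashlib.sha512).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # T(i) = HMAC(PRK, T(i-1) | info | i), concatenated until long enough.
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha512).digest()
        okm += t
        counter += 1
    return okm[:length]

prk = hkdf_extract(b"", b"\x00" * 16)       # 128-bit input key (example)
key = hkdf_expand(prk, b"aes256 key", 32)   # stretched 256-bit key
```

The info argument lets the same input key yield independent output keys for different purposes, which matters when one stored secret feeds several algorithms.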

--John


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread Bill Frantz

On 10/11/13 at 10:32 AM, zoo...@gmail.com (Zooko O'Whielacronx) wrote:


Don't try to study
foolscap, even though it is a very interesting practical approach,
because there doesn't exist documentation of the protocol at the right
level for you to learn from.


Look at the E language sturdy refs, which are a lot like the 
Foolscap references. They are documented at www.erights.org.


Cheers - Bill

---
Bill Frantz        | Truth and love must prevail  | Periwinkle
(408)356-8506      | over lies and hate.          | 16345 Englewood Ave
www.pwpconsult.com |   - Vaclav Havel             | Los Gatos, CA 95032




Re: [Cryptography] PGP Key Signing parties

2013-10-11 Thread Joe Abley

On 2013-10-11, at 07:03, Tony Naggs tonyna...@gmail.com wrote:

 On 10 October 2013 22:31, John Gilmore g...@toad.com wrote:
 Does PGP have any particular support for key signing parties built in or is
 this just something that has grown up as a practice of use?
 
 It's just a practice.  I agree that building a small amount of automation
 for key signing parties would improve the web of trust.
 
 Do key signing parties even happen much anymore? The last time I saw
 one advertised was around PGP 2.6!

The most recent key signing party I attended was five days ago (DNS-OARC 
meeting in Phoenix, AZ). I commonly have half a dozen opportunities to 
participate in key signing parties during a typical year's travel schedule to 
workshops, conferences and other meetings. This is not uncommon in the circles 
I work in (netops, dnsops).

My habit before signing anything is generally at least to have had a 
conversation with someone and observed their interactions with people I do know 
(I generally have worked with other people at the party).  I'll check 
government-issued IDs, but I'm aware that I am not an expert in counterfeit 
passports and I never feel that I am able to do a good job of it.

(I showed up to a key signing party at the IETF once with a New Zealand 
passport, a Canadian passport, a British passport, an expired Canadian 
permanent-resident card, three driving licences and a Canadian health card, and 
offered the bundle to anybody who cared to review them to make this easier for 
others. But that was mainly showing off.)

I have used key ceremonies to poison edges and nodes in the graph of trust 
following observations that particular individuals don't do a good enough job 
of this, or that (in some cases) they appear to have made signatures at an 
event where I was present and I know they were not. That's a useful adjunct to 
a key ceremony (I think) that many people ignore. The web of trust can also be 
a useful web of distrust.


Joe



Re: [Cryptography] Key stretching

2013-10-11 Thread Jerry Leichter
On Oct 11, 2013, at 11:26 AM, Phillip Hallam-Baker hal...@gmail.com wrote:
 Quick question, anyone got a good scheme for key stretching?
 
 I have this scheme for managing private keys that involves storing them as 
 encrypted PKCS#8 blobs in the cloud.
 
 AES128 seems a little on the weak side for this but there are (rare) 
 circumstances where a user is going to need to type in the key for recovery 
 purposes so I don't want more than 128 bits of key to type in (I am betting 
 that 128 bits is going to be sufficient to the end of Moore's law).
 
 
 So the answer is to use AES 256 and stretch the key, but how? I could just 
 repeat the key:
 
 K = k + k
 
 Related key attacks make me a little nervous though. Maybe:
The related key attacks out there require keys that differ in a couple of 
bits.  If k and k' aren't related, k+k and k'+k' won't be either.

 K = (k + 01234567) XOR SHA512 (k)
Let's step back a moment and think about attacks:

1.  Brute force.  No public key-stretching algorithm can help, since the 
attacker will brute-force the k's, computing the corresponding K's as he goes.
2.  Analytic attack against AES128 that doesn't extend, in general, to AES256.  
Without knowing the nature of the attack, it's impossible to estimate whether 
knowing that the key has some particular form would allow the attack to extend. 
If so ... what forms?
3.  Analytic attack against AES256.  A recognizable form for keys - e.g., k+k - 
might conceivably help, but it seems like a minor thing.

Realistically, k+k, or k padded with 0's, or SHA256(k), are probably equally 
strong except under any attacks specifically concocted to target them (e.g., 
suppose it turns out that there just happens to be an analytic attack against 
AES256 for keys with more than 3/4's of the bits equal to 0).

Since you're describing a situation in which performance is not an issue, you 
might as well use SHA256(k) - whitening the key can't hurt.
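The candidate stretches compared in this exchange, with the hashed form Jerry recommends, come down to a couple of lines (the 128-bit key is randomly generated here purely for illustration):

```python
import hashlib, os

k = os.urandom(16)                      # stands in for the user-typed 128-bit key

K_repeat = k + k                        # the k+k option Phillip proposed
K_hashed = hashlib.sha256(k).digest()   # the SHA256(k) whitening option

# Both yield a 256-bit AES-256 key; SHA256(k) avoids any recognizable
# key structure at no meaningful cost in this (rare, interactive) setting.
assert len(K_repeat) == len(K_hashed) == 32
```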

-- Jerry



Re: [Cryptography] Broken RNG renders gov't-issued smartcards easily hackable.

2013-10-11 Thread Wouter Slegers
Dear Ray,

On 2013-10-11, at 19:38 , Ray Dillinger b...@sonic.net wrote:
 This is despite meeting (for some inscrutable definition of meeting)
 FIPS 140-2 Level 2 and Common Criteria standards.  These standards
 require steps that were clearly not done here.  Yet, validation
 certificates were issued.
This is a misunderstanding of the CC certification and FIPS validation 
processes:
the certificates were issued *under the condition* that the software/system 
built on it uses/implements the RNG tests mandated. The software didn't, 
invalidating the results of the certifications.

At best, the mandatory guidance is there because it was too difficult to prove 
that the smart card meets the criteria without it (typical example in the OS 
world: the administrator is assumed to be trusted; typical example in smart 
card hardware: do the RNG tests!).
At worst, the mandatory guidance is there because without it the smart card 
would not have met the criteria (i.e. without following the guidance there is a 
vulnerability).
This is an example of the latter case.  Most likely the software also hasn't 
implemented the other requirements, leaving it somewhat to very vulnerable to 
standard smart card attacks such as side channel analysis and perturbation.

If the total (the smart card + software) would have been CC certified, this 
would have been checked as part of the composite certification.

(I've been in the smart card CC world for more than a decade. This kind of 
misunderstanding/misapplication is rare for the financial world thanks to 
EMVco, i.e. the credit card companies. It is also rare for European government 
organisations, as they know to contact the Dutch/French/German/UK agencies 
involved in these things. European ePassports for example are generally 
certified for the whole thing and a mistake in those of this order would be ... 
surprising and cause for some intense discussion in the smart card 
certification community. Newer parties into the smart card world tend to have 
to relearn the lessons again and again it seems.)

With kind regards,
Wouter Slegers


Re: [Cryptography] PGP Key Signing parties

2013-10-11 Thread Jeremy Stanley
On 2013-10-11 12:03:44 +0100 (+0100), Tony Naggs wrote:
 Do key signing parties even happen much anymore? The last time I saw
 one advertised was around PGP 2.6!
[...]

Within more active pockets of the global free software community
(where OpenPGP signatures are used to authenticate release
artifacts, security advisories, election ballots, access controls
and so on) key signing parties are an extremely common occurrence...
I'd say much more so now than a decade ago, as the community has
grown continually and developed an increasing need to be able to
recognize one another's output in a verifiable manner,
asynchronously, distributed over great distances and across
loosely-related subcommunities/projects.
-- 
{ PGP( 48F9961143495829 ); FINGER( fu...@cthulhu.yuggoth.org );
WWW( http://fungi.yuggoth.org/ ); IRC( fu...@irc.yuggoth.org#ccl );
WHOIS( STANL3-ARIN ); MUD( kin...@katarsis.mudpy.org:6669 ); }


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread Trevor Perrin
On Fri, Oct 11, 2013 at 10:32 AM, Zooko O'Whielacronx zoo...@gmail.com wrote:
 I like the ideas, John.

 The idea, and the protocol you sketched out, are a little reminiscent
 of ZRTP ¹ and of tcpcrypt ². I think you can go one step further,
 however, and make it *really* strong, which is to offer the "higher"
 or "outer" layer a way to hook into the crypto from your inner layer.

 This could be by the inner layer exporting a crypto value which the
 outer layer enforces an authorization or authenticity requirement on,
 as is done in ZRTP if the a=zrtp-hash is delivered through an
 integrity-protected outer layer, or in tcpcrypt if the Session ID is
 verified by the outer layer.

Hi Zooko,

Are you and John talking about the same thing?

John's talking about tunnelling a redundant inner record layer of
encryption inside an outer record layer (using TLS terminology).

I think you're talking about a couple different-but-related things:

 * channel binding, where an unauthenticated-but-encrypted channel
can be authenticated by performing an inside-the-channel
authentication which commits to values uniquely identifying the outer
channel (note that the inner vs outer distinction has flipped
around here!)

 * out-of-band verification, where a channel is authenticated by
communicating values identifying the channel (fingerprint, SAS,
sessionIDs) over some other, authenticated channel (e.g. ZRTP's use of
the signalling channel to protect the media channel).

So I think you're focusing on *modularity* between authentication
methods and the record layer, whereas I think John's getting at
*redundancy*.
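A minimal sketch of the channel-binding idea above (Python stdlib only; all names here are illustrative, not from any of the cited protocols): the unauthenticated-but-encrypted outer channel exports a value uniquely identifying it, such as a hash of its handshake transcript, and an inside-the-channel authentication MACs that value with a key the parties already trust, so a man-in-the-middle running two different outer channels produces a mismatch.

```python
import hashlib
import hmac

def channel_id(transcript: bytes) -> bytes:
    """Unique identifier for the outer channel (cf. tcpcrypt's Session ID):
    a hash over the channel's handshake transcript."""
    return hashlib.sha256(b"channel-binding" + transcript).digest()

def bind(trusted_key: bytes, transcript: bytes) -> bytes:
    """Inner authentication: MAC the outer channel's identifier with a key
    both parties already trust (password-derived, out-of-band, etc.)."""
    return hmac.new(trusted_key, channel_id(transcript), hashlib.sha256).digest()

key = b"out-of-band shared secret"
# Both endpoints of one honest channel see the same transcript: values match.
assert hmac.compare_digest(bind(key, b"handshake-A"), bind(key, b"handshake-A"))
# A MITM necessarily maintains two different outer channels: values differ.
assert bind(key, b"handshake-A") != bind(key, b"handshake-B")
```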


 I think the way that SSL combined transport layer security,
 authorization, and identification was a terrible idea. I (and others)
 have been saying all along that it was a bad idea, and I hope that the
 related security disasters during the last two years have started
 persuading more people to rethink it, too.

This seems like a different thing again.  I agree that TLS could have
been more modular wrt key agreement and public-key authentication.
 It would be nice if the keys necessary to compute a TLS handshake
were part of TLS, instead of requiring X.509 certs.  This would avoid
self-signed certs, and would allow the client to request various
proofs for the server's public key, which could be X.509, other cert
formats, or other info (CT, TACK, DNSSEC, revocation data, etc.).

But this seems like a minor layering flaw, I'm not sure it should be
blamed for any TLS security problems.  The problems with chaining CBC
IVs, plaintext compression, authenticate-then-encrypt, renegotiation,
and a non-working upgrade path aren't solved by better modularity, nor
are they solved by redundancy.  They're solved by making better
choices.


 I guess the designers of
 SSL were simply following the lead of the original inventors of public
 key cryptography, who delegated certain critical unsolved problems to
 an underspecified Trusted Third Party. What a colossal, historic
 mistake.

If you're talking about the New Directions paper, Diffie and Hellman
talk about a "public file".  Certificates were a later idea, due to
Kohnfelder... I'd argue that's where things went wrong...


 1. Go, go, go! The path your thoughts are taking seems fruitful. Just
 design a really good inner layer of crypto, without worrying (for
 now) about the vexing and subtle problems of authorization,
 authentication, naming, Man-In-The-Middle-Attack and so on. For now.

That's easy though, right?  Use a proper KDF from a shared secret, do
authenticated encryption, don't f*ck up the IVs

The worthwhile problems are the hard ones, no? :-)


Trevor


Re: [Cryptography] Key stretching

2013-10-11 Thread Peter Gutmann
Phillip Hallam-Baker hal...@gmail.com writes:

Quick question, anyone got a good scheme for key stretching?

http://lmgtfy.com/?q=hkdf&l=1

Peter :-).
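For completeness, the answer behind the joke link (HKDF, RFC 5869) is short enough to sketch with the Python standard library. This is the extract-then-expand construction with HMAC-SHA256, written for illustration rather than as a vetted implementation:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # PRK = HMAC-Hash(salt, IKM); an absent salt defaults to HashLen zero bytes.
    return hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # OKM = T(1) || T(2) || ...  where T(i) = HMAC(PRK, T(i-1) || info || i)
    okm, t, i = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
        i += 1
    return okm[:length]

# Stretch a weak input into a 256-bit application key.
prk = hkdf_extract(b"salt", b"weak input keying material")
key = hkdf_expand(prk, b"app: encryption key", 32)
assert len(key) == 32
```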


Re: [Cryptography] Iran and murder

2013-10-10 Thread John Kelsey
The problem with offensive cyberwarfare is that, given the imbalance between 
attackers and defenders and the expanding use of computer controls in all sorts 
of systems, a cyber war between two advanced countries will not decide anything 
militarily, but will leave both combatants much poorer than they were 
previously, cause some death and a lot of hardship and bitterness, and leave 
the actual hot war to be fought. 

Imagine a conflict that starts with both countries wrecking a lot of each 
others' infrastructure--causing refineries to burn, factories to wreck 
expensive equipment, nuclear plants to melt down, etc.  A week later, that 
phase of the war is over.  Both countries are, at that point, probably 10-20% 
poorer than they were a week earlier.  Both countries have lots of really 
bitter people out for blood, because someone they care about was killed or 
their job's gone and their house burned down or whatever.  But probably there's 
been little actual degradation of their standard war-fighting ability.  Their 
civilian aviation system may be shut down, some planes may even have been 
crashed, but their bombers and fighters and missiles are mostly still working.  
Fuel and spare parts may be hard to come by, but the military will certainly 
get first pick.  My guess is that what comes next is that the two countries 
have a standard hot war, but with the pleasant addition of a 
great-depression-sized economic collapse for both right in the middle of it.

--John


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread John Kelsey
Just thinking out loud

The administrative complexity of a cryptosystem is overwhelmingly in key 
management and identity management and all the rest of that stuff.  So imagine 
that we have a widely-used inner-level protocol that can use strong crypto, but 
also requires no external key management.  The purpose of the inner protocol is 
to provide a fallback layer of security, so that even an attack on the outer 
protocol (which is allowed to use more complicated key management) is unlikely 
to be able to cause an actual security problem.  On the other hand, in case of 
a problem with the inner protocol, the outer protocol should also provide 
protection against everything.

Without doing any key management or requiring some kind of reliable identity or 
memory of previous sessions, the best we can do in the inner protocol is an 
ephemeral Diffie-Hellman, so suppose we do this:  

a.  Generate random a and send aG on curve P256

b.  Generate random b and send bG on curve P256

c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to 
generate an AES key for messages in each direction.

d.  Each side keeps a sequence number to use as a nonce.  Both sides use 
AES-CCM with their sequence number and their sending key, and keep track of the 
sequence number of the most recent message received from the other side.  

The point is, this is a protocol that happens *inside* the main security 
protocol.  This happens inside TLS or whatever.  An attack on TLS then leads to 
an attack on the whole application only if the TLS attack also lets you do 
man-in-the-middle attacks on the inner protocol, or if it exploits something 
about certificate/identity management done in the higher-level protocol.  
(Ideally, within the inner protocol, you do some checking of the identity using 
a password or shared secret or something, but that's application-level stuff 
the inner and outer protocols don't know about.)
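Steps (a)-(d) can be sketched in a few lines of standard-library Python. Two substitutions are assumptions of this sketch, not part of the proposal: hashlib's SHAKE-256 stands in for "SHAKE512", and the ephemeral P-256 ECDH exchange is stood in for by a shared random value (the stdlib has no ECDH):

```python
import hashlib
import secrets

def derive_directional_keys(shared_point: bytes) -> tuple[bytes, bytes]:
    """Step (c): hash the Diffie-Hellman shared value abG into one AES key
    per direction (SHAKE-256 standing in for the message's SHAKE512)."""
    xof = hashlib.shake_256(shared_point).digest(64)
    return xof[:32], xof[32:]      # initiator->responder, responder->initiator

class Direction:
    """Step (d): each side numbers its messages; the sequence number doubles
    as the CCM nonce, and the receiver only accepts numbers moving forward."""
    def __init__(self) -> None:
        self.send_seq = 0
        self.last_recv_seq = -1

    def next_nonce(self) -> bytes:
        nonce = self.send_seq.to_bytes(12, "big")   # 96-bit CCM nonce
        self.send_seq += 1
        return nonce

    def accept(self, seq: int) -> bool:
        if seq <= self.last_recv_seq:
            return False            # replayed or reordered message: reject
        self.last_recv_seq = seq
        return True

# Steps (a)-(b) would produce abG via ephemeral ECDH on P-256; a random
# stand-in keeps this sketch standard-library-only.
abG = secrets.token_bytes(32)
k_ab, k_ba = derive_directional_keys(abG)
assert k_ab != k_ba and len(k_ab) == 32
d = Direction()
assert d.next_nonce() != d.next_nonce()     # nonces never repeat
assert d.accept(0) and not d.accept(0)      # replay is rejected
```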

Thoughts?

--John


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Bill Frantz

On 10/9/13 at 7:18 PM, crypto@gmail.com (John Kelsey) wrote:

We know how to address one part of this problem--choose only 
algorithms whose design strength is large enough that there's 
not some relatively close by time when the algorithms will need 
to be swapped out.  That's not all that big a problem now--if 
you use, say, AES256 and SHA512 and ECC over P521, then even in 
the far future, your users need only fear cryptanalysis, not 
Moore's Law.  Really, even with 128-bit security level 
primitives, it will be a very long time until the brute-force 
attacks are a concern.


We should try to characterize what "a very long time" is in 
years. :-)



This is actually one thing we're kind-of on the road to doing 
right in standards now--we're moving away from 
barely-strong-enough crypto and toward crypto that's going to 
be strong for a long time to come.


We had barely-strong-enough crypto because we couldn't afford 
the computation time for longer key sizes. I hope things are 
better now, although there may still be a problem for certain 
devices. Let's hope they are only needed in low security/low 
value applications.



Protocol attacks are harder, because while we can choose a key 
length, modulus size, or sponge capacity to support a known 
security level, it's not so easy to make sure that a protocol 
doesn't have some kind of attack in it.
I think we've learned a lot about what can go wrong with 
protocols, and we can design them to be more ironclad than in 
the past, but we still can't guarantee we won't need to 
upgrade.  But I think this is an area that would be interesting 
to explore--what would need to happen in order to get more 
ironclad protocols?  A couple random thoughts:


I fully agree that this is a valuable area to research.



a.  Layering secure protocols on top of one another might 
provide some redundancy, so that a flaw in one didn't undermine 
the security of the whole system.


Defense in depth has been useful from longer ago than the 
Trojans and Greeks.



b.  There are some principles we can apply that will make 
protocols harder to attack, like encrypt-then-MAC (to eliminate 
reaction attacks), nothing is allowed to change its 
execution path or timing based on the key or plaintext, every 
message includes a sequence number and the hash of the previous 
message, etc.  This won't eliminate protocol attacks, but will 
make them less common.


I think that the attacks on MAC-then-encrypt and timing attacks 
were first described within the last 15 years. I think it is 
only normal paranoia to think there may be some more equally 
interesting discoveries in the future.
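The "sequence number and hash of the previous message" principle from point (b) above can be sketched directly (Python, illustrative framing only; a real protocol would also MAC each frame): dropping, reordering, or tampering with any non-final message breaks the chain.

```python
import hashlib

def frame(seq: int, prev_hash: bytes, payload: bytes) -> bytes:
    # Each frame commits to its position (seq) and predecessor (prev_hash).
    return seq.to_bytes(4, "big") + prev_hash + payload

def chain(messages: list[bytes]) -> list[bytes]:
    frames, prev = [], b"\x00" * 32       # fixed genesis value
    for seq, payload in enumerate(messages):
        f = frame(seq, prev, payload)
        frames.append(f)
        prev = hashlib.sha256(f).digest()
    return frames

def verify(frames: list[bytes]) -> bool:
    prev = b"\x00" * 32
    for seq, f in enumerate(frames):
        if f[:4] != seq.to_bytes(4, "big") or f[4:36] != prev:
            return False                  # gap, reorder, or tampering upstream
        prev = hashlib.sha256(f).digest()
    return True

msgs = chain([b"hello", b"world", b"bye"])
assert verify(msgs)
assert not verify(msgs[:1] + msgs[2:])    # a silently dropped message is caught
```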



c.  We could try to treat at least some kinds of protocols more 
like crypto algorithms, and expect to have them widely vetted 
before use.


Most definitely! Lots of eyes. Formal proofs, because they are a 
completely different way of looking at things. Simplicity. All 
will help.




What else?
...
Perhaps the shortest limit on the lifetime of an embedded 
system is the security protocol, and not the hardware. If so, 
how do we as a society deal with this limit?


What we really need is some way to enforce protocol upgrades 
over time.  Ideally, there would be some notion that if you 
support version X of the protocol, this meant that you would 
not support any version lower than, say, X-2.  But I'm not sure 
how practical that is.


This is the direction I'm pushing today. If you look at auto 
racing you will notice that the safety equipment commonly used 
before WW2 is no longer permitted. It is patently unsafe. We 
need to make the same judgements in high security/high risk applications.


Cheers - Bill

---
Bill Frantz        | The nice thing about standards | Periwinkle
(408)356-8506      | is there are so many to choose | 16345 Englewood Ave
www.pwpconsult.com | from.   - Andrew Tanenbaum     | Los Gatos, CA 95032




Re: [Cryptography] Iran and murder

2013-10-10 Thread Lodewijk andré de la porte
2013/10/9 Phillip Hallam-Baker hal...@gmail.com

 I see cyber-sabotage as being similar to use of chemical or biological
 weapons: It is going to be banned because the military consequences fall
 far short of being decisive, are unpredictable and the barriers to entry
 are low.


I doubt that's anywhere near how they'll be treated. Bio and Chem are banned
for their extreme relative effectiveness and far greater cruelty than most
weapons have. Bleeding out is apparently considered quite humane, compared
to choking on foamed-up parts of your own lungs. Cyberwarfare will likely
be effectively counteracted by better security. The more I think, the less I
understand "fall far short of being decisive". If cyber is out, you switch
to old-school tactics. If chemical or biological happens, it's either death
for hundreds or thousands, or nothing happens.

Of course the bigger armies will want to keep it away from the
"terrorists"; it'd level the playing field quite a bit. A 200-losses,
2000-kills battle could turn into 1200 losses, 1700 kills quite fast. But that's
not what I'd call a ban.

Re: [Cryptography] Elliptic curve question

2013-10-10 Thread Lodewijk andré de la porte
2013/10/10 Phillip Hallam-Baker hal...@gmail.com

  The original author was proposing to use the same key for encryption and
 signature which is a rather bad idea.


Explain why, please. It might expand the attack surface, that's true. You
could always add a signed message that says "I used a key named 'Z' for
encryption here". Would that solve the problem?

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Bill Frantz

On 10/9/13 at 7:12 PM, watsonbl...@gmail.com (Watson Ladd) wrote:


On Tue, Oct 8, 2013 at 1:46 PM, Bill Frantz fra...@pwpconsult.com wrote:
... As professionals, we have an obligation to share our 
knowledge of the limits of our technology with the people who 
are depending on it. We know that all crypto standards which 
are 15 years old or older are obsolete, not recommended for 
current use, or outright dangerous. We don't know of any way 
to avoid this problem in the future.


15 years ago is 1998. Diffie-Hellman is much, much older and still
works. Kerberos is of similar vintage. Feige-Fiat-Shamir is from 1988,
Schnorr signatures from 1989.


When I developed the VatTP crypto protocol for the E language 
(www.erights.org) about 15 years ago, key sizes of 1024 bits 
were high security. Now they are seriously questioned. 3DES was 
state of the art. No widely distributed protocols used 
Feige-Fiat-Shamir or Schnorr signatures. Do any now? I stand by 
my statement.




I think the burden of proof is on the people who suggest that 
we only have to do it right the next time and things will be 
perfect. These proofs should address:


New applications of old attacks.
The fact that new attacks continue to be discovered.
The existence of powerful actors subverting standards.
The lack of a "did right" example to point to.


[... long post of problems with TLS, most of which are valid 
criticisms, deleted as not addressing the above questions.]



Protocols involving crypto need to be so damn simple that if it
connects correctly, the chance of a bug is vanishingly small. If we
make a simple protocol, with automated analysis of its security, the
only danger is a primitive failing, in which case we are in trouble
anyway.


I agree with this general direction, but I still don't have the 
warm fuzzies that good answers to the above questions might 
give. I have seen too many projects to do it right that didn't 
pull it off.


See also my response to John Kelsey.

Cheers - Bill

---
Bill Frantz        | Privacy is dead, get over | Periwinkle
(408)356-8506      | it.                       | 16345 Englewood Ave
www.pwpconsult.com |   - Scott McNealy         | Los Gatos, CA 95032




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Peter Gutmann
Watson Ladd watsonbl...@gmail.com writes:

The obvious solution: Do it right the first time.

And how do you know that you're doing it right?  PGP in 1992 adopted a
bleeding-edge cipher (IDEA) and was incredibly lucky that it's stayed secure
since then.  What new cipher introduced up until 1992 has had that
distinction?  "Doing it right the first time" is a bit like the concept of
stopping rules in heuristic decision-making: if they were that easy then
people wouldn't be reading this list but would be in Las Vegas applying the
stopping rule "stop playing just before you start losing".

This is particularly hard in standards-based work because any decision about
security design tends to rapidly degenerate into an argument about whose
fashion statement takes priority.  To get back to an earlier example that I
gave on the list, the trivial and obvious fix to TLS of switching from MAC-
then-encrypt to encrypt-then-MAC is still being blocked by the WG chairs after
nearly a year, despite the fact that a straw poll on the list indicated
general support for it (rough consensus) and implementations supporting it are
already deployed (running code).  So "do it right the first time" is a lot
easier said than done.

Peter.


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Salz, Rich
 TLS was designed to support multiple ciphersuites. Unfortunately this opened 
 the door
 to downgrade attacks, and transitioning to protocol versions that wouldn't do 
 this was nontrivial.
 The ciphersuites included all shared certain misfeatures, leading to the 
 current situation.

On the other hand, negotiation let us deploy it in places where full-strength 
cryptography is/was regulated.

Sometimes half a loaf is better than nothing.

/r$
--  
Principal Security Engineer
Akamai Technology
Cambridge, MA



Re: [Cryptography] Iran and murder

2013-10-10 Thread Lodewijk andré de la porte
2013/10/10 John Kelsey crypto@gmail.com

 The problem with offensive cyberwarfare is that, given the imbalance
 between attackers and defenders and the expanding use of computer controls
 in all sorts of systems, a cyber war between two advanced countries will
 not decide anything militarily, but will leave both combatants much poorer
 than they were previously, cause some death and a lot of hardship and
 bitterness, and leave the actual hot war to be fought.


I think you'd only employ most of the offensive means in harmony with the
start of the hot war. That makes a lot more sense than annoying your
opponent.


 Imagine a conflict that starts with both countries wrecking a lot of each
 others' infrastructure--causing refineries to burn, factories to wreck
 expensive equipment, nuclear plants to melt down, etc.  A week later, that
 phase of the war is over.  Both countries are, at that point, probably
 10-20% poorer than they were a week earlier.


I think this would cause more than 20% damage (esp. the nuclear reactor!).
But I can imagine a slow buildup of disabled things happening.


 Both countries have lots of really bitter people out for blood, because
 someone they care about was killed or their job's gone and their house
 burned down or whatever.  But probably there's been little actual
 degradation of their standard war-fighting ability.  Their civilian
 aviation system may be shut down, some planes may even have been crashed,
 but their bombers and fighters and missiles are mostly still working.  Fuel
 and spare parts may be hard to come by, but the military will certainly get
 first pick.  My guess is that what comes next is that the two countries
 have a standard hot war, but with the pleasant addition of a
 great-depression-sized economic collapse for both right in the middle of it.


This would be a major plus in the eyes of the countries' leaders.
Motivating people for war is the hardest thing about it. I do think the
military relies heavily on electronic tools for coordination. And I think
they have plenty of parts stockpiled for a proper blitzkrieg.

Most of the things you mentioned can be achieved with infiltration and covert
operations, which are far more traditional. And far harder to do at great
scale. But they are not done until there is already a significant blood
thirst.

I'm not sure what'd happen, simply put. But I think it'll become just
another aspect of warfare. It is already another aspect of covert
operations, and we haven't lived through a high-tech vs high-tech war. And if it
does happen, the chance we live to talk about it is less than I'd like.

You pose an interesting notion about the excessiveness of causing a great
depression before the first bullets fly. I counter that with the effects of
conventional warfare being more excessively destructive.

Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread John Kelsey
Having a public bulletin board of posted emails, plus a protocol for 
anonymously finding the ones your key can decrypt, seems like a pretty decent 
architecture for prism-proof email.  The tricky bit of crypto is in making 
access to the bulletin board both efficient and private.  

--John


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Stephen Farrell


 On 10 Oct 2013, at 17:06, John Kelsey crypto@gmail.com wrote:
 
 Just thinking out loud
 
 The administrative complexity of a cryptosystem is overwhelmingly in key 
 management and identity management and all the rest of that stuff.  So 
 imagine that we have a widely-used inner-level protocol that can use strong 
 crypto, but also requires no external key management.  The purpose of the 
 inner protocol is to provide a fallback layer of security, so that even an 
 attack on the outer protocol (which is allowed to use more complicated key 
 management) is unlikely to be able to cause an actual security problem.  On 
 the other hand, in case of a problem with the inner protocol, the outer 
 protocol should also provide protection against everything.
 
 Without doing any key management or requiring some kind of reliable identity 
 or memory of previous sessions, the best we can do in the inner protocol is 
 an ephemeral Diffie-Hellman, so suppose we do this:  
 
 a.  Generate random a and send aG on curve P256
 
 b.  Generate random b and send bG on curve P256
 
 c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to 
 generate an AES key for messages in each direction.
 
 d.  Each side keeps a sequence number to use as a nonce.  Both sides use 
 AES-CCM with their sequence number and their sending key, and keep track of 
 the sequence number of the most recent message received from the other side.  
 
 The point is, this is a protocol that happens *inside* the main security 
 protocol.  This happens inside TLS or whatever.  An attack on TLS then leads 
 to an attack on the whole application only if the TLS attack also lets you do 
 man-in-the-middle attacks on the inner protocol, or if it exploits something 
 about certificate/identity management done in the higher-level protocol.  
 (Ideally, within the inner protcol, you do some checking of the identity 
 using a password or shared secret or something, but that's application-level 
 stuff the inner and outer protocols don't know about.  
 
 Thoughts?


Suggest it on the tls wg list as a feature of 1.3?

S

 
 --John


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread Salz, Rich
 The simple(-minded) idea is that everybody receives everybody's email, but 
 can only read their own.  Since everybody gets everything, the metadata is 
 uninteresting and traffic analysis is largely fruitless.

Some traffic analysis is still possible based on just the message originator.  If I 
see a message from A, and then soon see messages from B and C, then I can 
perhaps assume they are collaborating.  If A's message is significantly 
larger than the other two, then perhaps they're taking some kind of vote.

So while it's a neat hack, I think the claims are overstated.

/r$
 
--  
Principal Security Engineer
Akamai Technology
Cambridge, MA


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread Jerry Leichter
On Oct 10, 2013, at 11:58 AM, R. Hirschfeld r...@unipay.nl wrote:
 Very silly but trivial to implement so I went ahead and did so:
 
 To send a prism-proof email, encrypt it for your recipient and send it
 to irrefrangi...@mail.unipay.nl
Nice!  I like it.

A couple of comments:

1.  Obviously, this has scaling problems.  The interesting question is how to 
extend it while retaining the good properties.  If participants are willing to 
be identified to within 1/k of all the users of the system (a set which will 
itself remain hidden by the system), choosing one of k servers based on a hash 
of the recipient would work.  (A concerned recipient could, of course, check 
servers that he knows can't possibly have his mail.)  Can one do better?

2.  The system provides complete security for recipients (all you can tell 
about a recipient is that he can potentially receive messages - though the 
design has to be careful so that a recipient doesn't, for example, release 
timing information depending on whether his decryption succeeded or not).  
However, the protection is more limited for senders.  A sender can hide its 
activity by simply sending random messages, which of course no one will ever 
be able to decrypt.  Of course, that adds yet more load to the entire system.

3.  Since there's no acknowledgement when a message is picked up, the number of 
messages in the system grows without bound.  As you suggest, the service will 
have to throw out messages after some time - but that's a blind process which 
may discard a message a slow receiver hasn't had a chance to pick up while 
keeping one that was picked up a long time ago.  One way around this, for 
cooperative senders:  When creating a message, the sender selects a random R 
and appends tag Hash(R).  Anyone may later send a "you may delete message R" 
message.  The server computes Hash(R), finds any message with that tag, and 
discards it.  (It will still want to delete messages that are old, but it may 
be able to define "old" as a larger value if enough of the senders are 
cooperative.)
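The delete-by-preimage tag described above can be sketched directly (Python stdlib; `BulletinBoard` and its method names are hypothetical): the public tag is Hash(R), and only a party that knows R can later authorize deletion by revealing it.

```python
import hashlib
import secrets

class BulletinBoard:
    def __init__(self) -> None:
        self.messages = {}                    # tag -> ciphertext

    def post(self, ciphertext: bytes) -> bytes:
        r = secrets.token_bytes(16)           # sender keeps R private
        tag = hashlib.sha256(r).digest()      # observers see only Hash(R)
        self.messages[tag] = ciphertext
        return r

    def delete(self, r: bytes) -> bool:
        # A "you may delete message R" message: reveal R; the server matches
        # its hash against stored tags and discards on success.
        tag = hashlib.sha256(r).digest()
        return self.messages.pop(tag, None) is not None

board = BulletinBoard()
r = board.post(b"encrypted blob")
assert not board.delete(secrets.token_bytes(16))   # wrong R: nothing deleted
assert board.delete(r)                             # correct preimage deletes it
```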

Since an observer can already tell who created the message with tag H(R), it 
would normally be the original sender who deletes his messages.  Perhaps he 
knows they are no longer important; or perhaps he received an application-level 
acknowledgement message from the recipient.
-- Jerry



Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread arxlight
Cool.

Drop me a note if you want hosting (gratis) for this.

On 10/10/13 10:22 PM, Jerry Leichter wrote:
 On Oct 10, 2013, at 11:58 AM, R. Hirschfeld r...@unipay.nl
 wrote:
 Very silly but trivial to implement so I went ahead and did
 so:
 
 To send a prism-proof email, encrypt it for your recipient and
 send it to irrefrangi...@mail.unipay.nl
 Nice!  I like it.
 
 [...]



Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread lists
 Having a public bulletin board of posted emails, plus a protocol
 for anonymously finding the ones your key can decrypt, seems
 like a pretty decent architecture for prism-proof email.
 The tricky bit of crypto is in making access to the bulletin
 board both efficient and private.

This idea has been around for a while but not built AFAIK.
http://petworkshop.org/2003/slides/talks/stef/pet2003/Lucky_Green_Anonmail_PET_2003.ppt


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread John Kelsey
More random thoughts:

The minimal inner protocol would be something like this:

Using AES-CCM with a tag size of 32 bits, IVs constructed based on an implicit 
counter, and an AES-CMAC-based KDF, we do the following:

Sender: 
a.  Generate random 128 bit value R
b.  Use the KDF to compute K[S],N[S],K[R],N[R] = KDF(R, 128+96+128+96)
c.  Sender's 32-bit unsigned counter C[S] starts at 0.
d.  Compute IV[S,0] = 96 bits of binary 0s||C[S]
e.  Send R, CCM(K[S],N[S],IV[S,0],sender_message[0])

Receiver:
a.  Receive R and derive K[S],N[S],K[R],N[R] from it as above.
b.  Set Receiver's counter C[R] = 0.
c.  Compute IV[R,0] = 96 bits of binary 0s||C[R]
d.  Send CCM(K[R],N[R],IV[R,0],receiver_message[0])

and so on.  
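To make the key/IV bookkeeping concrete, here is a sketch of the derivation step. Two assumptions, not Kelsey's exact construction: HMAC-SHA256 stands in for the AES-CMAC KDF, and the 96-bit IV is read as 64 zero bits followed by the 32-bit counter:

```python
import hashlib
import hmac
import struct

def kdf(r: bytes, nbytes: int) -> bytes:
    # Counter-mode expansion of the 128-bit seed R.
    out = b""
    i = 0
    while len(out) < nbytes:
        out += hmac.new(r, struct.pack(">I", i), hashlib.sha256).digest()
        i += 1
    return out[:nbytes]

def derive_session(r: bytes):
    # Split KDF output into K[S], N[S], K[R], N[R]: 128+96+128+96 bits.
    okm = kdf(r, (128 + 96 + 128 + 96) // 8)
    ks, ns = okm[:16], okm[16:28]
    kr, nr = okm[28:44], okm[44:56]
    return ks, ns, kr, nr

def iv(counter: int) -> bytes:
    # 96-bit IV: 64 zero bits || 32-bit unsigned message counter.
    return b"\x00" * 8 + struct.pack(">I", counter)
```

Both ends derive identical key material from R, so nothing but R (and per-message ciphertext) ever crosses the wire.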

Note that in this protocol, we never send a key or IV or nonce.  The total 
communications overhead of the inner protocol is an extra 160 bits in the first 
message and an extra 32 bits thereafter.  We're assuming the outer protocol is 
taking care of message ordering and guaranteed delivery--otherwise, we need to 
do something more complicated involving replay windows and such, and probably 
have to send along the message counters.  

This doesn't provide a huge amount of extra protection--if the attacker can 
recover more than a very small number of bits from the first message (attacking 
through the outer protocol), then the security of this protocol falls apart.  
But it does give us a bare-minimum-cost inner layer of defenses, inside TLS or 
SSH or whatever other thing we're doing.  

Both this and the previous protocol I sketched have the property that they 
expect to be able to generate random numbers.  There's a problem there, 
though--if the system RNG is weak or trapdoored, it could compromise both the 
inner and outer protocol at the same time.  

One way around this is to have each endpoint that uses the inner protocol 
generate its own internal secret AES key, Q[i].  Then, when it's time to 
generate a random value, the endpoint asks the system RNG for a random number 
X, and computes E_Q(X).  If the attacker knows Q but the system RNG is secure, 
we're fine.  Similarly, if the attacker can predict X but doesn't know Q, we're 
fine.  Even when the attacker can choose the value of X, he can really only 
force the random value in the beginning of the protocol to repeat.  In this 
protocol, that doesn't do much harm.  
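A sketch of that hardening trick, with HMAC-SHA256 standing in for the AES permutation E_Q (Q being the endpoint's internal secret):

```python
import hashlib
import hmac
import os

class HardenedRNG:
    # Mix system randomness with a per-endpoint secret Q so that a weak
    # or trapdoored system RNG alone, or a leaked Q alone, is not enough
    # to predict outputs.
    def __init__(self):
        self.q = os.urandom(32)   # internal secret, generated once

    def random128(self) -> bytes:
        x = os.urandom(16)        # ask the (possibly weak) system RNG
        return hmac.new(self.q, x, hashlib.sha256).digest()[:16]
```

An attacker who fully controls X can at worst force repeats of the output, which in this protocol only affects the current run.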

The same idea works for the ECDH protocol I sketched earlier.  I request two 
128 bit random values from the system RNG, X, X'.  I then use E_Q(X)||E_Q(X') 
as my ephemeral DH private key. If an attacker knows Q but the system RNG is 
secure, then we get an unpredictable value for the ECDH key agreement.  If an 
attacker knows X,X' but doesn't know Q, he doesn't know what my ECDH ephemeral 
private key is.  If he forces it to a repeated value, he still doesn't weaken 
anything except this run of the protocol--no long-term secret is leaked if AES 
isn't broken.  

This is subject to endless tweaking and improvement.  But the basic idea seems 
really valuable:  

a.  Design an inner protocol, whose job is to provide redundancy in security 
against attacks on the outer protocol.

b.  The inner protocol should be:

(i)  As cheap as possible in bandwidth and computational terms.

(ii) Flexible enough to be used extremely widely, implemented in most places, 
etc.  

(iii) Administratively free, adding no key management or related burdens.

(iv) Free from revisions or updates, because the whole point of the inner 
protocol is to provide redundant security.  (That's part of administratively 
free.)  

(v)  There should be one or at most two versions (maybe something like the two 
I've sketched, but better thought out and analyzed).

c.  As much as possible, we want the security of the inner protocol to be 
independent of the security of the outer protocol.  (And we want this without 
wanting to know exactly what the outer protocol will look like.)  This means:

(i)  No shared keys or key material or identity strings or anything.

(ii) The inner protocol can't rely on the RNG being good.

(iii) Ideally, the crypto algorithms would be different, though that may impose 
too high a cost.  At least, we want as many of the likely failure modes to be 
different.  

Comments?  I'm not all that concerned with the protocol being perfect, but what 
do you think of the idea of doing this as a way to add redundant security 
against protocol attacks?  

--John



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Richard Outerbridge
On 2013-10-10 (283), at 15:29:33, Stephen Farrell stephen.farr...@cs.tcd.ie 
wrote:

 On 10 Oct 2013, at 17:06, John Kelsey crypto@gmail.com wrote:
 
 Just thinking out loud
 

[]

 c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to 
 generate an AES key for messages in each direction.

How does this prevent MITM?  Where does G come from?

I'm also leery of using literally the same key in both directions.  Maybe a 
simple transform would suffice; maybe not.

 d.  Each side keeps a sequence number to use as a nonce.  Both sides use 
 AES-CCM with their sequence number and their sending key, and keep track of 
 the sequence number of the most recent message received from the other side. 

If the same key is used, there needs to be a simple way of ensuring the 
sequence numbers can never overlap each other.
__outer





Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread Ray Dillinger
On 10/10/2013 12:54 PM, John Kelsey wrote:
 Having a public bulletin board of posted emails, plus a protocol 
 for anonymously finding the ones your key can decrypt, seems 
 like a pretty decent architecture for prism-proof email.  The 
 tricky bit of crypto is in making access to the bulletin board 
 both efficient and private.  

Wrong on both counts, I think.  If you make access private, you
generate metadata because nobody can get at mail other than their
own.  If you make access efficient, you generate metadata because
you're avoiding the wasted bandwidth that would otherwise prevent
the generation of metadata. Encryption is sufficient privacy, and
efficiency actively works against the purpose of privacy.

The only bow I'd make to efficiency is to split the message stream
into channels when it gets to be more than, say, 2GB per day. At
that point you would need to know both what channel your recipient
listens to *and* the appropriate encryption key before you could
send mail.

Bear






Re: [Cryptography] PGP Key Signing parties

2013-10-10 Thread John Gilmore
 Does PGP have any particular support for key signing parties built in or is
 this just something that has grown up as a practice of use?

It's just a practice.  I agree that building a small amount of automation
for key signing parties would improve the web of trust.

I have started on a prototype that would automate small key signing
parties (as small as 2 people, as large as a few dozen) where everyone
present has a computer or phone that is on the same wired or wireless
LAN.

 I am specifically thinking of ways that key signing parties might be made
 scalable so that it was possible for hundreds of thousands of people...

An important user experience point is that we should be teaching GPG
users to only sign the keys of people who they personally know.
Having a signature that says, "This person attended the RSA conference
in October 2013," is not particularly useful.  (Such a signature could
be generated by the conference organizers themselves, if they wanted
to.)  Since the conference organizers -- and most other attendees --
don't know what an attendee's real identity is, their signature on
that identity is worthless anyway.

So, if I participate in a key signing party with a dozen people, but I
only personally know four of them, I will only sign the keys of those
four.  I may have learned a public key for each of the dozen, but that
is separate from me signing those keys.  Signing them would assert to
any stranger that I know that this key belongs to this identity, which
would be false and would undermine the strength of the web of trust.

John




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread John Kelsey
On Oct 10, 2013, at 5:15 PM, Richard Outerbridge ou...@sympatico.ca wrote:
 
 How does this prevent MITM?  Where does G come from?

I'm assuming G is a systemwide shared parameter.  It doesn't prevent 
mitm--remember the idea here is to make a fairly lightweight protocol to run 
*inside* another crypto protocol like TLS.  The inner protocol mustn't add 
administrative requirements to the application, which means it can't need key 
management from some administrator or something.  The goal is to have an inner 
protocol which can run inside TLS or some similar thing, and which adds a layer 
of added security without the application getting more complicated by needing 
to worry about more keys or certificates or whatever.  

Suppose we have this inner protocol running inside a TLS version that is 
subject to one of the CBC padding reaction attacks.  The inner protocol 
completely blocks that.  

 I'm also leery of using literally the same key in both directions.  Maybe a 
 simple transform would suffice; maybe not.

I probably wasn't clear in my writeup, but my idea was to have different keys 
in different directions--there is a NIST KDF that uses only AES as its crypto 
engine, so this is relatively easy to do using standard components.  

--John


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread John Kelsey
On Oct 10, 2013, at 5:20 PM, Ray Dillinger b...@sonic.net wrote:

 On 10/10/2013 12:54 PM, John Kelsey wrote:
 Having a public bulletin board of posted emails, plus a protocol 
 for anonymously finding the ones your key can decrypt, seems 
 like a pretty decent architecture for prism-proof email.  The 
 tricky bit of crypto is in making access to the bulletin board 
 both efficient and private.  
 
 Wrong on both counts, I think.  If you make access private, you
 generate metadata because nobody can get at mail other than their
 own.  If you make access efficient, you generate metadata because
 you're avoiding the wasted bandwidth that would otherwise prevent
 the generation of metadata. Encryption is sufficient privacy, and
 efficiency actively works against the purpose of privacy.

So the original idea was to send a copy of all the emails to everyone.  What 
I'm wanting to figure out is if there is a way to do this more efficiently, 
using a public bulletin board like scheme.  The goal here would be:

a.  Anyone in the system can add an email to the bulletin board, which I am 
assuming is public and cryptographically protected (using a hash chain to make 
it impossible for even the owner of the bulletin board to alter things once 
published).

b.  Anyone can run a protocol with the bulletin board which results in them 
getting only the encrypted emails addressed to them, and prevents the bulletin 
board operator from finding out which emails they got.

This sounds like something that some clever crypto protocol could do.  (It's 
related to the idea of searching on encrypted data.). And it would make an 
email system that was really resistant to tracing users.  
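The append-only property in (a) is just a hash chain; a toy version (SHA-256 assumed as the hash):

```python
import hashlib

class BulletinBoard:
    # Append-only public board: each entry commits to the previous
    # head, so the operator cannot rewrite history once a head hash
    # has been published or witnessed by clients.
    def __init__(self):
        self.head = b"\x00" * 32
        self.entries = []

    def post(self, encrypted_email: bytes) -> bytes:
        self.head = hashlib.sha256(self.head + encrypted_email).digest()
        self.entries.append((encrypted_email, self.head))
        return self.head  # clients remember this to detect tampering

    def verify(self) -> bool:
        # Recompute the chain from genesis; any altered entry breaks it.
        h = b"\x00" * 32
        for msg, recorded in self.entries:
            h = hashlib.sha256(h + msg).digest()
            if h != recorded:
                return False
        return True
```

Part (b), private retrieval, is the genuinely hard piece and would need something like a PIR protocol; the chain only handles integrity.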


--John


Re: [Cryptography] PGP Key Signing parties

2013-10-10 Thread Glenn Willen
John,

On Oct 10, 2013, at 2:31 PM, John Gilmore wrote:
 
 An important user experience point is that we should be teaching GPG
 users to only sign the keys of people who they personally know.
 Having a signature that says, This person attended the RSA conference
 in October 2013 is not particularly useful.  (Such a signature could
 be generated by the conference organizers themselves, if they wanted
 to.)  Since the conference organizers -- and most other attendees --
 don't know what an attendee's real identity is, their signature on
 that identity is worthless anyway.
 
 So, if I participate in a key signing party with a dozen people, but I
 only personally know four of them, I will only sign the keys of those
 four.  I may have learned a public key for each of the dozen, but that
 is separate from me signing those keys.  Signing them would assert to
 any stranger that I know that this key belongs to this identity, which
 would be false and would undermine the strength of the web of trust.

I am going to be interested to hear what the rest of the list says about this, 
because this definitely contradicts what has been presented to me as 'standard 
practice' for PGP use -- verifying identity using government issued ID, and 
completely ignoring personal knowledge.

Do you have any insight into what proportion of PGP/GPG users mean their 
signatures as personal knowledge (my preference and evidently yours), versus 
government ID (my perception of the community standard best practice), 
versus no verification in particular (my perception of the actual common 
practice in many cases)?

(In my ideal world, we'd have a machine-readable way of indicating what sort of 
verification was performed. Signing policies, not being machine readable or 
widely used, don't cover this well. There is space for key-value annotations in 
signature packets, which could help with this if we standardized on some.)

Glenn Willen


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread John Denker
On 10/10/2013 02:20 PM, Ray Dillinger wrote:

 split the message stream
 into channels when it gets to be more than, say, 2GB per day.

That's fine, in the case where the traffic is heavy.

We should also discuss the opposite case:

*) If the traffic is light, the servers should generate cover traffic.

*) Each server should publish a public key for /dev/null so that
 users can send cover traffic upstream to the server, without
 worrying that it might waste downstream bandwidth.

 This is crucial for deniabililty:  If the rubber-hose guy accuses
 me of replying to ABC during the XYZ crisis, I can just shrug and 
 say it was cover traffic.


Also:

*) Messages should be sent in standard-sized packets, so that the
 message-length doesn't give away the game.

*) If large messages are common, it might help to have two streams:
 -- the pointer stream, and
 -- the bulk stream.

It would be necessary to do a trial-decode on every message in the
pointer stream, but when that succeeds, it yields a pilot message
containing the fingerprints of the packets that should be pulled 
out of the bulk stream.  The first few bytes of the packet should 
be a sufficient fingerprint.  This reduces the number of trial-
decryptions by a factor of roughly sizeof(message) / sizeof(packet).
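A sketch of the pointer/bulk split, with 8-byte fingerprints and a plaintext pilot for illustration (in the real design the pilot is what a successful trial-decryption of a pointer-stream message yields):

```python
def fingerprint(packet: bytes) -> bytes:
    # The first few bytes of a (random-looking, encrypted) packet
    # serve as its fingerprint.
    return packet[:8]

def make_pilot(packets):
    # Pilot message: the fingerprints of the bulk packets to pull.
    return b"".join(fingerprint(p) for p in packets)

def fetch(bulk_stream, pilot):
    # Recipient scans the bulk stream for matching fingerprints.
    wanted = {pilot[i:i + 8] for i in range(0, len(pilot), 8)}
    return [p for p in bulk_stream if fingerprint(p) in wanted]
```

The recipient now trial-decrypts only the small pointer stream, then does cheap fingerprint matching on the bulk stream, giving the claimed sizeof(message)/sizeof(packet) saving.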


From the keen-grasp-of-the-obvious department:

*) Forward Secrecy is important here.



Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread grarpamp
On Thu, Oct 10, 2013 at 11:58 AM, R. Hirschfeld r...@unipay.nl wrote:
 To send a prism-proof email, encrypt it for your recipient and send it
 to irrefrangi...@mail.unipay.nl.  Don't include any information about

 To receive prism-proof email, subscribe to the irrefrangible mailing
 list at http://mail.unipay.nl/mailman/listinfo/irrefrangible/.  Use a

This is the same as NNTP, but worse in that it's not distributed.


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread Lars Luthman
On Thu, 2013-10-10 at 14:20 -0700, Ray Dillinger wrote: 
 Wrong on both counts, I think.  If you make access private, you
 generate metadata because nobody can get at mail other than their
 own.  If you make access efficient, you generate metadata because
 you're avoiding the wasted bandwidth that would otherwise prevent
 the generation of metadata. Encryption is sufficient privacy, and
 efficiency actively works against the purpose of privacy.
 
 The only bow I'd make to efficiency is to split the message stream
 into channels when it gets to be more than, say, 2GB per day. At
 that point you would need to know both what channel your recipient
 listens to *and* the appropriate encryption key before you could
 send mail.

This is starting to sound a lot like Bitmessage, doesn't it? A central
message stream that is split into a tree of streams when it gets too
busy and everyone tries to decrypt every message in their stream to see
if they are the recipient. In the case of BM the stream is distributed
in a P2P network, the stream of an address is found by walking the tree,
and you need a hash collision proof-of-work in order for other peers to
accept your sent messages. The P2P aspect and the proof-of-work
(according to the whitepaper[1] it should represent 4 minutes of work on
an average computer) probably makes it less attractive for mobile
devices though.

[1] https://bitmessage.org/bitmessage.pdf
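For reference, the proof-of-work is a hashcash-style partial-preimage search. A simplified sketch (SHA-256 and a fixed difficulty are assumptions here; Bitmessage's actual formula uses double-SHA512 with a target scaled by payload length and TTL):

```python
import hashlib

def proof_of_work(message: bytes, difficulty_bits: int = 16) -> int:
    # Find a nonce such that SHA-256(nonce || message), read as an
    # integer, falls below a target with difficulty_bits leading zeros.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(nonce.to_bytes(8, "big") + message).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

def check(message: bytes, nonce: int, difficulty_bits: int = 16) -> bool:
    # Verification is a single hash, however long the search took.
    h = hashlib.sha256(nonce.to_bytes(8, "big") + message).digest()
    return int.from_bytes(h, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry (expensive to produce, one hash to verify) is what lets peers cheaply reject spam floods, and is also why it hurts on mobile devices.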


--ll



Re: [Cryptography] PGP Key Signing parties

2013-10-10 Thread Paul Hoffman
On Oct 10, 2013, at 2:31 PM, John Gilmore g...@toad.com wrote:

 Does PGP have any particular support for key signing parties built in or is
 this just something that has grown up as a practice of use?
 
 It's just a practice.  I agree that building a small amount of automation
 for key signing parties would improve the web of trust.
 
 I have started on a prototype that would automate small key signing
 parties (as small as 2 people, as large as a few dozen) where everyone
 present has a computer or phone that is on the same wired or wireless
 LAN.

Phil Zimmerman and Jon Callas had started to work on that around 1998, they 
might still have some of that design around.

--Paul Hoffman



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Trevor Perrin
On Thu, Oct 10, 2013 at 3:32 PM, John Kelsey crypto@gmail.com wrote:
  The goal is to have an inner protocol which can run inside TLS or some 
 similar thing
[...]

 Suppose we have this inner protocol running inside a TLS version that is 
 subject to one of the CBC padding reaction attacks.  The inner protocol 
 completely blocks that.

If you can design an inner protocol to resist such attacks - which
you can, easily - why wouldn't you just design the outer protocol
the same way?


Trevor


Re: [Cryptography] Other Backdoors?

2013-10-10 Thread David Mercer
On Thursday, October 10, 2013, Phillip Hallam-Baker wrote:


 [Can't link to FIPS180-4 right now as it's down]


For the lazy among us, including my future self, a shutdown-proof url to
the archive.org copy of the NIST FIPS 180-4 pdf:
 http://tinyurl.com/FIPS180-4

-David Mercer




-- 
David Mercer - http://dmercer.tumblr.com
IM:  AIM: MathHippy Yahoo/MSN: n0tmusic
Facebook/Twitter/Google+/Linkedin: radix42
FAX: +1-801-877-4351 - BlackBerry PIN: 332004F7
PGP Public Key: http://davidmercer.nfshost.com/radix42.pubkey.txt
Fingerprint: A24F 5816 2B08 5B37 5096  9F52 B182 3349 0F23 225B

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread David Mercer
On Thursday, October 10, 2013, Salz, Rich wrote:

  TLS was designed to support multiple ciphersuites. Unfortunately this
 opened the door
  to downgrade attacks, and transitioning to protocol versions that
 wouldn't do this was nontrivial.
  The ciphersuites included all shared certain misfeatures, leading to the
 current situation.

 On the other hand, negotiation let us deploy it in places where
 full-strength cryptography is/was regulated.

 Sometimes half a loaf is better than nothing.


 The last time various SSL/TLS ciphersuites needed to be removed from
webserver configurations when I managed a datacenter some years ago led to
the following 'failure modes', either from the user's browser now warning
or refusing to connect to a server using an insecure cipher suite, or when
the only cipher suites used by a server weren't supported by an old browser
(or both at once):

1) for sites that had low barriers to switching, loss of traffic/customers
to sites that didn't drop the insecure ciphersuites

2) for sites that are harder to leave (your bank, google/facebook level
sticky public ones [less common]), large increases in calls to support,
with large costs for the business. Non-PCI compliant businesses taking CC
payments are generally so insecure that customers who fled to them really
are upping their chances of suffering fraud.

In both cases you have a net decrease of security and an increase of fraud
and financial loss.

So in some cases anything less than a whole loaf, which you can't guarantee
for N years of time, isn't 'good enough.' In other words, we are screwed no
matter what.

-David Mercer



-- 
David Mercer - http://dmercer.tumblr.com
IM:  AIM: MathHippy Yahoo/MSN: n0tmusic
Facebook/Twitter/Google+/Linkedin: radix42
FAX: +1-801-877-4351 - BlackBerry PIN: 332004F7
PGP Public Key: http://davidmercer.nfshost.com/radix42.pubkey.txt
Fingerprint: A24F 5816 2B08 5B37 5096  9F52 B182 3349 0F23 225B

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-09 Thread Watson Ladd
On Tue, Oct 8, 2013 at 7:38 AM, Jerry Leichter leich...@lrw.com wrote:

 On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote:
  If we can't select ciphersuites that we are sure we will always be
 comfortable with (for at least some forseeable lifetime) then we urgently
 need the ability to *stop* using them at some point.  The examples of MD5
 and RC4 make that pretty clear.
  Ceasing to use one particular encryption algorithm in something like
 SSL/TLS should be the easiest case--we don't have to worry about old
 signatures/certificates using the outdated algorithm or anything.  And yet
 we can't reliably do even that.
 
  We seriously need to consider what the design lifespan of our crypto
 suites is in real life. That data should be communicated to hardware and
 software designers so they know what kind of update schedule needs to be
 supported. Users of the resulting systems need to know that the crypto
 standards have a limited life so they can include update in their
 installation planning.
 This would make a great April Fool's RFC, to go along with the classic
 "evil bit".  :-(

 There are embedded systems that are impractical to update and have
 expected lifetimes measured in decades.  RFID chips include cryptography,
 are completely un-updatable, and have no real limit on their lifetimes -
 the percentage of the population represented by any given vintage of
 chips will drop continuously, but it will never go to zero.  We are rapidly
 entering a world in which devices with similar characteristics will, in
 sheer numbers, dominate the ecosystem - see the remote-controllable
 Phillips Hue light bulbs (
 http://www.amazon.com/dp/B00BSN8DLG/?tag=googhydr-20hvadid=27479755997hvpos=1t1hvexid=hvnetw=ghvrand=1430995233802883962hvpone=hvptwo=hvqmt=bhvdev=cref=pd_sl_5exklwv4ax_b)
 as an early example.  (Oh, and there's been an attack against them:
 http://www.engadget.com/2013/08/14/philips-hue-smart-light-security-issues/.
  The response from Phillips to that article says In developing Hue we have
 used industry standard encryption and authentication techniques ...
 [O]ur main advice to customers is that they take steps to
 ensure they are secured from malicious attacks at a network level.

 The obvious solution: Do it right the first time. Many of the TLS issues
we are dealing with today were known at the time the standard was being
developed. RFID usually isn't that security critical: if a shirt insists
it's an ice cream, a human will usually be around to see that it is a shirt.
AES will last forever, barring cryptanalytic advances. Quantum
computers will doom ECC, but in the meantime we are good.

Cryptography in the two parties authenticating and communicating is a
solved problem. What isn't solved, and behind many of these issues is 1)
getting the standard committees up to speed and 2) deployment/PKI issues.


 I'm afraid the reality is that we have to design for a world in which some
 devices will be running very old versions of code, speaking only very old
 versions of protocols, pretty much forever.  In such a world, newer devices
 either need to shield their older brethren from the sad realities or
 relegate them to low-risk activities by refusing to engage in high-risk
 transactions with them.  It's by no means clear how one would do this, but
 there really aren't any other realistic alternatives.

Great big warning lights saying "Insecure device! Do not trust!".  If Wells
Fargo customers got a "Warning: This site is using outdated security" when
visiting it on all browsers, they would fix that F5 terminator currently
stopping the rest of us from deploying various TLS extensions.

 -- Jerry





-- 
Those who would give up Essential Liberty to purchase a little Temporary
Safety deserve neither  Liberty nor Safety.
-- Benjamin Franklin

Re: [Cryptography] Iran and murder

2013-10-09 Thread Phillip Hallam-Baker
On Wed, Oct 9, 2013 at 12:44 AM, Tim Newsham tim.news...@gmail.com wrote:

  We are more vulnerable to widespread acceptance of these bad principles
 than
  almost anyone, ultimately,  But doing all these things has won larger
 budgets
  and temporary successes for specific people and agencies today, whereas
  the costs of all this will land on us all in the future.

 The same could be (and has been) said about offensive cyber warfare.


I said the same thing in the launch issue of cyber-defense. Unfortunately
the editor took it into his head to conflate inventing the HTTP referer
field etc. with rather more and so I can't point people at the article as
they refuse to correct it.


I see cyber-sabotage as being similar to use of chemical or biological
weapons: It is going to be banned because the military consequences fall
far short of being decisive, are unpredictable and the barriers to entry
are low.

STUXNET has been relaunched with different payloads countless times. So we
are throwing stones the other side can throw back with greater force.


We have a big problem in crypto because we cannot now be sure that the help
received from the US government in the past has been well intentioned or
not. And so a great deal of time is being wasted right now (though we will
waste orders of magnitude more of their time).

At the moment we have a bunch of generals and contractors telling us that
we must spend billions on the ability to attack China's power system in
case they attack ours. If we accept that project then we can't share
technology that might help them defend their power system which cripples
our ability to defend our own.

So a purely hypothetical attack promoted for the personal enrichment of a
few makes us less secure, not safer. And the power systems are open to
attack by sufficiently motivated individuals.


The sophistication of STUXNET lay in its ability to discriminate the
intended target from others. The opponents we face simply don't care about
collateral damage.  So I am not impressed by people boasting about the
ability of some country (not an ally of my country, BTW) to perform targeted
murder: that boasting overlooks the fact that the other side can, and likely
will, retaliate with indiscriminate murder in return.

I bet people are less fond of drones when they start to realize other
countries have them as well.


Lets just stick to defense and make the NATO civilian infrastructure secure
against cyber attack regardless of what making that technology public might
do for what some people insist we should consider enemies.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Elliptic curve question

2013-10-09 Thread James A. Donald

On 2013-10-08 03:14, Phillip Hallam-Baker wrote:


Are you planning to publish your signing key or your decryption key?

Use of a key for one makes the other incompatible.


Incorrect.  One's public key is always an elliptic point, one's private 
key is always a number.


Thus there is no reason in principle why one cannot use the same key (a 
number) for signing the messages you send, and decrypting the messages 
you receive.




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-09 Thread Bill Frantz

On 10/8/13 at 7:38 AM, leich...@lrw.com (Jerry Leichter) wrote:


On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote:


We seriously need to consider what the design lifespan of our 
crypto suites is in real life. That data should be 
communicated to hardware and software designers so they know 
what kind of update schedule needs to be supported. Users of 
the resulting systems need to know that the crypto standards 
have a limited life so they can include update in their 
installation planning.



This would make a great April Fool's RFC, to go along with the classic evil 
bit.  :-(


I think the situation is much more serious than this comment 
makes it appear. As professionals, we have an obligation to 
share our knowledge of the limits of our technology with the 
people who are depending on it. We know that all crypto 
standards which are 15 years old or older are obsolete, not 
recommended for current use, or outright dangerous. We don't 
know of any way to avoid this problem in the future.


I think the burden of proof is on the people who suggest that we 
only have to do it right the next time and things will be 
perfect. These proofs should address:


New applications of old attacks.
The fact that new attacks continue to be discovered.
The existence of powerful actors subverting standards.
The lack of a "did it right" example to point to.


There are embedded systems that are impractical to update and 
have expected lifetimes measured in decades...
Many perfectly good PC's will stay on XP forever because even 
if there was the will and staff to upgrade, recent versions of 
Windows won't run on their hardware.

...
I'm afraid the reality is that we have to design for a world in 
which some devices will be running very old versions of code, 
speaking only very old versions of protocols, pretty much 
forever.  In such a world, newer devices either need to shield 
their older brethren from the sad realities or relegate them to 
low-risk activities by refusing to engage in high-risk 
transactions with them.  It's by no means clear how one would 
do this, but there really aren't any other realistic alternatives.


Users of this old equipment will need to make a security/cost 
tradeoff based on their requirements. The ham radio operator who 
is still running Windows 98 doesn't really concern me. (While 
his internet connected system might be a bot, the bot 
controllers will protect his computer from others, so his radio 
logs and radio firmware update files are probably safe.) I've 
already commented on the risks of sending Mailman passwords in 
the clear. Low-value/low-risk targets don't need titanium security.


The power plant which can be destroyed by a cyber attack, cf. 
STUXNET, does concern me. Gas distribution systems do concern 
me. Banking transactions do concern me, particularly business 
accounts. (The recommendations for online business accounts 
include using a dedicated computer -- good advice.)


Perhaps the shortest limit on the lifetime of an embedded system 
is the security protocol, and not the hardware. If so, how do we 
as a society deal with this limit?


Cheers -- Bill

---
Bill Frantz        | gets() remains as a monument | Periwinkle
(408)356-8506      | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns.             | Los Gatos, CA 95032




Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-09 Thread Arnold Reinhold

On Oct 7, 2013, at 12:55 PM, Jerry Leichter wrote:

 On Oct 7, 2013, at 11:45 AM, Arnold Reinhold a...@me.com wrote:
 If we are going to always use a construction like AES(KDF(key)), as Nico 
 suggests, why not go further and use a KDF with variable length output like 
 Keccak to replace the AES key schedule? And instead of making provisions to 
 drop in a different cipher should a weakness be discovered in AES,  make the 
 number of AES (and maybe KDF) rounds a negotiated parameter.  Given that x86 
 and ARM now have AES round instructions, other cipher algorithms are 
 unlikely to catch up in performance in the foreseeable future, even with an 
 higher AES round count. Increasing round count is effortless compared to 
 deploying a new cipher algorithm, even if provision is made the protocol. 
 Dropping such provisions (at least in new designs) simplifies everything and 
 simplicity is good for security.
 That's a really nice idea.  It has a non-obvious advantage:  Suppose the AES 
 round instructions (or the round key computations instructions) have been 
 spiked to leak information in some non-obvious way - e.g., they cause a 
 power glitch that someone with the knowledge of what to look for can use to 
 read of some of the key bits.  The round key computation instructions 
 obviously have direct access to the actual key, while the round computation 
 instructions have access to the round keys, and with the standard round 
 function, given the round keys it's possible to determine the actual key.
 
 If, on the other hand, you use a cryptographically secure transformation from 
 key to round key, and avoid the built-in round key instructions entirely; and 
 you use CTR mode, so that the round computation instructions never see the 
 actual data; then AES round computation functions have nothing useful to leak 
 (unless they are leaking all their output, which would require a huge data 
 rate and would be easily noticed).  This also means that even if the round 
 instructions are implemented in software which allows for side-channel 
 attacks (i.e., it uses an optimized table instruction against which cache 
 attacks work), there's no useful data to *be* leaked.

At least in the Intel AES instruction set, the encode and decode instructions 
have access to each round key except the first. So they could leak that data, 
and it's at least conceivable that one can recover the first round key from 
later ones (perhaps this has been analyzed?).  Knowing all the round keys of 
course enables one to decode the data.  Still, this greatly increases the 
volume of data that must be leaked, and if any instructions are currently 
spiked, it is most likely the round key generation assist instruction. One 
could include an IV in the initial hash, so no information could be gained 
about the key itself.  This would work with AES(KDF(key+IV)) as well, however. 
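A minimal sketch of the construction under discussion, using SHAKE-256 (the Keccak-based extendable-output function available in Python's hashlib) to squeeze out all the AES-256 round keys at once instead of running the AES key schedule; the function name and parameters are assumptions for illustration only:

```python
import hashlib

def shake_round_keys(key: bytes, iv: bytes = b"", rounds: int = 14, width: int = 16):
    """Derive AES round keys from SHAKE-256's variable-length output.

    AES-256 uses 15 round keys of 16 bytes each (initial whitening plus
    14 rounds); here all 240 bytes come from one squeeze of the sponge,
    so the encrypt/decrypt instructions never see key-schedule structure.
    """
    material = hashlib.shake_256(key + iv).digest((rounds + 1) * width)
    return [material[i * width:(i + 1) * width] for i in range(rounds + 1)]
```

An IV mixed into the sponge input, as suggested above, changes every round key without revealing anything about the key itself.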

 
 So this is a mode for safely using possibly rigged hardware.  (Of course 
 there are many other ways the hardware could be rigged to work against you.  
 But with their intended use, hardware encryption instructions have a huge 
 target painted on them.)
 
 Of course, Keccak itself, in this mode, would have access to the real key.  
 However, it would at least for now be implemented in software, and it's 
 designed to be implementable without exposing side-channel attacks.
 
 There are two questions that need to be looked at:
 
 1.  Is AES used with (essentially) random round keys secure?  At what level 
 of security?  One would think so, but this needs to be looked at carefully.

The fact that the round keys are simply xor'd with the AES state at the start 
of each round suggests this is likely secure. One would have to examine the KDF 
to make sure there is nothing comparable to the related-key attacks on the AES 
key setup. 

 2.  Is the performance acceptable?

The comparison would be to AES(KDF(key)). And in how many applications is key 
agility critical?

 
 BTW, some of the other SHA-3 proposals use the AES round transformation as a 
 primitive, so could also potentially be used in generating a secure round key 
 schedule.  That might (or might not) put security-critical information back 
 into the hardware instructions.
 
 If Keccak becomes the standard, we can expect to see a hardware Keccak-f 
 implementation (the inner transformation that is the basis of each Keeccak 
 round) at some point.  Could that be used in a way that doesn't give it the 
 ability to leak critical information?
-- Jerry
 

Given multi-billion transistor CPU chips with no means to audit them, it's hard 
to see how they can be fully trusted.

Arnold Reinhold


Re: [Cryptography] Iran and murder

2013-10-09 Thread James A. Donald

On 2013-10-08 02:03, John Kelsey wrote:

Alongside Phillip's comments, I'll just point out that assassination of key 
people is a tactic that the US and Israel probably don't have any particular 
advantages in.  It isn't in our interests to encourage a worldwide tacit 
acceptance of that stuff.


Israel is famous for its competence in that area.


And if the US is famously incompetent, that is probably lack of will,
rather than lack of ability.  Drones give the US technological supremacy in
the selective removal of key people.




Re: [Cryptography] P=NP on TV

2013-10-09 Thread Ray Dillinger
On 10/07/2013 05:28 PM, David Johnston wrote:

 We are led to believe that if it is shown that P = NP, we suddenly have a 
 break for all sorts of algorithms.
 So if P really does = NP, we can just assume P = NP and the breaks will make 
 themselves evident. They do not. Hence P != NP.

As I see it, it's still possible.  Proving that a solution exists does
not necessarily show you what the solution is or how to find it.  And
just because a solution is subexponential is no reason a priori to
suspect that it's cheaper than some known exponential solution for
any useful range of values.

So, to me, this is an example of TV getting it wrong.  If someone
ever proves P=NP, I expect that there will be thunderous excitement
in the math community, leaping hopes in the hearts of investors and
technologists, and then very careful explanations by the few people
who really understand the proof that it doesn't mean we can actually
do anything we couldn't do before.

Bear


Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-09 Thread Jerry Leichter
On Oct 8, 2013, at 6:10 PM, Arnold Reinhold wrote:

 
 On Oct 7, 2013, at 12:55 PM, Jerry Leichter wrote:
 
 On Oct 7, 2013, at 11:45 AM, Arnold Reinhold a...@me.com wrote:
 If we are going to always use a construction like AES(KDF(key)), as Nico 
 suggests, why not go further and use a KDF with variable length output like 
 Keccak to replace the AES key schedule? And instead of making provisions to 
 drop in a different cipher should a weakness be discovered in AES,  make 
 the number of AES (and maybe KDF) rounds a negotiated parameter.  Given 
 that x86 and ARM now have AES round instructions, other cipher algorithms 
 are unlikely to catch up in performance in the foreseeable future, even 
  with a higher AES round count. Increasing round count is effortless 
  compared to deploying a new cipher algorithm, even if provision is made in 
  the protocol. Dropping such provisions (at least in new designs) simplifies 
 everything and simplicity is good for security.
 That's a really nice idea.  It has a non-obvious advantage:  Suppose the AES 
 round instructions (or the round key computations instructions) have been 
 spiked to leak information in some non-obvious way - e.g., they cause a 
 power glitch that someone with the knowledge of what to look for can use to 
 read of some of the key bits.  The round key computation instructions 
 obviously have direct access to the actual key, while the round computation 
 instructions have access to the round keys, and with the standard round 
 function, given the round keys it's possible to determine the actual key.
 
 If, on the other hand, you use a cryptographically secure transformation 
 from key to round key, and avoid the built-in round key instructions 
 entirely; and you use CTR mode, so that the round computation instructions 
 never see the actual data; then AES round computation functions have nothing 
 useful to leak (unless they are leaking all their output, which would 
 require a huge data rate and would be easily noticed).  This also means that 
 even if the round instructions are implemented in software which allows for 
 side-channel attacks (i.e., it uses an optimized table instruction against 
 which cache attacks work), there's no useful data to *be* leaked.
 
 At least in the Intel AES instruction set, the encode and decode instructions 
 have access to each round key except the first. So they could leak that data, 
 and it's at least conceivable that one can recover the first round key from 
 later ones (perhaps this has been analyzed?).  Knowing all the round keys of 
 course enables one to decode the data.  Still, this greatly increases the 
 volume of data that must be leaked, and if any instructions are currently 
 spiked, it is most likely the round key generation assist instruction. One 
 could include an IV in the initial hash, so no information could be gained 
 about the key itself.  This would work with AES(KDF(key+IV)) as well, 
 however. 
 
 
 So this is a mode for safely using possibly rigged hardware.  (Of course 
 there are many other ways the hardware could be rigged to work against you.  
 But with their intended use, hardware encryption instructions have a huge 
 target painted on them.)
 
 Of course, Keccak itself, in this mode, would have access to the real key.  
 However, it would at least for now be implemented in software, and it's 
 designed to be implementable without exposing side-channel attacks.
 
 There are two questions that need to be looked at:
 
 1.  Is AES used with (essentially) random round keys secure?  At what level 
 of security?  One would think so, but this needs to be looked at carefully.
 
 The fact that the round keys are simply xor'd with the AES state at the start 
 of each round suggests this is likely secure. One would have to examine the KDF 
 to make sure there is nothing comparable to the related-key attacks on 
 the AES key setup. 
 
 2.  Is the performance acceptable?
 
 The comparison would be to AES(KDF(key)). And in how many applications is key 
 agility critical?
 
 
 BTW, some of the other SHA-3 proposals use the AES round transformation as a 
 primitive, so could also potentially be used in generating a secure round 
 key schedule.  That might (or might not) put security-critical information 
 back into the hardware instructions.
 
 If Keccak becomes the standard, we can expect to see a hardware Keccak-f 
 implementation (the inner transformation that is the basis of each Keeccak 
 round) at some point.  Could that be used in a way that doesn't give it the 
 ability to leak critical information?
   -- Jerry
 
 
 Given multi-billion transistor CPU chips with no means to audit them, it's 
 hard to see how they can be fully trusted.
 
 Arnold Reinhold



Re: [Cryptography] Iran and murder

2013-10-09 Thread Tim Newsham
 We are more vulnerable to widespread acceptance of these bad principles than
 almost anyone, ultimately.  But doing all these things has won larger budgets
 and temporary successes for specific people and agencies today, whereas
 the costs of all this will land on us all in the future.

The same could be (and has been) said about offensive cyber warfare.

 --John

-- 
Tim Newsham | www.thenewsh.com/~newsham | @newshtwit | thenewsh.blogspot.com


Re: [Cryptography] Elliptic curve question

2013-10-09 Thread Phillip Hallam-Baker
On Tue, Oct 8, 2013 at 4:14 PM, James A. Donald jam...@echeque.com wrote:

  On 2013-10-08 03:14, Phillip Hallam-Baker wrote:


 Are you planning to publish your signing key or your decryption key?

  Use of a key for one makes the other incompatible.


 Incorrect.  One's public key is always an elliptic point, one's private
 key is always a number.

 Thus there is no reason in principle why one cannot use the same key (a
 number) for signing the messages you send, and decrypting the messages you
 receive.


 The original author was proposing to use the same key for encryption and
signature, which is a rather bad idea.



-- 
Website: http://hallambaker.com/

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-09 Thread Watson Ladd
On Tue, Oct 8, 2013 at 1:46 PM, Bill Frantz fra...@pwpconsult.com wrote:

 On 10/8/13 at 7:38 AM, leich...@lrw.com (Jerry Leichter) wrote:

 On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote:


 We seriously need to consider what the design lifespan of our crypto suites 
 is in real life. That data should be communicated to hardware and software 
 designers so they know what kind of update schedule needs to be supported. 
 Users of the resulting systems need to know that the crypto standards have 
 a limited life so they can include update in their installation planning.


 This would make a great April Fool's RFC, to go along with the classic evil 
 bit.  :-(


 I think the situation is much more serious than this comment makes it appear. 
 As professionals, we have an obligation to share our knowledge of the limits 
 of our technology with the people who are depending on it. We know that all 
 crypto standards which are 15 years old or older are obsolete, not 
 recommended for current use, or outright dangerous. We don't know of any way 
 to avoid this problem in the future.

Fifteen years ago was 1998. Diffie-Hellman is much, much older and still
works. Kerberos is of similar vintage. Feige-Fiat-Shamir is from 1988,
Schnorr signatures from 1989.

 I think the burden of proof is on the people who suggest that we only have to 
 do it right the next time and things will be perfect. These proofs should 
 address:

 New applications of old attacks.
 The fact that new attacks continue to be discovered.
 The existence of powerful actors subverting standards.
 The lack of a "did it right" example to point to.
As one of the "do it right the first time" people, I'm going to argue
that the experience with TLS shows that extensibility doesn't work.

TLS was designed to support multiple ciphersuites. Unfortunately this
opened the door to downgrade attacks, and transitioning to protocol
versions that wouldn't do this was nontrivial. The ciphersuites
included all shared certain misfeatures, leading to the current
situation.

TLS is difficult to model: the use of key confirmation makes standard
security notions not applicable. The fact that every cipher suite is
indicated separately, rather than using generic composition makes
configuration painful.

In addition bugs in widely deployed TLS accelerators mean that the
claimed upgradability doesn't actually exist. Implementations can work
without supporting very necessary features. Had the designers of TLS
used a three-pass Diffie-Hellman protocol with encrypt-then-mac,
rather than the morass they came up with, we wouldn't be in this
situation today. TLS was not exploring new ground: it was well-hoed
turf intellectually, and they still screwed it up.

Any standard is only an approximation to what is actually implemented.
Features that aren't used are likely to be skipped or implemented
incorrectly.

Protocols involving crypto need to be so damn simple that if it
connects correctly, the chance of a bug is vanishingly small. If we
make a simple protocol, with automated analysis of its security, the
only danger is a primitive failing, in which case we are in trouble
anyway.
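One standard countermeasure to the downgrade attacks mentioned above is to bind the negotiated result to the whole handshake transcript, TLS-Finished-style. A minimal sketch (the message framing and function name are illustrative assumptions, not any particular protocol's wire format):

```python
import hashlib
import hmac

def finished_mac(session_key: bytes, transcript: list) -> bytes:
    # Hash every handshake message, length-prefixed and in order, then MAC
    # the digest under the session key.  A man-in-the-middle who stripped
    # strong ciphersuites from the client's offer changes the transcript,
    # and cannot produce a matching MAC without the session key.
    h = hashlib.sha256()
    for msg in transcript:
        h.update(len(msg).to_bytes(4, "big"))
        h.update(msg)
    return hmac.new(session_key, h.digest(), hashlib.sha256).digest()
```

Both sides compute this over what they believe was said; a downgrade shows up as a MAC mismatch before any application data flows.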


 There are embedded systems that are impractical to update and have expected 
 lifetimes measured in decades...

 Many perfectly good PC's will stay on XP forever because even if there was 
 the will and staff to upgrade, recent versions of Windows won't run on their 
 hardware.
 ...

 I'm afraid the reality is that we have to design for a world in which some 
 devices will be running very old versions of code, speaking only very old 
 versions of protocols, pretty much forever.  In such a world, newer devices 
 either need to shield their older brethren from the sad realities or 
 relegate them to low-risk activities by refusing to engage in high-risk 
 transactions with them.  It's by no means clear how one would do this, but 
 there really aren't any other realistic alternatives.



-- 
Those who would give up Essential Liberty to purchase a little
Temporary Safety deserve neither  Liberty nor Safety.
-- Benjamin Franklin


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-09 Thread John Kelsey
On Oct 8, 2013, at 4:46 PM, Bill Frantz fra...@pwpconsult.com wrote:

 I think the situation is much more serious than this comment makes it appear. 
 As professionals, we have an obligation to share our knowledge of the limits 
 of our technology with the people who are depending on it. We know that all 
 crypto standards which are 15 years old or older are obsolete, not 
 recommended for current use, or outright dangerous. We don't know of any way 
 to avoid this problem in the future.

We know how to address one part of this problem--choose only algorithms whose 
design strength is large enough that there's not some relatively close by time 
when the algorithms will need to be swapped out.  That's not all that big a 
problem now--if you use, say, AES256 and SHA512 and ECC over P521, then even in 
the far future, your users need only fear cryptanalysis, not Moore's Law.  
Really, even with 128-bit security level primitives, it will be a very long 
time until the brute-force attacks are a concern.  

This is actually one thing we're kind-of on the road to doing right in 
standards now--we're moving away from barely-strong-enough crypto and toward 
crypto that's going to be strong for a long time to come. 

Protocol attacks are harder, because while we can choose a key length, modulus 
size, or sponge capacity to support a known security level, it's not so easy to 
make sure that a protocol doesn't have some kind of attack in it.  

I think we've learned a lot about what can go wrong with protocols, and we can 
design them to be more ironclad than in the past, but we still can't guarantee 
we won't need to upgrade.  But I think this is an area that would be 
interesting to explore--what would need to happen in order to get more ironclad 
protocols?  A couple random thoughts:

a.  Layering secure protocols on top of one another might provide some 
redundancy, so that a flaw in one didn't undermine the security of the whole 
system.  

b.  There are some principles we can apply that will make protocols harder to 
attack, like encrypt-then-MAC (to eliminate reaction attacks), nothing is 
allowed to change its execution path or timing based on the key or 
plaintext, every message includes a sequence number and the hash of the 
previous message, etc.  This won't eliminate protocol attacks, but will make 
them less common.

c.  We could try to treat at least some kinds of protocols more like crypto 
algorithms, and expect to have them widely vetted before use.  

What else?  
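Point (b) can be sketched in a few lines. The SHAKE-based stream cipher below stands in for a real cipher purely so the example is self-contained (an assumption, not a recommendation); the encrypt-then-MAC ordering, the sequence number, and the previous-message hash are the pattern described above:

```python
import hashlib
import hmac
import os

def seal(enc_key, mac_key, seq, prev_hash, plaintext):
    # Toy stream cipher: keystream squeezed from SHAKE-256 (illustration only).
    nonce = os.urandom(16)
    stream = hashlib.shake_256(enc_key + nonce).digest(len(plaintext))
    ct = bytes(a ^ b for a, b in zip(plaintext, stream))
    # Encrypt-then-MAC: the tag covers sequence number, previous-message
    # hash, nonce, and ciphertext.
    header = seq.to_bytes(8, "big") + prev_hash
    tag = hmac.new(mac_key, header + nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def open_(enc_key, mac_key, seq, prev_hash, nonce, ct, tag):
    header = seq.to_bytes(8, "big") + prev_hash
    expect = hmac.new(mac_key, header + nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expect, tag):
        # Reject before touching the ciphertext: no reaction-attack oracle.
        raise ValueError("MAC failure")
    stream = hashlib.shake_256(enc_key + nonce).digest(len(ct))
    return bytes(a ^ b for a, b in zip(ct, stream))
```

Replaying a record under the wrong sequence number, or after a different prior message, fails the MAC check before any decryption happens.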

 ...
 Perhaps the shortest limit on the lifetime of an embedded system is the 
 security protocol, and not the hardware. If so, how do we as a society deal 
 with this limit?

What we really need is some way to enforce protocol upgrades over time.  
Ideally, there would be some notion that if you support version X of the 
protocol, this meant that you would not support any version lower than, say, 
X-2.  But I'm not sure how practical that is.  
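The "supporting version X means refusing anything below X-2" rule sketched above could look like this (the function name, the window parameter, and the version numbers are hypothetical):

```python
def negotiate(supported, offered, window=2):
    # Supporting version X commits you to refusing anything below X - window,
    # so old protocol versions age out mechanically as new ones ship.
    floor = max(supported) - window
    acceptable = [v for v in offered if v in supported and v >= floor]
    if not acceptable:
        raise ValueError("no acceptable protocol version: peer too old")
    return max(acceptable)
```

The practical difficulty John raises remains: nothing here forces deployed peers to raise their `supported` set on schedule.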

 Cheers -- Bill

--John


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-08 Thread Bill Frantz

On 10/6/13 at 8:26 AM, crypto@gmail.com (John Kelsey) wrote:

If we can't select ciphersuites that we are sure we will always 
be comfortable with (for at least some forseeable lifetime) 
then we urgently need the ability to *stop* using them at some 
point.  The examples of MD5 and RC4 make that pretty clear.
Ceasing to use one particular encryption algorithm in something 
like SSL/TLS should be the easiest case--we don't have to worry 
about old signatures/certificates using the outdated algorithm 
or anything.  And yet we can't reliably do even that.


We seriously need to consider what the design lifespan of our 
crypto suites is in real life. That data should be communicated 
to hardware and software designers so they know what kind of 
update schedule needs to be supported. Users of the resulting 
systems need to know that the crypto standards have a limited 
life so they can include update in their installation planning.


Cheers - Bill

---
Bill Frantz        | If the site is supported by | Periwinkle
(408)356-8506      | ads, you are the product.   | 16345 Englewood Ave
www.pwpconsult.com |                             | Los Gatos, CA 95032




Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-08 Thread Bill Stewart



On Oct 4, 2013, at 12:20 PM, Ray Dillinger wrote:
 So, it seems that instead of AES256(key) the cipher in practice should be
 AES256(SHA256(key)).
 Is it not the case that (assuming SHA256 is not broken) this defines a cipher
 effectively immune to the related-key attack?


So you're essentially saying that AES would be stronger if it had a 
different key schedule?



At 08:59 AM 10/5/2013, Jerry Leichter wrote:

- If this is the primitive black box that does a single block
  encryption, you've about doubled the cost and you've got this
  messy combined thing you probably won't want to call a primitive.

You've doubled the cost of key scheduling, but usually that's more like
one-time than per-packet.  If the hash is complex, you might have
also doubled the cost of silicon for embedded apps, which is more of a problem.


- If you say well, I'll take the overall key and replace it by
  its hash, you're defining a (probably good) protocol.  But
  once you're defining a protocol, you might as well just specify
  random keys and forget about the hash.


I'd expect that the point of related-key attacks is to find weaknesses
in key scheduling that are exposed by deliberately NOT using random keys
when the protocol's authors wanted you to use them.
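One way to see why hashing the key defeats deliberately related keys: two keys differing in a single bit hash to values that differ in roughly half their bits, so the key relationship the attack depends on never reaches the key schedule. A quick sketch:

```python
import hashlib

def hamming(a: bytes, b: bytes) -> int:
    # Count differing bits between two equal-length byte strings.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

key1 = bytes(32)                  # all-zero 256-bit key
key2 = bytes([1]) + bytes(31)     # "related" key: flips a single bit
h1 = hashlib.sha256(key1).digest()
h2 = hashlib.sha256(key2).digest()
# key1/key2 differ in 1 bit; h1/h2 differ in roughly 128 of 256 bits,
# so the structure a related-key attack needs is gone before AES sees it.
```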



Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-08 Thread Grégory Alvarez

On 7 Oct 2013, at 17:45, Arnold Reinhold a...@me.com wrote:

 other cipher algorithms are unlikely to catch up in performance in the 
 foreseeable future

You should take a look a this algorithm : http://eprint.iacr.org/2013/551.pdf

- The block size is variable and unknown to an attacker.
- The size of the key has no limit and is unknown to an attacker.
- The key size does not affect the algorithm's speed (using a 256-bit key is the 
same as using a 1024-bit key).
- The algorithm is much faster than the average cryptographic function. 
Experimental tests showed 600 MB/s (4 cycles/byte) on an Intel Core 2 Duo P8600 
2.40GHz and 1.2 GB/s (2 cycles/byte) on an Intel i5-3210M 2.50GHz. Both CPUs had 
only two cores.



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-08 Thread Jerry Leichter
On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote:
 If we can't select ciphersuites that we are sure we will always be 
 comfortable with (for at least some forseeable lifetime) then we urgently 
 need the ability to *stop* using them at some point.  The examples of MD5 
 and RC4 make that pretty clear.
 Ceasing to use one particular encryption algorithm in something like SSL/TLS 
 should be the easiest case--we don't have to worry about old 
 signatures/certificates using the outdated algorithm or anything.  And yet 
 we can't reliably do even that.
 
 We seriously need to consider what the design lifespan of our crypto suites 
 is in real life. That data should be communicated to hardware and software 
 designers so they know what kind of update schedule needs to be supported. 
 Users of the resulting systems need to know that the crypto standards have a 
 limited life so they can include update in their installation planning.
This would make a great April Fool's RFC, to go along with the classic evil 
bit.  :-(

There are embedded systems that are impractical to update and have expected 
lifetimes measured in decades.  RFID chips include cryptography, are completely 
un-updatable, and have no real limit on their lifetimes - the percentage of the 
population represented by any given vintage of chips will drop continuously, 
but it will never go to zero.  We are rapidly entering a world in which devices 
with similar characteristics will, in sheer numbers, dominate the ecosystem - 
see the remote-controllable Phillips Hue light bulbs 
(http://www.amazon.com/dp/B00BSN8DLG/?tag=googhydr-20hvadid=27479755997hvpos=1t1hvexid=hvnetw=ghvrand=1430995233802883962hvpone=hvptwo=hvqmt=bhvdev=cref=pd_sl_5exklwv4ax_b)
 as an early example.  (Oh, and there's been an attack against them:  
http://www.engadget.com/2013/08/14/philips-hue-smart-light-security-issues/.  
The response from Phillips to that article says In developing Hue we have used 
industry standard encryption and authentication techniques  [O]ur main advice 
to customers is that they take steps to ensure they are secured from malicious 
attacks at a network level.

Even in the PC world, where updates are a part of life, makers eventually stop 
producing them for older products.  Windows XP, as of about 10 months ago, was 
running on 1/4 of all PC's - many 100's of millions of PC's.  About 9 months 
from now, Microsoft will ship its final security update for XP.  Many perfectly 
good PC's will stay on XP forever because even if there was the will and staff 
to upgrade, recent versions of Windows won't run on their hardware.

In the Mac world, hardware in general tends to live longer, and there's plenty 
of hardware still running that can't run recent OS's.  Apple pretty much only 
does patches for at most 3 versions of the OS (with a new version roughly every 
year).  The Linux world isn't really much different except that it's less 
likely to drop support for old hardware, and because it tends to be used by a 
more techie audience who are more likely to upgrade, the percentages probably 
look better, at least for PC's.  (But there are antique versions of Linux 
hidden away in all kinds of appliances that no one ever upgrades.)

I'm afraid the reality is that we have to design for a world in which some 
devices will be running very old versions of code, speaking only very old 
versions of protocols, pretty much forever.  In such a world, newer devices 
either need to shield their older brethren from the sad realities or relegate 
them to low-risk activities by refusing to engage in high-risk transactions 
with them.  It's by no means clear how one would do this, but there really 
aren't any other realistic alternatives.
-- Jerry



Re: [Cryptography] Elliptic curve question

2013-10-08 Thread Hanno Böck
On Mon, 7 Oct 2013 10:54:50 +0200
Lay András and...@lay.hu wrote:

 I made a simple elliptic curve utility in command line PHP:
 
 https://github.com/LaySoft/ecc_phgp
 
 I know that in RSA, signing is the inverse operation of encryption, so two
 different keypairs are needed for encryption and signing. In elliptic curve
 cryptography, signing is not the inverse operation of encryption, so my
 application uses the same keypair for both.
 
 Is this correct?

The very general answer: if it's not a big problem, it's always better
to separate encryption and signing keys - because you never know whether
there are as-yet-unknown interactions if you use the same key material in
different use cases.

You can state this even more generally: it's always better to use one key
for one use case. It doesn't hurt, and it may prevent security issues.
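Hanno's rule of thumb is easy to enforce mechanically by deriving per-purpose keys from one master secret with distinct labels. A minimal Python sketch using the HKDF construction from RFC 5869 (the master secret and label strings here are hypothetical):

```python
import hashlib
import hmac

def hkdf(ikm: bytes, info: bytes, length: int = 32, salt: bytes = b"") -> bytes:
    """Minimal HKDF-SHA256 (RFC 5869): extract, then expand."""
    prk = hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()  # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                            # expand
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

master = b"hypothetical master secret"
signing_key = hkdf(master, b"signing")        # one key per use case
encryption_key = hkdf(master, b"encryption")  # a different, unrelated key
```

Because the labels differ, the two derived keys are computationally unrelated even though they share a master secret, so an unforeseen interaction between the signing and encryption schemes cannot link them.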

-- 
Hanno Böck
http://hboeck.de/

mail/jabber: ha...@hboeck.de
GPG: BBB51E42



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Nico Williams
On Sat, Oct 05, 2013 at 09:29:05PM -0400, John Kelsey wrote:
 One thing that seems clear to me:  When you talk about algorithm
 flexibility in a protocol or product, most people think you are
 talking about the ability to add algorithms.  Really, you are talking
 more about the ability to *remove* algorithms.  We still have stuff
 using MD5 and RC4 (and we'll probably have stuff using dual ec drbg
 years from now) because while our standards have lots of options and
 it's usually easy to add new ones, it's very hard to take any away.  

Algorithm agility makes it possible to add and remove algorithms.  Both
addition and removal are made difficult by the fact that it is
difficult to update deployed code.  Removal is made much more difficult
still by the need to remain interoperable with legacy that has been
deployed and won't be updated fast enough.  I don't know what can be
done about this.  Auto-update is one part of the answer, but it can't
work for everything.

I like the idea of having a CRL-like (or OCSP-like?) system for
revoking algorithms.  This might -in some cases- do nothing more
than warn the user, or -in other cases- trigger auto-update checks.
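One way to picture such a CRL-like mechanism: clients fetch an authenticated list of revoked algorithm identifiers and consult it before negotiating. A hedged Python sketch (the distribution key, list format, and algorithm names are all hypothetical; a real deployment would use public-key signatures rather than a shared MAC key):

```python
import hashlib
import hmac

# Hypothetical shared verification key and published revocation list.
TRUSTED_KEY = b"hypothetical list-distribution key"
REVOKED_BLOB = b"md5,rc4,dual_ec_drbg"
LIST_TAG = hmac.new(TRUSTED_KEY, REVOKED_BLOB, hashlib.sha256).digest()

def algorithm_allowed(name: str, blob: bytes, tag: bytes) -> bool:
    """Verify the list's authenticity, then check the algorithm against it."""
    expected = hmac.new(TRUSTED_KEY, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("revocation list failed verification")
    return name not in blob.decode().split(",")
```

A client could warn the user, or trigger an update check, whenever `algorithm_allowed` returns False for something the peer proposes.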

But, really, legacy is a huge problem that we barely know how to
ameliorate a little.  It still seems likely that legacy code will
continue to remain deployed for much longer than the advertised
service lifetime of the same code (see XP, for example), and for at
least a few more product lifecycles (i.e., another 10-15 years
before we come up with a good solution).

Nico


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Phillip Hallam-Baker
On Sat, Oct 5, 2013 at 7:36 PM, James A. Donald jam...@echeque.com wrote:

 On 2013-10-04 23:57, Phillip Hallam-Baker wrote:

 Oh and it seems that someone has murdered the head of the IRG cyber
 effort. I condemn it without qualification.


 I endorse it without qualification.  The IRG are bad guys and need killing
 - all of them, every single one.

 War is an honorable profession, and is in our nature.  The lion does no
 wrong to kill the deer, and the warrior does no wrong to fight in a just
 war, for we are still killer apes.

 The problem with the NSA and NIST is not that they are doing warlike
 things, but that they are doing warlike things against their own people.


If people who purport to be on our side go round murdering their people,
then they are going to go round murdering people on ours. We already have
Putin's group of thugs murdering folk with Polonium-laced teapots, just so
that there can be no doubt as to the identity of the perpetrators.

We are not at war with Iran. I am aware that there are people who would
like to start a war with Iran, the same ones who wanted to start the war
with Iraq which caused a half million deaths but no war crimes trials to
date.

Iran used to have a democracy - remember what happened to it? People
like the Dulles brothers, who preferred a convenient dictator to a
democratic government, overthrew it with the help of a rent-a-mob
supplied by one Ayatollah Khomeini.


I believe it was Ultra-class signals intelligence that made that
operation possible, along with the string of CIA-inspired coups that
installed dictators or pre-empted the emergence of democratic regimes in
many other countries until the mid-1970s - which, not coincidentally, is
when mechanical cipher machines were being replaced by electronic ones.

I have had a rather closer view of your establishment than most. You have
retired four-star generals suggesting that, in the case of a cyber-attack
against critical infrastructure, the government should declare martial law
within hours. It is not hard to see where that would lead; there are plenty
of US military types who would dishonor their uniforms with a coup at home -
I have met them.


My view is that we would all be rather safer if the NSA went completely
dark for a while, at least until there has been some accountability for the
crimes of the '00s and a full account of which coups the CIA backed, who
authorized them and why.

I have lived with terrorism all my life. My family was targeted by
terrorists that Rep King and Rudy Giuliani profess to wholeheartedly
support to this day. I am not concerned about the terrorists because they
obviously can't win. It is like the current idiocy in Congress, the
Democrats are bound to win because at the end of the day the effects of the
recession that the Republicans threaten to cause will be temporary while
universal health care will be permanent. The threatened harm is not great
enough to cause a change in policy. The only cases where terrorist tactics
have worked are those where a small minority was trying to suppress the
majority, as in Rhodesia or French-occupied Spain during the Napoleonic
wars.

But when I see politicians passing laws to stop people voting, judges
deciding that the votes in a Presidential election cannot be counted and
all the other right wing antics taking place in the US at the moment, the
risk of a right wing fascist coup has to be taken seriously.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Sha3

2013-10-07 Thread Ray Dillinger
On 10/04/2013 07:38 AM, Jerry Leichter wrote:
 On Oct 1, 2013, at 5:34 AM, Ray Dillinger b...@sonic.net wrote:
 What I don't understand here is why the process of selecting a standard 
 algorithm for cryptographic primitives is so highly focused on speed. 

 If you're going to choose a single standard cryptographic algorithm, you have 
 to consider all the places it will be used.  ...

 It is worth noting that NSA seems to produce suites of algorithms optimized 
 for particular uses and targeted for different levels of security.  Maybe 
 it's time for a similar approach in public standards.

I believe you are right about this.  The problem with AES (etc.) really is
that people were trying to find *ONE* cryptographic primitive for use across
a very wide range of clients, many of which it is inappropriate for (too
light for first-class or long-term protection of data, too heavy for
transient realtime signals on embedded low-power chips).

I probably care less than most people about the low-power devices dealing with
transient realtime signals, and more about long-term data protection than most
people.  So, yeah, I'm annoyed that the standard algorithm is insufficient to
just *STOMP* the problem and instead requires occasional replacement, when 
*STOMP*
is well within my CPU capabilities, power budget, and timing requirements.  But
somebody else is probably annoyed that people want them to support AES when they
were barely able to do WEP on their tiny power budget fast enough to be 
non-laggy.

These are problems that were never going to have a common solution.

Bear


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread James A. Donald

On 2013-10-07 01:18, Phillip Hallam-Baker wrote:

We are not at war with Iran.


We are not exactly at peace with Iran either, but that is irrelevant, 
for presumably it was a Jew that did it, and Iran is at war with Jews.

(And they are none too keen on Christians, Bahais, or Zoroastrians either)


I am aware that there are people who would
like to start a war with Iran, the same ones who wanted to start the war
with Iraq which caused a half million deaths but no war crimes trials to
date.


You may not be interested in war, but war is interested in you.   You 
can reasonably argue that we should not get involved in Israel's 
problems, but you should not complain about Israel getting involved in 
Israel's problems.



Iran used to have a democracy


Had a democracy where if you opposed Mohammad Mosaddegh you got murdered 
by Islamists.


Which, of course differs only in degree from our democracy, where (to 
get back to some slight relevance to cryptography) Ladar Levison gets 
put out of business for defending the fourth Amendment, and Pax gets put 
on a government blacklist that requires him to be fired and prohibits 
his business from being funded for tweeting disapproval of affirmative 
action for women in tech.


And similarly, if Hitler's Germany was supposedly not a democracy, why 
then was Roosevelt's America supposedly a democracy?


I oppose democracy because it typically results from, and leads to, 
government efforts to control the thoughts of the people.  There is not 
a large difference between our government requiring Pax to be fired, and 
Mohammad Mosaddegh murdering Haj-Ali Razmara.  Democracy also frequently 
results in large scale population replacement and ethnic cleansing, as 
for example Detroit and the Ivory Coast, as more expensive voters get 
laid off and cheaper voters get imported.


Mohammad Mosaddegh loved democracy because he was successful and 
effective in murdering his opponents, while the Shah was unwilling or 
unable to murder his own.


And our government loves democracy because it can blacklist Pax and 
destroy Levison.


If you want murder and blacklists, population replacement and ethnic 
cleansing, support democracy.  If you don't want murder and blacklists, 
should have supported the Shah.



Re: [Cryptography] Sha3

2013-10-07 Thread Jerry Leichter
On Oct 5, 2013, at 6:12 PM, Ben Laurie wrote:
 I have to take issue with this:
 
 The security is not reduced by adding these suffixes, as this is only
 restricting the input space compared to the original Keccak. If there
 is no security problem on Keccak(M), there is no security problem on
 Keccak(M|suffix), as the latter is included in the former.
I also found the argument here unconvincing.  After all, Keccak restricted to 
the set of strings of the form M|suffix reveals that its input ends with 
suffix, which the original Keccak did not.  The problem is with the vague 
nature of "no security problem".

To really get at this, I suspect you have to make some statement saying that 
your expectation about the last |suffix| bits of the input is the same before 
and after you see the Keccak output, given your prior expectation about those 
bits.  But of course that's clearly the kind of statement you need *in general*:  
Keccak("Hello world") is some fixed value, and if you see it, your expectation 
that the input was "Hello world" will get close to 1 as you receive more output 
bits!

 In other words, I have to also make an argument about the nature of
 the suffix and how it can't have been chosen s.t. it influences the
 output in a useful way.
If the nature of the suffix and how it's chosen could affect Keccak's output in 
some predictable way, it would not be secure.  Keccak's security is defined in 
terms of indistinguishability from a sponge with the same internal construction 
but a random round function (chosen from some appropriate class).  A random 
function won't show any particular interactions with chosen suffixes, so Keccak 
had better not either.
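That expectation is easy to spot-check empirically: appending different short suffixes to the same message should produce digests with no visible relation. A small illustration with Python's standard SHA-3 (the suffix bytes are hypothetical domain-separation labels; this demonstrates the expected behavior, not a proof of it):

```python
import hashlib

msg = b"example message"
# Two hypothetical domain-separation suffixes appended to the same input.
d1 = hashlib.sha3_256(msg + b"\x01").hexdigest()
d2 = hashlib.sha3_256(msg + b"\x02").hexdigest()
# If the round function behaves like a random one, the two digests should
# look unrelated, typically differing in roughly half their bits.
print(d1)
print(d2)
```

Of course no finite test distinguishes "looks random" from "secure"; the point is only that chosen suffixes give an attacker no obvious handle on the output.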

 I suspect I should agree with the conclusion, but I can't agree with
 the reasoning.
Yes, it would be nice to see this argued more fully.

-- Jerry




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Jerry Leichter
On Oct 5, 2013, at 9:29 PM, John Kelsey wrote:
 One thing that seems clear to me:  When you talk about algorithm flexibility 
 in a protocol or product, most people think you are talking about the ability 
 to add algorithms.  Really, you are talking more about the ability to 
 *remove* algorithms.  We still have stuff using MD5 and RC4 (and we'll 
 probably have stuff using dual ec drbg years from now) because while our 
 standards have lots of options and it's usually easy to add new ones, it's 
 very hard to take any away.  
Q.  How did God create the world in only 6 days?
A.  No installed base.
-- Jerry



Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-07 Thread Phillip Hallam-Baker
On Thu, Oct 3, 2013 at 12:21 PM, Jerry Leichter leich...@lrw.com wrote:

 On Oct 3, 2013, at 10:09 AM, Brian Gladman b...@gladman.plus.com wrote:
  Leaving aside the question of whether anyone weakened it, is it
  true that AES-256 provides comparable security to AES-128?
 
  I may be wrong about this, but if you are talking about the theoretical
  strength of AES-256, then I am not aware of any attacks against it that
  come even remotely close to reducing its effective key length to 128
  bits.  So my answer would be 'no'.
 There are *related-key* attacks against full AES-192 and AES-256 with
 complexity < 2^119.  http://eprint.iacr.org/2009/374 reports on improved
 versions of these attacks against *reduced-round* variants of AES-256; for
 a 10-round variant of AES-256 (the same number of rounds as AES-128), the
 attacks have complexity 2^45 (under a strong related-subkey attack).

 None of these attacks gain any advantage when applied to AES-128.

 As *practical attacks today*, these are of no interest - related key
 attacks only apply in rather unrealistic scenarios, even a 2^119 strength
 is way beyond any realistic attack, and no one would use a reduced-round
 version of AES-256.

 As a *theoretical checkpoint on the strength of AES* ... the abstract says
 the results "raise[s] serious concern about the remaining safety margin
 offered by the AES family of cryptosystems".

 The contact author on this paper, BTW, is Adi Shamir.


Shamir said that he would like to see AES detuned for speed and extra
rounds added during the RSA conf cryptographers panel a couple of years
back.

That is the main incentive for using AES-256 over AES-128. Nobody is going to
be breaking AES-128 by brute force, so key size above that is irrelevant, but
you do get the extra rounds.


Saving symmetric key bits does not really bother me as pretty much any
mechanism I use to derive them is going to give me plenty. I am even
starting to think that maybe we should start using the NSA checksum
approach.

Incidentally, that checksum could be explained simply by padding prepping
an EC encrypted session key. PKCS#1 has similar stuff to ensure that there
is no known plaintext in there. Using the encryption algorithm instead of
the OAEP hash function makes much better sense.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Sha3

2013-10-07 Thread John Kelsey
On Oct 6, 2013, at 6:29 PM, Jerry Leichter leich...@lrw.com wrote:

 On Oct 5, 2013, at 6:12 PM, Ben Laurie wrote:
 I have to take issue with this:
 
 The security is not reduced by adding these suffixes, as this is only
 restricting the input space compared to the original Keccak. If there
 is no security problem on Keccak(M), there is no security problem on
 Keccak(M|suffix), as the latter is included in the former.
 I also found the argument here unconvincing.  After all, Keccak restricted to 
 the set of strings of the form M|suffix reveals that its input ends with 
 suffix, which the original Keccak did not.  The problem is with the vague 
 nature of "no security problem".

They are talking about the change to their padding scheme, in which between 2 
and 4 bits of extra padding are added to the padding scheme that was originally 
proposed for SHA3.  A hash function that works by processing r bits at a time 
till the whole message is processed (every hash function I can think of works 
like this) has to have a padding scheme, so that when someone tries to hash 
some message that's not a multiple of r bits long, the message gets padded out 
to a multiple of r bits.  

The only security relevance of the padding scheme is that it has to be 
invertible - given the padded string, there must always be exactly one input 
string that could have led to that padded string.  If it isn't invertible, then 
the padding scheme would introduce collisions.  For example, if your padding 
scheme was "append zeros until you get the message out to a multiple of r 
bits", I could get collisions on your hash function by taking some message that 
was not a multiple of r bits and appending one or more zeros to it.  Just 
appending a single one bit, followed by as many zeros as are needed to get to a 
multiple of r bits, makes a fine padding scheme, so long as the one bit is 
appended to *every* message, even those which start out a multiple of r bits 
long.  
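The collision John describes, and the fix, can be shown in a few lines. A byte-level toy sketch (real padding schemes such as Keccak's work at the bit level; the rate r and the messages here are arbitrary illustrations):

```python
def pad_zeros(msg: bytes, r: int) -> bytes:
    # Broken scheme: pad with zeros to a multiple of r -- not invertible.
    return msg + b"\x00" * (-len(msg) % r)

def pad_one_then_zeros(msg: bytes, r: int) -> bytes:
    # Always append a single 0x01 byte, then zeros -- invertible, since
    # stripping trailing zeros and then the final 0x01 recovers the message.
    out = msg + b"\x01"
    return out + b"\x00" * (-len(out) % r)

r = 8
# Distinct inputs that collide under the broken scheme:
print(pad_zeros(b"abc", r) == pad_zeros(b"abc\x00", r))                    # True
# The one-then-zeros scheme keeps them distinct:
print(pad_one_then_zeros(b"abc", r) == pad_one_then_zeros(b"abc\x00", r))  # False
```

Since a hash applied after a non-invertible padding step inherits the padding's collisions, invertibility really is the one property the padding must preserve.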

The Keccak team proposed adding a few extra bits to their padding, to add 
support for tree hashing and to distinguish different fixed-length hash 
functions that use the same capacity internally.  They really just need to 
argue that they haven't somehow broken the padding so that it is no longer 
invertible.

They're making this argument by pointing out that you could simply stick the 
fixed extra padding bits on the end of a message you processed with the 
original Keccak spec, and you would get the same result as what they are doing. 
 So if there is any problem introduced by sticking those extra bits at the end 
of the message before doing the old padding scheme, an attacker could have 
caused that same problem on the original Keccak by just sticking those extra 
bits on the end of messages before processing them with Keccak.  

--John


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Ray Dillinger
Is it just me, or does the government really have absolutely no one
with any sense of irony?  Nor, increasingly, anyone with a sense of
shame?

I have to ask, because after directly suborning the cyber security
of most of the world including the USA, and destroying the credibility
of just about every agency who could otherwise help maintain it, the
NSA kicked off National Cyber Security Awareness Month on the first
of October this year.

http://blog.sfgate.com/hottopics/2013/10/01/as-government-shuts-down-nsa-excitedly-announces-national-cyber-security-awareness-month/

[Slow Clap]  Ten out of ten for audacity, wouldn't you say?

Bear


Re: [Cryptography] Sha3

2013-10-07 Thread Jerry Leichter
On Oct 6, 2013, at 11:41 PM, John Kelsey wrote:
 ...They're making this argument by pointing out that you could simply stick 
 the fixed extra padding bits on the end of a message you processed with the 
 original Keccak spec, and you would get the same result as what they are 
 doing.  So if there is any problem introduced by sticking those extra bits at 
 the end of the message before doing the old padding scheme, an attacker could 
 have caused that same problem on the original Keccak by just sticking those 
 extra bits on the end of messages before processing them with Keccak.  
This style of argument makes sense for encryption functions, where it's a 
chosen plaintext attack, since the goal is to determine the key.  But it makes 
no sense for a hash function:  If the attacker can specify something about the 
input, he ... knows something about the input!  You need to argue that he 
learns *no more than that* from looking at the output.

While both Ben and I are convinced that in fact the suffix can't affect 
security, the *specific wording* doesn't really give an argument for why.

-- Jerry



Re: [Cryptography] Sha3

2013-10-07 Thread Peter Fairbrother

On 05/10/13 20:00, John Kelsey wrote:

http://keccak.noekeon.org/yes_this_is_keccak.html



Seems the Keccak people take the position that Keccak is actually a way 
of creating hash functions, rather than a specific hash function - the 
created functions may be ridiculously strong, or far too weak.


It also seems NIST think a competition is a way of creating a hash 
function - rather than a way of competitively choosing one.



I didn't follow the competition, but I don't actually see anybody being 
right here. NIST is probably just being incompetent, not malicious, but 
their detractors have a point too.


The problem is that the competition was, or should have been, for a 
single [1] hash function, not for a way of creating hash functions - and 
in my opinion only a single actual hash function based on Keccak should 
have been allowed to enter.


I think that's what actually happened, and an actual function was 
entered. The Keccak people changed it a little between rounds, as is 
allowed, but by the final round the entries should all have been fixed 
in stone.


With that in mind, there is no way the hash which won the competition 
should be changed by NIST.


If NIST do start changing things - whatever the motive  - the benefits 
of openness and fairness of the competition are lost, as is the analysis 
done on the entries.


If NIST do start changing things, then nobody can say SHA-3 was chosen 
by an open and fair competition.


And if that didn't happen, if a specific and well-defined hash was not 
entered, the competition was not open in the first place.




Now in the new SHA-4 competition TBA soon, an actual specific hash 
function based on Keccak may well be the winner - but then what is 
adopted will be what was actually entered.


The work done (for free!) by analysts during the competition will not be 
wasted on a changed specification.




[1] it should have been for a _single_ hash function, not two or 3 
functions with different parameters. I know the two-security-level model 
is popular with NSA and the like, probably for historical export 
reasons, but it really doesn't make any sense for the consumer.


It is possible to make cryptography which we think is resistant to all 
possible/likely attacks. That is what the consumer wants and needs: one 
cryptosystem he can trust in, resistant against both his baby sister and 
the NSA.


We can do that. In most cases that sort of cryptography doesn't take 
even measurable resources.



The sole and minimal benefit of having two functions (from a single 
family) - cheaper computation for low-power devices; there are no other 
real benefits - is lost in the roar of the costs.


There is a case for having two or more systems - monocultures are 
brittle against failures, and like the Irish Potato Famine a single 
failure can be catastrophic - but two systems in the same family do not 
give the best protection against that.


The disadvantages of having two or more hash functions? For a start, 
people don't know what they are getting. They don't know how secure it 
will be - are you going to tell users whether they are using HASH_lite 
rather than HASH_strong every time? And expect them to understand that?


Second, most devices have to have different software for each function - 
and they have to be able to accept data and operations for more than one 
function as well, which opens up potential security holes.


I could go on, but I hope you get the point already.

-- Peter Fairbrother


Re: [Cryptography] Sha3

2013-10-07 Thread Peter Fairbrother

On 05/10/13 00:09, Dan Kaminsky wrote:

Because not being fast enough means you don't ship.  You don't ship, you
didn't secure anything.

Performance will in fact trump security.  This is the empirical reality.
  There's some budget for performance loss. But we have lots and lots of
slow functions. Fast is the game.


That may once have been mostly true, but no longer - now it's mostly false.

In almost every case nowadays the speed at which a device computes a 
SHA-3 hash doesn't matter at all. Devices are either way fast enough, or 
they can't use SHA-3 at all, whether or not it is made 50% faster.




(Now, whether my theory that we stuck with MD5 over SHA1 because
variable field lengths are harder to parse in C -- that's an open
question to say the least.)


:)

-- Peter Fairbrother


Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-07 Thread Faré
On Sun, Oct 6, 2013 at 9:10 PM, Phillip Hallam-Baker hal...@gmail.com wrote:
 I am even
 starting to think that maybe we should start using the NSA checksum
 approach.

 Incidentally, that checksum could be explained simply by padding prepping an
 EC encrypted session key. PKCS#1 has similar stuff to ensure that there is
 no known plaintext in there. Using the encryption algorithm instead of the
 OAEP hash function makes much better sense.

Wait, am I misunderstanding, or is the NSA recommending that people
checksum by leaving behind the key encrypted with a backdoor that the NSA,
and the NSA only, can read? Wow.

—♯ƒ • François-René ÐVB Rideau •ReflectionCybernethics• http://fare.tunes.org
Few facts are more revealing than the direction people travel
when they vote with their feet. — Don Boudreaux http://bit.ly/afZgx2
