Re: [Cryptography] TLS2

2013-09-30 Thread Adam Back

On Mon, Sep 30, 2013 at 11:49:49AM +0300, ianG wrote:

On 30/09/13 11:02 AM, Adam Back wrote:

no ASN.1, and no X.509 [...], encrypt and then MAC only, no non-forward
secret ciphersuites, no baked-in key length limits [...] support
soft-hosting [...] Add TOFU for self-signed keys.


Personally, I'd do it over UDP (and swing for an IP allocation).  


I think lack of soft-hosting support in TLS was a mistake - it's another
reason not to turn on SSL (IPv4 addresses are scarce and can only host one
SSL domain per IP#, which means it costs more, or a small hosting company can
only host a limited number of domains, and so has to charge more for SSL):
and I don't see why including the domain in the client hello is a cost worth
avoiding.  There's an RFC for how to retrofit soft-host support via
client-hello into TLS but it's not deployed AFAIK.

The other approach is to bump up security - i.e. start with HTTP, then switch
to TLS - however that is generally a bad direction, as it invites attacks on
the unauthenticated destination redirected to.  I know there is also another
direction: to indicate via certification that a domain should be TLS-only.
But as a friend of mine was saying 10 years ago, it's past time to deprecate
HTTP in favor of TLS.

Both client and server must have a PP key pair.  


Well clearly passwords are bad and near the end of their lifetime with GPU
advances, and even amplified password-authenticated key exchanges like EKE
have a (so far) unavoidable design requirement that the server store
something offline-grindable, which could be key-stretched, but that's it.
PBKDF2 + current GPU or ASIC farms = game over for passwords.
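
To make the arithmetic concrete, here is a minimal sketch (Python, stdlib
only) of the stretching being written off; the iteration count and guess
rates are illustrative assumptions, not measurements:

    import hashlib
    import os

    # Stretching buys only a linear factor: if a GPU/ASIC farm does 10^9
    # guesses/sec against a plain hash, 100,000 PBKDF2 iterations still
    # leave ~10^4 stretched guesses/sec per device -- a weak password
    # (~2^30 guesses) falls in roughly a day.
    salt = os.urandom(16)
    derived = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)
    print(derived.hex())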


However, whether it's password based or challenge-response based, I think we
ought to address the phish problem, which is after all what EKE was designed
for (in 1992 (EKE) and 1993 (password-augmented EKE)).  Maybe as it's been 20
years we might actually do it.  (Seems to be the general rule of thumb for
must-use crypto inventions that it takes 20 years until the security software
industry even tries.)  Of course patents only slow it down.  And
coincidentally the original AKE patent expired last month.  (And I somehow
doubt Lucent, the holder, got any licensing revenue worth speaking of between
1993 and now.)

By pinning the EKE or AKE to the domain, I mean that there should be no MITM
that can repurpose a challenge based on a phish at telecon.com to telecom.com,
because the browser enforces that the EKE/AKE challenge-response combines the
domain connected to into the response in a non-malleable way.  (EKE/AKE are
anyway immune to offline grinding of the exchanged messages.)
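
As a sketch of just that binding step (not EKE/AKE itself - the shared key
here merely stands in for the session secret the exchange would produce,
and all names are illustrative):

    import hashlib
    import hmac
    import os

    def bind_response(shared_key: bytes, domain: bytes, challenge: bytes) -> bytes:
        # Fold the domain the client actually connected to into the response
        # non-malleably: the server verifies against its own name, so a MITM
        # at telecom.com cannot replay the result at telecon.com.
        return hmac.new(shared_key, domain + b"|" + challenge, hashlib.sha256).digest()

    shared_key = os.urandom(32)   # stand-in for the EKE/AKE session secret
    challenge = os.urandom(16)
    assert bind_response(shared_key, b"telecon.com", challenge) != \
           bind_response(shared_key, b"telecom.com", challenge)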


Clearly you want to tie that also back to the domain's TLS auth key,
otherwise you just invite DNS exploits, which are trivial via ARP poisoning,
DNS cache-poisoning, TCP/UDP session hijack etc. depending on the network
scenario.
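
One cheap way to tie an exchange to the server key without certificates is
the trust-on-first-use pinning mentioned for self-signed keys above; a
minimal sketch (the file name and JSON store are hypothetical):

    import hashlib
    import json
    import os

    PIN_FILE = "known_hosts.json"   # illustrative path

    def tofu_check(domain: str, server_pubkey: bytes) -> bool:
        # Remember the first key seen for a domain; reject any later
        # connection that presents a different key.
        pins = {}
        if os.path.exists(PIN_FILE):
            with open(PIN_FILE) as f:
                pins = json.load(f)
        fingerprint = hashlib.sha256(server_pubkey).hexdigest()
        if domain not in pins:
            pins[domain] = fingerprint
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f)
            return True                      # first sight: accept and pin
        return pins[domain] == fingerprint   # afterwards: must match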

And the browser vendors need, in the case of passwords/AKE, to include a
secure UI that cannot be indistinguishably pasted over by carefully aligned
javascript popups.

(The other defense, SecurID and its clones, can help prop up AKE/passwords.)


Both, used every time to start the session, both sides authenticating each
other at the key level.  Any question of certificates is kicked out to a
higher application layer with key-based identities established.


While certs are a complexity it would be nice to avoid, I think that
reference to something external and bloated can be a problem, as then, like
now, you pollute an otherwise clean standard (nice simple BNF definition)
with something monstrous like ASN.1 and X.500 naming via X.509.  Maybe you
could profile something like OpenPGP, though (it has its own crappy legacy -
they're onto v5 key formats by now, and some of the earlier versions have
their own problems, e.g. fingerprint ambiguity arising from ambiguous
encoding, and other issues, including too many variants and extra
mandatory/optional extensions).  Of course the issue with rejecting formats
below a certain level is that the WoT is shrunk, and anyway the WoT is not
that widely used outside of operational security/crypto industry circles.
That second argument may push more towards SSH-format keys, which are by
comparison extremely simple, and for which there has recently been talk of
introducing simple certification, as I recall.

Adam
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] TLS2

2013-09-30 Thread Adam Back

If we're going to do that I vote no ASN.1, and no X.509.  Just a BNF format
like the base SSL protocol; encrypt and then MAC only, no non-forward-secret
ciphersuites, no baked-in key length limits.  I think I'd also vote for a
lot fewer modes and ciphers.  And probably non-NIST curves while we're at it.
And support soft-hosting by sending the server domain in the client-hello.
Add TOFU for self-signed keys.  Maybe base it on PGP so you get web of trust,
though it started to get moderately complicated to even handle PGP
certificates.
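
For the encrypt-then-MAC vote, a minimal sketch of the ordering in question
(assuming a recent version of the third-party Python `cryptography` package
for AES-CTR; key management, nonce rules and framing are elided):

    import hashlib
    import hmac
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
        nonce = os.urandom(16)
        encryptor = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
        ct = nonce + encryptor.update(plaintext) + encryptor.finalize()
        tag = hmac.new(mac_key, ct, hashlib.sha256).digest()  # MAC the ciphertext
        return ct + tag

    def unseal(enc_key: bytes, mac_key: bytes, sealed: bytes) -> bytes:
        ct, tag = sealed[:-32], sealed[-32:]
        expect = hmac.new(mac_key, ct, hashlib.sha256).digest()
        if not hmac.compare_digest(expect, tag):
            raise ValueError("bad MAC")       # reject before any decryption
        decryptor = Cipher(algorithms.AES(enc_key), modes.CTR(ct[:16])).decryptor()
        return decryptor.update(ct[16:]) + decryptor.finalize()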

Adam

On Sun, Sep 29, 2013 at 10:51:26AM +0300, ianG wrote:

On 28/09/13 20:07 PM, Stephen Farrell wrote:


b) is TLS1.3 (hopefully) and maybe some extensions for earlier
   versions of TLS as well



SSL/TLS is a history of fiddling around at the edges.  If there is to 
be any hope, start again.  Remember, we know so much more now.  Call 
it TLS2 if you want.


Start with a completely radical set of requirements.  Then make it 
so. There are a dozen people here who could do it.


Why not do the requirements, then ask for competing proposals?  
Choose 1.  It worked for NIST, and committees didn't work for anyone.


A competition for TLS2 would bring out the best and leave the 
bureaurats fuming and powerless.




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] three crypto lists - why and which

2013-09-30 Thread Adam Back

I am not sure if everyone is aware that there is also an unmoderated crypto
list, because I see old familiar names posting on the moderated crypto list
that I do not see posting on the unmoderated list.  The unmoderated list has
been running continuously (new posts every day with no gaps) since March
2010, with interesting discussion, relatively low noise, and not-firehose volume.

http://lists.randombit.net/mailman/listinfo/cryptography

The actual reason for the creation of that list was that Perry's list went
through a hiatus when Perry stopped approving/forwarding posts, e.g.

http://www.mail-archive.com/cryptography@metzdowd.com/

originally Nov 2009 - Mar 2010 (I presume the Mar 2010 restart was motivated
by the creation of the randombit list starting in the same month), but more
recently a Sep 2010 to May 2013 gap (minus traffic in Aug 2011).

http://www.metzdowd.com/pipermail/cryptography/

I have no desire to pry into Perry's personal circumstances as to why this
huge gap happened, and he should be thanked for the significant moderation
effort he has put into creating this low-noise environment; but despite that,
it is bad for cryptography if people's means of technical interaction
spuriously stops.  Perry mentioned recently that he now has backup
moderators - OK, so good.

There is now also the cypherpunks list, which has picked up, and covers a
wider mix of topics: censorship-resistant technology ideas, forays into
ideology, etc.  Moderation is even lower than randombit but there is no spam;
noise is slightly higher but quite reasonable so far.  And there is now a
domain name that is not al-quaeda.net (seriously?  is that even funny?):
cpunks.org.

https://cpunks.org/pipermail/cypherpunks/ 


At least I enjoy it and see some familiar names posting, last seen a decade
or more ago.

Anyway my reason for posting was threefold: a) to make people aware of the
randombit crypto list, b) the rebooted cypherpunks list (*), and c) how to
use randombit (unmoderated) and metzdowd.


For my tastes, sometimes Perry will cut off a discussion that I thought was
just warming up, because I wanted to get into the detail, so I tend to prefer
the unmoderated list.  But it's kind of a weird situation, because there are
people I want views and comments from who are on the metzdowd list and who,
as far as I know, are not on the randombit list, and there's no convenient
way to migrate a conversation other than everyone subscribing to both.  Cc to
both perhaps works somewhat; I do that sometimes, though as a general
principle it can be annoying when people Cc too many lists.

Anyway thanks for your attention, back to the unmoderated (or moderated)
discussion!

Adam
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-09-30 Thread Taral
On Sun, Sep 29, 2013 at 9:15 PM, Viktor Dukhovni
cryptogra...@dukhovni.org wrote:
 On Mon, Sep 30, 2013 at 10:07:14AM +1000, James A. Donald wrote:
 Therefore, everyone should use Curve25519, which we have every
 reason to believe is unbreakable.

 Superseded by the improved Curve1174.

Hardly. Elligator 2 works fine on curve25519.

-- 
Taral tar...@gmail.com
Please let me know if there's any further trouble I can give you.
-- Unknown
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] TLS2

2013-09-30 Thread Stephen Farrell


 On 29 Sep 2013, at 08:51, ianG i...@iang.org wrote:
 
 On 28/09/13 20:07 PM, Stephen Farrell wrote:
 
 b) is TLS1.3 (hopefully) and maybe some extensions for earlier
versions of TLS as well
 
 
 SSL/TLS is a history of fiddling around at the edges.  If there is to be any 
 hope, start again.  Remember, we know so much more now.  Call it TLS2 if you 
 want.
 
 Start with a completely radical set of requirements.  Then make it so. There 
 are a dozen people here who could do it.
 
 Why not do the requirements, then ask for competing proposals?  Choose 1.  It 
 worked for NIST, and committees didn't work for anyone.
 
 A competition for TLS2 would bring out the best and leave the bureaurats 
 fuming and powerless.
 

Sounds like a suggestion to make on the tls wg list. It might get some support, 
though I'd guess not everyone would want to do that

S

 
 iang
 ___
 The cryptography mailing list
 cryptography@metzdowd.com
 http://www.metzdowd.com/mailman/listinfo/cryptography
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-09-30 Thread David Kuehling
 James == James A Donald jam...@echeque.com writes:

 Gregory Maxwell on the Tor-talk list has found that NIST approved
 curves, which is to say NSA approved curves, were not generated by the
 claimed procedure, which is a very strong indication that if you use
 NIST curves in your cryptography, NSA can read your encrypted data.

Just for completeness, I think this is the mail you're referring to:

https://lists.torproject.org/pipermail/tor-talk/2013-September/029956.html

David
-- 
GnuPG public key: http://dvdkhlng.users.sourceforge.net/dk2.gpg
Fingerprint: B63B 6AF2 4EEB F033 46F7  7F1D 935E 6F08 E457 205F


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] NIST about to weaken SHA3?

2013-09-30 Thread James A. Donald

On 2013-09-30 14:34, Viktor Dukhovni wrote:

On Mon, Sep 30, 2013 at 05:12:06AM +0200, Christoph Anton Mitterer wrote:


Not sure whether this has been pointed out / discussed here already (but
I guess Perry will reject my mail in case it has):

https://www.cdt.org/blogs/joseph-lorenzo-hall/2409-nist-sha-3

I call FUD.  If progress is to be made, fight the right fights.

The SHA-3 specification was not weakened; the blog confuses the
effective security of the algorithm with the *capacity* of the
sponge construction.


SHA3 has been drastically weakened from the proposal that was submitted 
and cryptanalyzed:  See for example slides 43 and 44 of

https://docs.google.com/file/d/0BzRYQSHuuMYOQXdHWkRiZXlURVE/edit



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] check-summed keys in secret ciphers?

2013-09-30 Thread ianG

On 29/09/13 16:01 PM, Jerry Leichter wrote:

...e.g., according to Wikipedia, BATON is a block cipher with a key length of 
320 bits (160 of them checksum bits - I'd guess that this is an overt way for 
NSA to control who can use stolen equipment, as it will presumably refuse to 
operate at all with an invalid key). ...



I'm not really understanding the need for checksums on keys.  I can sort 
of see the battlefield requirement that comms equipment that is stolen 
can't then be utilized in either a direct sense (listening in) or 
re-sold to some other theater.


But it still doesn't quite work.  It seems antithetical to NSA's
obsession with security at Suite A levels: if they are worried about the
gear being snatched, they shouldn't have secret algorithms in them at all.


Using checksums also doesn't make sense, as once the checksum algorithm 
is recovered, the protection is dead.  I would have thought a HMAC 
approach would be better, but this then brings in the need for a 
centralised key distro approach.  Ok, so that is typically how 
battlefield codes work -- one set for everyone -- but I would have 
thought they'd have moved on from the delivery SPOF by now.





Cryptographic challenge:  If you have a sealed, tamper-proof box that implements, say, 
BATON, you can easily have it refuse to work if the key presented doesn't checksum 
correctly.  In fact, you'd likely have it destroy itself if presented with too many 
invalid keys.  NSA has always been really big about using such sealed modules for their 
own algorithms.  (The FIPS specs were clearly drafted by people who think in these terms. 
 If you're looking at them while trying to get software certified, many of the provisions 
look very peculiar.  OK, no one expects your software to be potted in epoxy (opaque 
in the ultraviolet - or was it infrared?); but they do expect various kinds of 
isolation that just affect the blocks on a picture of your software's implementation; 
they have no meaningful effect on security, which unlike hardware can't enforce any 
boundaries between the blocks.)

Anyway, this approach obviously depends on the ability of the hardware to 
resist attacks.  Can one design an algorithm which is inherently secure against 
such attacks?  For example, can one design an algorithm that's strong when used 
with valid keys but either outright fails (e.g., produces indexes into 
something like S-boxes that are out of range) or is easily invertible if used 
with invalid keys (e.g., has a key schedule that with invalid keys produces all 
0's after a certain small number of rounds)?  You'd need something akin to 
asymmetric cryptography to prevent anyone from reverse-engineering the checksum 
algorithm from the encryption algorithm, but I know of no fundamental reason 
why that couldn't be done.



It also seems a little overdone to do that in the algorithm.  Why not 
implement a kill switch with a separate parallel system?  If one is 
designing the hardware, then one has control over these things.


I guess then I really don't understand the threat they are trying to 
address here.


Any comments from the wider audience?

iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] TLS2

2013-09-30 Thread ianG

On 30/09/13 11:02 AM, Adam Back wrote:

If we're going to do that I vote no ASN.1, and no X.509.  Just a BNF format
like the base SSL protocol; encrypt and then MAC only, no non-forward-secret
ciphersuites, no baked-in key length limits.  I think I'd also vote for a
lot fewer modes and ciphers.  And probably non-NIST curves while we're at it.
And support soft-hosting by sending the server domain in the client-hello.
Add TOFU for self-signed keys.  Maybe base it on PGP so you get web of trust,
though it started to get moderately complicated to even handle PGP
certificates.



Exactly.  By setting the *high-level* requirements, we can show how real 
software engineering is done.  In small teams.


Personally, I'd do it over UDP (and swing for an IP allocation).  So it 
incorporates the modes of TLS and UDP, both.  Network packets orderable 
but not ordered, responses have to identify their requests.
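
A sketch of that framing (field sizes illustrative, crypto elided): each
datagram carries an explicit request id, so replies can be matched however
they arrive:

    import os
    import struct

    def frame(request_id: int, payload: bytes) -> bytes:
        return struct.pack("!Q", request_id) + payload

    def unframe(datagram: bytes) -> tuple[int, bytes]:
        (request_id,) = struct.unpack("!Q", datagram[:8])
        return request_id, datagram[8:]

    pending = {}                                  # request_id -> state
    rid = int.from_bytes(os.urandom(8), "big")
    pending[rid] = "awaiting reply"
    got_id, body = unframe(frame(rid, b"hello"))
    assert got_id in pending and body == b"hello"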


One cipher/mode == one AE.  One curve, if the users are polite they 
might get another in v2.1.


Both client and server must have a PP key pair.  Both, used every time 
to start the session, both sides authenticating each other at the key 
level.  Any question of certificates is kicked out to a higher 
application layer with key-based identities established.





Adam

On Sun, Sep 29, 2013 at 10:51:26AM +0300, ianG wrote:

On 28/09/13 20:07 PM, Stephen Farrell wrote:


b) is TLS1.3 (hopefully) and maybe some extensions for earlier
   versions of TLS as well



SSL/TLS is a history of fiddling around at the edges.  If there is to
be any hope, start again.  Remember, we know so much more now.  Call
it TLS2 if you want.

Start with a completely radical set of requirements.  Then make it so.
There are a dozen people here who could do it.

Why not do the requirements, then ask for competing proposals? Choose
1.  It worked for NIST, and committees didn't work for anyone.

A competition for TLS2 would bring out the best and leave the
bureaurats fuming and powerless.



iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] [cryptography] TLS2

2013-09-30 Thread Ben Laurie
On 30 September 2013 10:47, Adam Back a...@cypherspace.org wrote:

 I think lack of soft-hosting support in TLS was a mistake - it's another
 reason not to turn on SSL (IPv4 addresses are scarce and can only host one
 SSL domain per IP#, which means it costs more, or a small hosting company
 can only host a limited number of domains, and so has to charge more for
 SSL): and I don't see why including the domain in the client hello is a
 cost worth avoiding.  There's an RFC for how to retrofit soft-host support
 via client-hello into TLS but it's not deployed AFAIK.


Boy, are you out of date:
http://en.wikipedia.org/wiki/Server_Name_Indication.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

[Cryptography] encoding formats should not be committee'ized

2013-09-30 Thread ianG

On 29/09/13 16:13 PM, Jerry Leichter wrote:

On Sep 26, 2013, at 7:54 PM, Phillip Hallam-Baker wrote:

...[W]ho on earth thought DER encoding was necessary or anything other than 
incredible stupidity?...

It's standard.  :-)

We've been through two rounds of standard data interchange representations:

1.  Network connections are slow, memory is limited and expensive, we can't 
afford any extra overhead.  Hence DER.
2.  Network connections are fast, memory is cheap, we don't have to worry about 
them - toss in every last feature anyone could possibly want.  Hence XML.

Starting from opposite extremes, committees of standards experts managed to 
produce results that are too complex and too difficult for anyone to get right 
- and which in cryptographic contexts manage to share the same problem of 
multiple representations that make signing such a joy.

BTW, the *idea* behind DER isn't inherently bad - but the way it ended up is 
another story.  For a comparison, look at the encodings Knuth came up with in 
the TeX world.  Both dvi and pk files are extremely compact binary 
representations - but correct encoders and decoders for them are plentiful.  
(And it's not as if the Internet world hasn't come up with complex, difficult 
encodings when the need arose - see IDNA.)



Experience suggests that asking a standards committee to do the encoding 
format is a disaster.


I just looked at my code, which does something we call Wire, and it's 
700 loc.  Testing code is about a kloc I suppose.  Writing reference 
implementations is a piece of cake.
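
Not the actual Wire format (it isn't shown here), but a sketch of how small
such a deterministic encoder can be - every field is a 4-byte big-endian
length followed by the bytes, and a record is the concatenation of its
fields:

    import struct

    def encode(fields: list[bytes]) -> bytes:
        return b"".join(struct.pack("!I", len(f)) + f for f in fields)

    def decode(buf: bytes) -> list[bytes]:
        fields, off = [], 0
        while off < len(buf):
            (n,) = struct.unpack_from("!I", buf, off)
            off += 4
            fields.append(buf[off:off + n])
            off += n
        return fields

    # One value, one encoding -- no BER/DER-style multiple representations.
    assert decode(encode([b"key", b"value", b""])) == [b"key", b"value", b""]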


Why can't we just designate some big player to do it, and follow suit? 
Why argue in committee?




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] NIST about to weaken SHA3?

2013-09-30 Thread Viktor Dukhovni
On Mon, Sep 30, 2013 at 05:45:52PM +1000, James A. Donald wrote:

 On 2013-09-30 14:34, Viktor Dukhovni wrote:
 On Mon, Sep 30, 2013 at 05:12:06AM +0200, Christoph Anton Mitterer wrote:
 
 Not sure whether this has been pointed out / discussed here already (but
 I guess Perry will reject my mail in case it has):
 
 https://www.cdt.org/blogs/joseph-lorenzo-hall/2409-nist-sha-3
 I call FUD.  If progress is to be made, fight the right fights.
 
 The SHA-3 specification was not weakened, the blog confuses the
 effective security of the algorithm with the *capacity* of the
 sponge construction.
 
 SHA3 has been drastically weakened from the proposal that was
 submitted and cryptanalyzed:  See for example slides 43 and 44 of
 https://docs.google.com/file/d/0BzRYQSHuuMYOQXdHWkRiZXlURVE/edit

Have you read the SAKURA paper?

http://eprint.iacr.org/2013/231.pdf

In section 6.1 it describes 4 capacities for the SHA-2 drop-in
replacements, and in 6.2 these are simplified to two (and strengthened
for the truncated digests), i.e. the proposal chosen by NIST.
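
To make the two choices concrete, the generic flat-sponge bounds as a quick
back-of-envelope (n = digest bits, c = capacity):

    # Security levels in bits for an n-bit digest from a sponge of capacity c.
    def preimage_bits(n: int, c: int) -> int:
        return min(n, c // 2)

    def collision_bits(n: int, c: int) -> int:
        return min(n // 2, c // 2)

    # SHA3-256 as submitted (c = 512) vs. the flat c = 256 proposal:
    for c in (512, 256):
        print(c, preimage_bits(256, c), collision_bits(256, c))
    # c=512 -> 256-bit preimage, 128-bit collision
    # c=256 -> 128-bit preimage, 128-bit collision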

Should one also accuse ESTREAM of maliciously weakening SALSA?  Or
might one admit the possibility that winning designs in contests
are at times quite conservative and that one can reasonably
standardize less conservative parameters that are more competitive
in software?

If SHA-3 is going to be used, it needs to offer some advantages
over SHA-2.  Good performance and built-in support for tree hashing
(ZFS, ...) are acceptable reasons to make the trade-off explained
on slides 34, 35 and 36 of:


https://ae.rsaconference.com/US13/connect/fileDownload/session/397EA47B1FB103F0B3E87D6163C7129E/CRYP-W23.pdf

-- 
Viktor.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] psyops

2013-09-30 Thread David Honig




Bumper sticker:

Remember, the NSA is Backing You Up 


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] check-summed keys in secret ciphers?

2013-09-30 Thread Bill Frantz

On 9/30/13 at 1:16 AM, i...@iang.org (ianG) wrote:


Any comments from the wider audience?


I talked with a park ranger who had used a high-precision GPS 
system which decoded the selective availability encrypted 
signal. Access to the device was very tightly controlled and it 
had a control-meta-shift-whoopie which erased the key should the 
device be in danger of being captured. And this was a relatively 
low security device.


Cheers - Bill

---
Bill Frantz        | After all, if the conventional wisdom was working, the
408-356-8506       | rate of systems being compromised would be going down,
www.pwpconsult.com | wouldn't it? -- Marcus Ranum

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] TLS2

2013-09-30 Thread Hanno Böck
On Mon, 30 Sep 2013 11:47:37 +0200
Adam Back a...@cypherspace.org wrote:

 I think lack of soft-hosting support in TLS was a mistake - it's
 another reason not to turn on SSL (IPv4 addresses are scarce and can
 only host one SSL domain per IP#, which means it costs more, or a
 small hosting company can only host a limited number of domains, and
 so has to charge more for SSL): and I don't see why including the
 domain in the client hello is a cost worth avoiding.  There's an RFC
 for how to retrofit soft-host support via client-hello into TLS but
 it's not deployed AFAIK.

It's called SNI and it is widely deployed.  All browsers and all
relevant web servers support it.

However, it has one drawback: it doesn't work with SSLv3, which means
it breaks every time browsers fall back to SSLv3.  And they do that
quite often, because they retry with SSLv3 if TLS connections fail.
That is also a security problem and allows downgrade attacks, but
mainly it means that with weak internet connections you often get
downgraded connections.
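
For reference, the hostname travels in the ClientHello, which is why one IP
can serve many certificates - and why extension-less SSLv3 breaks it.  A
minimal client-side illustration (Python stdlib; example.org stands in for
any soft-hosted name):

    import socket
    import ssl

    context = ssl.create_default_context()
    with socket.create_connection(("example.org", 443)) as sock:
        # server_hostname puts the SNI extension into the ClientHello.
        with context.wrap_socket(sock, server_hostname="example.org") as tls:
            print(tls.version(), tls.getpeercert()["subject"])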

-- 
Hanno Böck
http://hboeck.de/

mail/jabber: ha...@hboeck.de
GPG: BBB51E42


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] NIST about to weaken SHA3?

2013-09-30 Thread Christoph Anton Mitterer
On Mon, 2013-09-30 at 14:44 +, Viktor Dukhovni wrote:
 If SHA-3 is going to be used, it needs to offer some advantages
 over SHA-2.  Good performance and built-in support for tree hashing
 (ZFS, ...) are acceptable reasons to make the trade-off explained
 on slides 34, 35 and 36 of:

Well, I think the most important advantage would be more security...
performance can only have far lower priority... otherwise the whole
thing is rubbish.
Sure, SHA2 is far from being broken, but we've seen some first scratches
in SHA1 already... so it doesn't hurt if we have an algo which is based
on different principles and has a high security margin.

I guess we've seen that in the most recent developments... better take
twice or three times what we expect to be the reasonable security
margins, since we don't exactly know what the NSA and friends are
capable of.  Better try to combine different algos, for the same reason.


NIST has somewhat proven that they can't be trusted, IMHO, regardless
of whether they just didn't notice what the NSA did, whether they
happily helped the agency, or whether they were forced to by law.
For us this doesn't matter.

To my understanding, performance wasn't the top priority during the SHA3
competition, otherwise other algos might have been even better than
Keccak.
So this move now is highly disturbing, and people should question what
NIST/NSA know that we don't.
Can you really exclude for sure that they haven't found some weaknesses
which only apply at lower capacities?


In a way, that reminds me of ECC and the issues with the curves (not from
a mathematical POV, of course)... we have some (likely) fine
algorithm... but the bad[0] guys standardise some parameters (like the
curves)...
At some point we smell the scandal and start wondering if we wouldn't
be far better off with a different set of curves... but in practice it's
more or less too late then (well, at least it's very problematic), since
all the world is using that set of standardised curves.

It seems a bit as if we are now doing the same... following NIST/NSA like
sheep.


Keccak seems to be a fine algorithm... perhaps it would be better to
scrap SHA3 altogether and let the community decide upon a common set of
concrete algos (i.e. a community-SHA3), which is then to be standardised
by the IETF, or whatever else.

And better take two or four times the capacity and/or bit-lengths than
what we optimistically consider to be very secure.


Cheers,
Chris.

[0] In contrast to the evil guys, like terrorists and so on.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] check-summed keys in secret ciphers?

2013-09-30 Thread John Kelsey
GOST was specified with S-boxes that could be different for different
applications, and you could choose S-boxes that make GOST quite weak.  So
that's one example.

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] check-summed keys in secret ciphers?

2013-09-30 Thread Jerry Leichter
On Sep 30, 2013, at 4:16 AM, ianG i...@iang.org wrote:
 I'm not really understanding the need for checksums on keys.  I can sort of 
 see the battlefield requirement that comms equipment that is stolen can't 
 then be utilized in either a direct sense (listening in) or re-sold to some 
 other theater.
I'm *guessing* that this is what checksums are for, but I don't actually 
*know*.  (People used to wonder why NSA asked that DES keys be checksummed - 
the original IBM Lucifer algorithm used a full 64-bit key, while DES required 
parity bits on each byte.  On the one hand, this decreased the key size from 64 
to 56 bits; on the other, it turns out that under differential crypto attack, 
DES only provides about 56 bits of security anyway.  NSA, based on what we saw 
in the Clipper chip, seems to like running crypto algorithms tight:  Just as 
much effective security as the key size implies, exactly enough rounds to 
attain it, etc.  So *maybe* that was why they asked for 56-bit keys.  Or maybe 
they wanted to make brute force attacks easier for themselves.)
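
The parity convention in question, as a short illustrative check (each key
byte carries seven key bits plus one odd-parity bit in the low position):

    def set_des_parity(key: bytes) -> bytes:
        out = bytearray()
        for b in key:
            b &= 0xFE                       # clear the parity bit
            ones = bin(b).count("1") & 1    # parity of the 7 key bits
            out.append(b | (ones ^ 1))      # force odd overall parity
        return bytes(out)

    key = set_des_parity(bytes(range(8)))
    assert all(bin(b).count("1") % 2 == 1 for b in key)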

 But it still doesn't quite work.  It seems antithetical to NSA's obsession 
 with security at Suite A levels, if they are worried about the gear being 
 snatched, they shouldn't have secret algorithms in them at all.
This reminds me of the signature line someone used for years:  A boat in a 
harbor is safe, but that's not what boats are for.  In some cases you need to 
communicate securely with someone who's in harm's way, so any security device 
you give him is also in harm's way.  This is hardly a new problem.  Back in 
WW I, code books on ships had lead covers and anyone who had access to them had 
an obligation to see they were tossed overboard if the ship was about to fall 
into enemy hands.  Attackers tried very hard to get to the code book before it 
could be tossed.

Embassies need to be able to communicate at very high levels of security.  They 
are normally considered quite secure, but quiet attacks against them do occur.  
(There are some interesting stories of such things in Peter Wright's 
Spycatcher, which tells the story of his career in MI5.  If you haven't read it 
- get a copy right now.)  And of course people always look at the seizure of 
the US embassy in Iran.  I don't know if any crypto equipment was compromised, 
but it has been reported that the Iranians were able, by dint of a huge amount 
of manual labor, to piece back together shredded documents.  (This led to an
upgrade of shredders not just by the State Department but in the market at
large, which came to demand cross-cut shredders, which cut the paper into
longitudinal strips, but then cut across the strips to produce pieces no more
than an inch or so long.  Those probably could be re-assembled using
computerized techniques - originally developed to re-assemble old parchments
like the Dead Sea Scrolls.)

Today, there are multiple layers of protection.  The equipment is designed to 
zero out any embedded keys if tampered with.  (This is common even in the 
commercial market for things like ATM's.)  A variety of techniques are used to 
make it hard to reverse-engineer the equipment.  (In fact, even FIPS 
certification of hardware requires some of these measures.)  At the extreme, 
operators of equipment are supposed to destroy it to prevent its capture.  
(There was a case a number of years back of a military plane that was forced by 
mechanical trouble to land in China.  A big question was how much of the 
equipment had been destroyed.  There are similar cases even today with ships, 
in which people on board take axes to the equipment.)

 Using checksums also doesn't make sense, as once the checksum algorithm is 
 recovered, the protection is dead.
The hardware is considered hard to break into, and one hopes it's usually
destroyed.  The military, and apparently the NSA, believe in defense in
depth.  If someone manages to get the checksum algorithm out, they probably
have the crypto algorithm, too.

 I would have thought a HMAC approach would be better, but this then brings in 
 the need for a centralised key distro approach.
Why HMAC?  If you mean a keyed MAC ... it's not better.   But a true signature 
would mean that even completely breaking a captured device doesn't help you 
generate valid keys.  (Of course, you can modify the device - or a cloned copy 
- to skip the key signature check - hence my question as to whether one could 
create a crypto system that *inherently* had the properties that signed keys 
naively provide.)

  Ok, so that is typically how battlefield codes work -- one set for everyone 
 -- but I would have thought they'd have moved on from the delivery SPOF by 
 now.
In a hierarchical organization, centralized means of control are considered 
important.  There was an analysis of the (bad) cryptography in secure radios 
for police and fire departments, and it mainly relied on distribution of keys 
from a central source.

 ...It also seems a 

Re: [Cryptography] encoding formats should not be committee'ized

2013-09-30 Thread Mark Atwood
 Why can't we just designate some big player to do it, and follow suit? Why
 argue in committee?

Well, there are Protobufs, and there is Thrift, and there is
MessagePack, and there is Avro...

http://www.igvita.com/2011/08/01/protocol-buffers-avro-thrift-messagepack/

..m
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] PRISM-Proofing and PRISM-Hardening

2013-09-30 Thread Salz, Rich
Bill said he wanted a piece of paper that could help verify his bank's 
certificate.  I claimed he's in the extreme minority who would do that and he 
asked for proof.

I can only, vaguely, recall that one of the East Coast big banks (or perhaps 
the only one that is left) at one point had a third-party cert for their online 
banking and that it encouraged phishing of their customers.  See also 
http://en.wikipedia.org/wiki/Phishing#cite_note-87 and 
http://en.wikipedia.org/wiki/Phishing#cite_note-88 which say simple things
like "show the right image" don't work.

/r$

--  
Principal Security Engineer
Akamai Technology
Cambridge, MA
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-09-30 Thread Peter Fairbrother

On 26/09/13 07:52, ianG wrote:

On 26/09/13 02:24 AM, Peter Fairbrother wrote:

On 25/09/13 17:17, ianG wrote:

On 24/09/13 19:23 PM, Kelly John Rose wrote:


I have always approached that no encryption is better than bad
encryption, otherwise the end user will feel more secure than they
should and is more likely to share information or data they should not
be on that line.



The trap of a false sense of security is far outweighed by the benefit
of a good enough security delivered to more people.


Given that mostly security works (or it should), what's really important 
is where that security fails - and good enough security can drive out 
excellent security.


We can easily have excellent security in TLS (mk 2?) - the crypto part
of TLS can be unbreakable, code to follow (hah!) - but 1024-bit DHE isn't,
say, unbreakable for 10 years, far less for a lifetime.



We are only talking about security against an NSA-level opponent here. 
Is that significant?


Eg, Tor isn't robust against NSA-level opponents. Is OTR?


We're talking multiple orders of magnitude here.  The math that counts
is:

Security = Users * Protection.


No. No. No. Please, no? No. Nonononononono.

It's the sum over i of P_i * I_i, where P_i is the protection provided to
information i, and I_i is the importance of keeping information i
protected.



I'm sorry, I don't deal in omniscience.  Typically we as suppliers of
some security product have only the faintest idea what our users are up
to.  (Some consider this a good thing, it's a privacy quirk.)



No, and you don't know how important your opponent thinks the 
information is either, and therefore what resources he might be willing 
or able to spend to get access to it - but we can make some crypto which 
(we think) is unbreakable.


No matter who or what resources, unbreakable. You can rely on the math.

And it doesn't usually cost any more than we are willing to pay - heck, 
the price is usually lost in the noise.


Zero crypto (theory) failures.

Ok, real-world systems won't ever meet that standard - but please don't 
hobble them with failure before they start trying.



With that assumption, the various i's you list become some sort of
average


Do you mean I_i's?

Ah, average.  Which average might that be?  Hmmm, independent
distributions of two variables - are you going to average them, then
multiply the averages?


That approximation doesn't actually work very well, mathematically 
speaking - as I'm sure you know.
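
Peter's objection in two lines of arithmetic (numbers invented purely for
illustration): when protection and importance are anti-correlated, the
product of averages badly misstates the weighted sum:

    P = [0.99, 0.10]    # protection provided to item i
    I = [1.0, 100.0]    # importance of keeping item i protected

    true_score = sum(p * i for p, i in zip(P, I))     # 10.99
    avg_approx = (sum(P) / 2) * (sum(I) / 2) * 2      # 55.045 -- 5x too high
    print(true_score, avg_approx)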



This is why the security model that is provided is typically
one-size-fits-all, and the most successful products are typically the
ones with zero configuration and the best fit for the widest market.


I totally agree with zero configuration - and best fit - but you are 
missing the main point.


Would 1024-bit DHE give a reasonable expectation of say, ten years 
unbreakable by NSA?


If not, and Manning or Snowden wanted to use TLS, they would likely be 
busted.


Incidentally, would OTR pass that test?



-- Peter Fairbrother

(sorry for the sloppy late reply)

(I'm talking about TLS2, not a BCP - but the BCP is significant)
(how's the noggin? how's Waterlooville?? can I come visit sometime?)
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] NIST about to weaken SHA3?

2013-09-30 Thread James A. Donald

On 2013-10-01 00:44, Viktor Dukhovni wrote:

Should one also accuse ESTREAM of maliciously weakening SALSA?  Or
might one admit the possibility that winning designs in contests
are at times quite conservative and that one can reasonably
standardize less conservative parameters that are more competitive
in software?


less conservative means weaker.

Weaker in ways that the NSA has examined, and the people that chose the 
winning design have not.


Why then hold a contest and invite outside scrutiny in the first place?

This is simply a brand new unexplained secret design emerging from the 
bowels of the NSA, which already gave us a variety of backdoored crypto.


The design process, the contest, the public examination, was a lie.

Therefore, the design is a lie.


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] TLS2

2013-09-30 Thread James A. Donald

On 2013-09-30 18:02, Adam Back wrote:

If we're going to do that I vote no ASN.1, and no X.509.  Just BNF format
like the base SSL protocol; 


Granted that ASN.1 is incomprehensible and horrid, but, since there is
an ASN.1 compiler that generates C code, we should not need to comprehend it.



base on PGP so you get web of trust,


PGP web of trust does not scale.


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] TLS2

2013-09-30 Thread Philipp Gühring
Hi,

What I personally think would be necessary for TLS2:

* At least one quantum-computing-resistant algorithm which must be usable
either as a replacement for DH+RSA+EC, or preferably as additional
strength (double encryption) for the transition period.

* Zero-knowledge password authentication (something like TLS-SRP), but
automatically re-encrypted in a normal server-authenticated TLS session
(so that it's still encrypted with the server if you used a weak password);
a toy sketch of the SRP idea follows below.

* Having client certificates be transmitted in the encrypted channel, not
in plaintext
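
For the zero-knowledge password point, a toy SRP-6a-style exchange (the
group N = 23, g = 5 is absurdly small, chosen only so the numbers fit on a
screen; a real deployment would use a standardized large safe-prime group
and the proper hashing/padding rules):

    import hashlib
    import secrets

    def H(*args) -> int:
        h = hashlib.sha256()
        for a in args:
            h.update(str(a).encode())
        return int.from_bytes(h.digest(), "big")

    N, g = 23, 5                 # toy safe-prime group, NOT for real use
    k = H(N, g) % N

    # Enrollment: the server stores (salt, verifier), never the password.
    password = b"correct horse"
    salt = secrets.randbelow(N)
    x = H(salt, password)
    v = pow(g, x, N)

    # Login: both sides end with the same secret, and the password never
    # crosses the wire.
    a = secrets.randbelow(N); A = pow(g, a, N)                 # client
    b = secrets.randbelow(N); B = (k * v + pow(g, b, N)) % N   # server
    u = H(A, B)
    S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
    S_server = pow(A * pow(v, u, N) % N, b, N)
    assert S_client == S_server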

Best regards,
Philipp 

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-09-30 Thread John Kelsey
Having read the mail you linked to, it doesn't say the curves weren't generated 
according to the claimed procedure.  Instead, it repeats Dan Bernstein's 
comment that the seed looks random, and that this would have allowed NSA to 
generate lots of curves till they found a bad one.  

It looks to me like there is no new information here, and no evidence of
wrongdoing that I can see.  If there is a weak curve class of density
greater than about one in 2^{80} that NSA knew about 15 years ago, and they
were sure nobody was ever going to find that weak curve class and exploit
it to break classified communications protected by it, then they could have
generated 2^{80} or so seeds to hit that weak curve class.
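
The seed-grinding argument being repeated here, as a caricature in code (the
derivation and the weakness test are both hypothetical stand-ins; the real
NIST procedure hashes the seed with SHA-1 and adds more structure):

    import hashlib

    def curve_b_from_seed(seed: bytes) -> int:
        # Stand-in for "coefficient pseudo-randomly derived from seed".
        return int.from_bytes(hashlib.sha256(seed).digest(), "big")

    def is_weak(b: int) -> bool:
        # Hypothetical detector known only to the attacker; here a
        # 1-in-2^20 class so the loop finishes quickly.
        return b % (2**20) == 0

    seed = 0
    while not is_weak(curve_b_from_seed(seed.to_bytes(8, "big"))):
        seed += 1
    # The published seed still "looks random" -- it proves nothing
    # about how it was chosen.
    print("innocent-looking seed:", seed)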

What am I missing?  Do you have evidence that the NIST curves are cooked?  
Because the message I saw didn't provide anything like that.  

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] TLS2

2013-09-30 Thread Tony Arcieri
On Mon, Sep 30, 2013 at 2:27 PM, James A. Donald jam...@echeque.com wrote:

 Granted that ASN.1 is incomprehensible and horrid, but, since there is an
 ASN.1 compiler that generates C code we should not need to comprehend it.


What about tools that want to comprehend it using something other than C
code?

The theoretical argument against something like this is that the resulting
C code is a "weird machine", i.e. ASN.1 cannot be understood by a pushdown
automaton or described by a context-free grammar.

See: http://www.cs.dartmouth.edu/~sergey/langsec/papers/langsec-tr.pdf

-- 
Tony Arcieri
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] TLS2

2013-09-30 Thread Tony Arcieri
On Mon, Sep 30, 2013 at 1:02 AM, Adam Back a...@cypherspace.org wrote:

 If we're going to do that I vote no ASN.1, and no X.509.  Just BNF format
 like the base SSL protocol; encrypt and then MAC only, no non-forward
 secret
 ciphersuites, no baked in key length limits.  I think I'd also vote for a
 lot less modes and ciphers.  And probably non-NIST curves while we're at
 it.


Sounds like you want CurveCP?

http://curvecp.org/

-- 
Tony Arcieri
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] NIST about to weaken SHA3?

2013-09-30 Thread Viktor Dukhovni
On Tue, Oct 01, 2013 at 07:21:03AM +1000, James A. Donald wrote:

 On 2013-10-01 00:44, Viktor Dukhovni wrote:
 Should one also accuse ESTREAM of maliciously weakening SALSA?  Or
 might one admit the possibility that winning designs in contests
 are at times quite conservative and that one can reasonably
 standardize less conservative parameters that are more competitive
 in software?
 
 less conservative means weaker.

Weakening SHA3 to gain cryptanalytic advantage does not make much
sense.  SHA3 collisions or preimages even at 80-bit cost don't
provide anything interesting to a cryptanalyst, and MITM attackers
will attack much softer targets.

We know exactly why it was weakened.  The proposed SHA3-256
digest gives 128 bits of security for both collisions and preimages.
Likewise the proposed SHA3-512 digest gives 256 bits of security
for both collisions and preimages.

 Weaker in ways that the NSA has examined, and the people that chose
 the winning design have not.

The lower capacity is not weaker in obscure ways.  If Keccak delivers
substantially less than c/2 security, then it should not have been
chosen at all.

If you believe that 128-bit preimage and collision resistance is
inadequate in combination with AES128, or 256-bit preimage and
collision resistance is inadequate in combination with AES256,
please explain.

 Why then hold a contest and invite outside scrutiny in the first place.?

The contest led to an excellent new hash function design.

 This is simply a brand new unexplained secret design emerging from
 the bowels of the NSA, which already gave us a variety of backdoored
 crypto.

Just because they're after you, doesn't mean they're controlling
your brain with radio waves.  Don't let FUD cloud your judgement.

-- 
Viktor.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] check-summed keys in secret ciphers?

2013-09-30 Thread arxlight
On 9/30/13 11:07 PM, Jerry Leichter wrote:
 On Sep 30, 2013, at 4:16 AM, ianG i...@iang.org wrote:

 But it still doesn't quite work.  It seems antithetical to NSA's obsession 
 with security at Suite A levels, if they are worried about the gear being 
 snatched, they shouldn't have secret algorithms in them at all.
 This reminds me of the signature line someone used for years:  A boat in a 
 harbor is safe, but that's not what boats are for.  In some cases you need to 
 communicate securely with someone who's in harm's way, so any security 
 device you give him is also in harm's way.  This is hardly a new problem.  
 Back in WW I, code books on ships had lead covers and anyone who had access 
 to them had an obligation to see they were tossed overboard if the ship was 
 about to fall into enemy hands.  Attackers tried very hard to get to the code 
 book before it could be tossed.
 
 Embassies need to be able to communicate at very high levels of security.  
 They are normally considered quite secure, but quiet attacks against them do 
 occur.  (There are some interesting stories of such things in Peter Wright's 
 Spycatcher, which tells the story of his career in MI5.  If you haven't read 
 it - get a copy right now.)  And of course people always look at the seizure 
 of the US embassy in Iran.  I don't know if any crypto equipment was 
 compromised, but it has been reported that the Iranians were able, by dint of 
 a huge amount of manual labor, to piece back together shredded documents.  
 (This led to an upgrade of shredders not just by the State Department but in
 the market at large, which came to demand cross-cut shredders, which cut the
 paper into longitudinal strips, but then cut across the strips to produce
 pieces no more than an inch or so long.  Those probably could be re-assembled
 using computerized techniques - originally developed to re-assemble old
 parchments like the Dead Sea Scrolls.)

Just to close the circle on this:

The Iranians used hundreds of carpet weavers (mostly women) to
reconstruct a good portion of the shredded documents, which they
published (and I think continue to publish), eventually reaching 77
volumes of printed material in a series wonderfully named "Documents
from the U.S. Espionage Den".

They did a remarkably good job, considering:

http://upload.wikimedia.org/wikipedia/commons/6/68/Espionage_den03_14.png

You can see a bunch of the covers via Google Books here:

http://books.google.com/books?q=editions:LCCN84193484

You could peruse the entire collection in a private (but not secret)
library of which I was once a member (outside the United States, of
course), and I seem to remember that a London library had a good number
of the books too, despite the fact that the material was still
classified at the time (and I think still is?).

Perhaps it would be amusing to write to the old publisher and see if one
can still order the entire set:

Center for the Publication of the U.S. Espionage Den's Documents
P.O. Box 15815-3489
Teheran
Islamic Republic of Iran

Then again, you might find yourself unable to get on international
flights for a time after such a request, who knows.

On your speculation about crosscut shredding, you're right on the money.

DARPA ran a de-shredding challenge in 2011.  A team from San Fran
("All Your Shreds Are Belong To U.S.") won by substantially
reconstructing 5 of 7 puzzles.  DARPA has since yanked the content
there (or it has merely succumbed to bitrot/linkrot), but I recall it
being impressive.  The amount reconstructed from very high-security
cross-shred was eye-opening.

Ah, found a mirror (on a site selling shredding services, of course):

http://www.datastorageinc.com/blog/?Tag=shredding

Lesson 1:  Don't use line-ruled paper.  Ever.

Lesson 2: Burn or pulp after you shred.

One imagines that substantial progress on the problem has been made
since the contest.

Ah, I see in writing this that there's a Wikipedia article on it too:

http://en.wikipedia.org/wiki/DARPA_Shredder_Challenge_2011

Which, in turn, lists the DARPA archive:

http://archive.darpa.mil/shredderchallenge/

As you might imagine, the events of 1979 caused quite a stir when it
came to the security of Department of State facilities.  What might
surprise you, however, is that most of this work went into improving
time-to-destruction of classified material, and the means to buy that
time (read: Marines) for duty officers (read: intelligence officers),
not into actually improving security for diplomatic staff.
Those jarheads aren't for you folks; they are for the Classified.

-uni
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Sha3

2013-09-30 Thread John Kelsey
If you want to understand what's going on wrt SHA3, you might want to look
at the NIST website, where we have all the slide presentations we have been
giving over the last six months detailing our plans.  There is a lively
discussion going on at the hash forum on the topic.

This doesn't make as good a story as the new SHA3 being some hell-spawn
cooked up in a basement at Fort Meade, but it does have the advantage that
it has some connection to reality.

You might also want to look at what the Keccak designers said about what the 
capacities should be, to us (they put their slides up) and later to various 
crypto conferences.  

Or not.  

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] check-summed keys in secret ciphers?

2013-09-30 Thread Bill Frantz

On 9/30/13 at 2:07 PM, leich...@lrw.com (Jerry Leichter) wrote:

People used to wonder why NSA asked that DES keys be 
checksummed - the original IBM Lucifer algorithm used a full 
64-bit key, while DES required parity bits on each byte.  On 
the one hand, this decreased the key size from 64 to 56 bits; 
on the other, it turns out that under differential crypto 
attack, DES only provides about 56 bits of security anyway.  
NSA, based on what we saw in the Clipper chip, seems to like 
running crypto algorithms tight:  Just as much effective 
security as the key size implies, exactly enough rounds to 
attain it, etc.  So *maybe* that was why they asked for 56-bit 
keys.  Or maybe they wanted to make brute force attacks easier 
for themselves.


The effect of NSA's work with Lucifer to produce DES was:

  DES was protected against differential cryptanalysis, without
making this attack public.

  The key was shortened from 64 bits to 56 bits by adding parity bits.

I think the security side of NSA won here.  It is relatively easy
to judge how much work a brute force attack will take.  It is
harder to analyze the effect of an unknown attack mode.  DES
users could make an informed judgment based on $$$, Moore's law,
and the speed of DES.


Cheers - Bill

---
Bill Frantz        | Privacy is dead, get over | Periwinkle
(408)356-8506      | it.  - Scott McNealy      | 16345 Englewood Ave
www.pwpconsult.com |                           | Los Gatos, CA 95032


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] NIST about to weaken SHA3?

2013-09-30 Thread Watson Ladd
On Mon, Sep 30, 2013 at 2:21 PM, James A. Donald jam...@echeque.com wrote:

 On 2013-10-01 00:44, Viktor Dukhovni wrote:

 Should one also accuse ESTREAM of maliciously weakening SALSA?  Or
 might one admit the possibility that winning designs in contests
 are at times quite conservative and that one can reasonably
 standardize less conservative parameters that are more competitive
 in software?


 less conservative means weaker.

 Weaker in ways that the NSA has examined, and the people that chose the
 winning design have not.

This isn't true: Keccak's designers proposed a wide range of capacity
parameters for different environments.


 Why then hold a contest and invite outside scrutiny in the first place.?

 This is simply a brand new unexplained secret design emerging from the
 bowels of the NSA, which already gave us a variety of backdoored crypto.

No, it is the Keccak construction with a different rate and capacity.


 The design process, the contest, the public examination, was a lie.

 Therefore, the design is a lie.

I'm sorry, but the tradeoffs in capacity and their implications were part
of the Keccak submission from the beginning. During the entire process
commentators were questioning the difference between collision security and
preimage security, as it was clear that collisions kill a hash as dead as
preimages.  This was a topic of debate on the SHA-3 list between DJB and
others, because DJB designed CubeHash to have the same tradeoff as the
design NIST is proposing to standardize.




 ___
 The cryptography mailing list
 cryptography@metzdowd.com
 http://www.metzdowd.com/mailman/listinfo/cryptography


Sincerely,
Watson
-- 
Those who would give up Essential Liberty to purchase a little Temporary
Safety deserve neither  Liberty nor Safety.
-- Benjamin Franklin
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] RSA equivalent key length/strength

2013-09-30 Thread James A. Donald

On 2013-10-01 08:24, John Kelsey wrote:

Maybe you should check your code first?  A couple of NIST people verified
that the curves were generated by the described process when the questions
about the curves first came out.


And a non NIST person verified that the curves were /not/ generated by 
the described process after the scandal broke.


The process that actually generated the curves looks like the end result
of trying a trillion curves until you hit one that has desirable
properties - desirable properties which you are disinclined to tell
anyone else about.



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] RSA equivalent key length/strength

2013-09-30 Thread James A. Donald

On 2013-10-01 08:35, John Kelsey wrote:

Having read the mail you linked to, it doesn't say the curves weren't generated 
according to the claimed procedure.  Instead, it repeats Dan Bernstein's 
comment that the seed looks random, and that this would have allowed NSA to 
generate lots of curves till they found a bad one.


The claimed procedure would have prevented the NSA from generating lots 
of curves till they found a bad one - one with weaknesses that the NSA 
knows how to detect, but which other people do not yet know how to detect.


That was the whole point of the claimed procedure.

As with SHA3, the NSA/NIST is deviating from its supposed procedures in 
ways that remove the security properties of those procedures.



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] encoding formats should not be committee'ized

2013-09-30 Thread James A. Donald

On 2013-10-01 04:22, Salz, Rich wrote:

designate some big player to do it, and follow suit?
Okay that data encoding scheme from Google protobufs or Facebook thrift.  Done.


We have a compiler to generate C code from ASN.1 code.

Google has a compiler to generate C code from protobufs source

The ASN.1 compiler is open source.  Google's compiler is not.

Further, Google is unhappy that too-clever code gives too-clever
programmers too much power, and has prohibited its employees from ever
doing something like protobufs again.



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography