Re: [Cryptography] P=NP on TV

2013-10-07 Thread David Johnston

On 10/6/2013 12:17 PM, Salz, Rich wrote:


Last week, the American TV show Elementary (a whodunit) was 
about the murder of two mathematicians who were working on a proof of 
P=NP. The implications for crypto, including being able to "crack into 
servers", were covered. It was mostly accurate, up until the deus ex 
machina of the NSA hiding all the loose ends at the last 
minute. :-)  Fun and available at http://www.cbs.com/shows/elementary/video/




That gets to the heart of why I think P != NP.

We are led to believe that if it is shown that P = NP, we suddenly have 
a break for all sorts of algorithms.
So if P really does equal NP, we can just assume P = NP and the breaks will 
make themselves evident. They do not. Hence P != NP.


Where's my Fields Medal?

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Universal security measures for crypto primitives

2013-10-07 Thread grarpamp
On Oct 7, 2013, at 1:43 AM, Peter Gutmann  wrote:
> Given the recent debate about security levels for different key sizes, the
> following paper by Lenstra, Kleinjung, and Thome may be of interest:
>
>  "Universal security from bits and mips to pools, lakes and beyond"
>  http://eprint.iacr.org/2013/635.pdf

On Mon, Oct 7, 2013 at 10:46 AM, Jerry Leichter  wrote:
> Then:  "...fundamental limits will let you make about 3*10^94 ~ 2^315 [bit] 
> flips
> and store about 2^315 bits

Then perhaps by the time that engine gets near 256 bits of crunching on you,
any given secret holder will be either dead, too old / pardonable, or society
will have moved on, thereby placing the secret into one of historical value only. It
would probably also cost about 2^315 bits to build and operate. Not many
100yr secrets out there besides grand conspiracies and whodunits, and those
don't really need crypto. Might as well bump everything to 512 just to
be safe from
physics ;)


Re: [Cryptography] Elliptic curve question

2013-10-07 Thread Dominik Schürmann
On 07.10.2013 10:54, Lay András wrote:

> I made a simple elliptic curve utility in command line PHP:
> 
> https://github.com/LaySoft/ecc_phgp
> 
> I know that in RSA, signing is the inverse operation of encryption, so two
> different keypairs are needed for encryption and signing. In elliptic curve
> cryptography, signing is not the inverse of encryption, so my
> application uses the same keypair for both encryption and signing.
> 
> Is this correct?

Without looking at your specific implementation: I had a similar
question, but regarding ECIES combined with ECDSA. See
http://lists.randombit.net/pipermail/cryptography/2013-September/005353.html
for the answers.

Regards
Dominik




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Phillip Hallam-Baker
On Sun, Oct 6, 2013 at 11:26 AM, John Kelsey  wrote:

> If we can't select ciphersuites that we are sure we will always be
> comfortable with (for at least some forseeable lifetime) then we urgently
> need the ability to *stop* using them at some point.  The examples of MD5
> and RC4 make that pretty clear.
>
> Ceasing to use one particular encryption algorithm in something like
> SSL/TLS should be the easiest case--we don't have to worry about old
> signatures/certificates using the outdated algorithm or anything.  And yet
> we can't reliably do even that.
>

I proposed a mechanism for that a long time back based on Rivest's notion
of a suicide note in SDSI.


The idea was that some group of cryptographers get together and create some
random numbers which they then keyshare amongst themselves so that there
are (say) 11 shares and a quorum of 5.

Let the key be k. If the algorithm being witnessed is AES, then the value
AES(k) is published as the 'witness value' for AES.

A device that ever sees the witness value for AES presented knows to stop
using it. It is in effect a 'suicide note' for AES.


Similar witness functions can be specified easily enough for hashes etc. We
already have the RSA factoring competition for RSA public key. In fact I
suggested to Burt Kaliski that they expand the program.

The cryptographic basis here is that there are only two cases where the
witness value will be released, either there is an expert consensus to stop
using AES (or whatever) or someone breaks AES.
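A minimal sketch of how such a witness value might work. Here HMAC-SHA-256 stands in for the keyed witness computation "AES(k)", and I'm reading "sees the witness value presented" as "sees the secret k released and verifies it against the published witness value"; all names and the verification flow are illustrative assumptions, not Phill's exact construction:

```python
import hmac
import hashlib

def witness_value(k: bytes, algorithm: str) -> bytes:
    # Stand-in for "AES(k)": any one-way keyed function of k will do,
    # since anyone can recompute it from a released k and compare.
    return hmac.new(k, algorithm.encode(), hashlib.sha256).digest()

class Device:
    def __init__(self, published_witnesses: dict):
        # Public witness values are baked in at build time.
        self.witnesses = published_witnesses
        self.retired = set()

    def observe_release(self, algorithm: str, released_key: bytes) -> None:
        # The 'suicide note': a quorum reconstructs and releases k, and
        # the device verifies it against the published witness value.
        expected = self.witnesses.get(algorithm, b"")
        if hmac.compare_digest(witness_value(released_key, algorithm),
                               expected):
            self.retired.add(algorithm)

    def may_use(self, algorithm: str) -> bool:
        return algorithm not in self.retired
```

The key k itself would of course be split with a threshold secret-sharing scheme (11 shares, quorum of 5, as above) rather than held by any one party.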

The main downside is that there are many applications where you can't
tolerate fail-open. For example in the electricity and power system it is
more important to keep the system going than to preserve confidentiality.
An authenticity attack on the other hand might be cause...

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Sha3

2013-10-07 Thread Jerry Leichter
On Oct 7, 2013, at 6:04 PM, "Philipp Gühring"  wrote:
>> it makes no sense for a hash function:  If the attacker can specify
>> something about the input, he ... knows something about the input!  
> Yes, but since it's standardized, it's public knowledge, and just knowing
> the padding does not give you any other knowledge about the rest.
You're assuming what the argument is claiming to prove.

> What might be though is that Keccak could have some hidden internal
> backdoor function (which I think is very unlikely given what I read about
> it until now, I am just speaking hypothetically) that reduces the
> effective output size, if and only if the input has certain bits at the end.
Well, sure, such a thing *might* exist, though there's no (publicly) known 
technique for embedding such a thing in the kind of combinatorial mixing 
permutation that's at the base of Keccak and pretty much every hash function 
and block encryption function since DES - though the basic idea goes back to 
Shannon in the 1940's.

I will say that the Keccak analysis shows both the strength and the weakness of 
the current (public) state of the art.  Before differential cryptanalysis, 
pretty much everything in this area was guesswork.  In the last 30-40 years 
(depending on whether you want to start with IBM's unpublished knowledge of the 
technique going back, according to Coppersmith, to 1974, or from Biham and 
Shamir's rediscovery and publication in the late 1980's), the basic idea has 
been expanded to a variety of related attacks, with very sophisticated modeling 
of exactly what you can expect to get from attacks under different 
circumstances.  The Keccak analysis goes through a whole bunch of these.  They 
make a pretty convincing argument that (a) no known attack can get anything 
much out of Keccak; (b) it's unlikely that there's an attack along the same 
general lines as currently known attacks that will work against it either.

The problem - and it's an open problem for the whole field - is that none of 
this gets at the question of whether there is some completely different kind of 
attack that would slice right through Keccak or AES or any particular 
algorithm, or any particular class of algorithms.  If you compare the situation 
to that in asymmetric crypto, our asymmetric algorithms are based on clean, 
simple mathematical structures about which we can prove a great deal, but that 
have buried within them particular problems that we believe, on fairly strong 
if hardly completely dispositive evidence, are hard.  For symmetric algorithms, 
we pretty much *rely* on the lack of any simple mathematical structure - which, 
in a Kolmogorov-complexity-style argument, just means there appear to be no 
short descriptions in tractable terms of what these transformations do.  For 
example, if you write the transformations down as Boolean formulas in CNF or 
DNF, the results are extremely large, with irregular, highly inter-twined 
terms.  Without that, various Boolean solvers would quickly cut them to ribbons.

In some sense, DC and related techniques say "OK, the complexity of the 
function itself is high, but if I look at the differentials, I can find some 
patterns that are simple enough to work with."

If there's an attack, it's likely to be based on something other than Boolean 
formulas written out in any form we currently work with, or anything based on 
differentials.  It's likely to come out of a representation entirely different 
from anything anyone has thought of.  You'd need that to do key recovery; you'd 
also need it to embed a back door (like a sensitivity to certain input 
patterns).  The fact that no one has found such a thing (publicly, at least) 
doesn't mean it can't exist; we just don't know what we don't know.  Surprising 
results like this have appeared before; in a sense, all of mathematics is about 
finding simple, tractable representations that turn impossible problems into 
soluble ones.
-- Jerry



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Jerry Leichter
On Oct 7, 2013, at 12:45 PM, Ray Dillinger  wrote:
> Can we do anything ...[to make it possible to remove old algorithms]? If the 
> protocol allows correction (particularly remote or automated correction) of 
> an entity using a weak crypto primitive, that opens up a whole new set of 
> attacks on strong primitives.
> 
> We'd like the answer to be that people will decline to communicate with you 
> if you use a weak system,  but honestly when was the last time you had that 
> degree of choice in from whom you get exactly the content and services you 
> need?
> 
> Can we even make renegotiating the cipher suite inconveniently long or heavy 
> so defaulting weak becomes progressively more costly as more people default 
> strong? That opens up denial of service attacks, and besides it makes it 
> painful to be the first to default strong.
> 
> Can a check for a revoked signature for the cipher's security help? That 
> makes the CA into a point of control.
> 
> Anybody got a practical idea?
I don't see how there can be any solution to this.  Slow renegotiation doesn't 
affect users until it gets to the point where they feel that something is 
broken; at that point, the result to them is indistinguishable from just 
refusing connections with the old suites.  And of course what's broken is never 
*their* software, it's the other guy's - and given the alternative, they'll go 
to someone who isn't as insistent that their potential customers "do it the 
right way".  So you'll just set off a race to the bottom.

Revoking signatures ... well, just how effective are "bad signature" warnings 
today?  People learn - in fact, are often *taught* - to click through them.  If 
software refuses to let them do that, they'll look for other software.

Ultimately, I think you have to look at this as an economic issue.  The only 
reason to change your software is if the cost of changing is lower than the 
estimated future cost of *not* changing.  Most users (rightly) estimate that 
the chance of them losing much is very low.  You can change that estimate by 
imposing a cost on them, but in a world of competitive suppliers (and consumer 
protection laws) that's usually not practical.

It's actually interesting to consider the single counter-example out there: 
the iOS world (and to a slightly lesser degree, the OS X world).  Apple doesn't 
force iOS users to upgrade their existing hardware (and sometimes it's 
"obsolete" and isn't software-upgradeable) but in fact iOS users upgrade very 
quickly.  (iOS 7 exceeded 50% of installations within 7 days - a faster ramp 
than iOS 6.  Based on past patterns, iOS 7 will be in the high 90's in a fairly 
short time.)  No other software comes anywhere close to that.  Moving from iOS 
6 to iOS 7 is immensely more disruptive than moving to a new browser version 
(say) that drops support for a vulnerable encryption algorithm.  And yet huge 
numbers of people do it.  Clearly it's because of the new things in iOS 7 - and 
yet Microsoft still has a huge population of users on XP.

I think the real take-away here is that getting upgrades into the field is a 
technical problem only at the margins.  It has to do with people's attitudes in 
subtle ways that Apple has captured and others have not.  (Unanswerable 
question:  If the handset makers and the Telco vendors didn't make it so hard - 
often impossible - to upgrade, what would the market penetration numbers for 
different Android versions look like?)

-- Jerry




Re: [Cryptography] Elliptic curve question

2013-10-07 Thread Phillip Hallam-Baker
On Mon, Oct 7, 2013 at 4:54 AM, Lay András  wrote:

> Hi!
>
> I made a simple elliptic curve utility in command line PHP:
>
> https://github.com/LaySoft/ecc_phgp
>
> I know that in RSA, signing is the inverse operation of encryption, so two
> different keypairs are needed for encryption and signing. In elliptic curve
> cryptography, signing is not the inverse of encryption, so my
> application uses the same keypair for both encryption and signing.
>
> Is this correct?
>

Are you planning to publish your signing key or your decryption key?

Using a key for one purpose makes it incompatible with the other.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-07 Thread Nico Williams
On Mon, Oct 07, 2013 at 11:45:56AM -0400, Arnold Reinhold wrote:
> If we are going to always use a construction like AES(KDF(key)), as
> Nico suggests, why not go further and use a KDF with variable length
> output like Keccak to replace the AES key schedule? And instead of

Note, btw, that Keccak is very much like a KDF, and KDFs generally
produce variable length output.  In fact, the HKDF construction
[RFC5869] is rather similar to the sponge concept underlying Keccak.  It
was the use of SHA-256 as a KDF [but not in an HKDF-like construction]
that I was objecting to.

> making provisions to drop in a different cipher should a weakness be
> discovered in AES,  make the number of AES (and maybe KDF) rounds a
> negotiated parameter.  Given that x86 and ARM now have AES round
> instructions, other cipher algorithms are unlikely to catch up in
> performance in the foreseeable future, even with a higher AES round
> count. Increasing round count is effortless compared to deploying a
> new cipher algorithm, even if provision is made in the protocol. Dropping
> such provisions (at least in new designs) simplifies everything and
> simplicity is good for security.

As Jerry Leichter said, that's a really nice idea.  My IANAC concern
would be that there might be greatly diminished returns past some number
of rounds relative to the sorts of future attacks that might
drastically weaken AES.  There are also issues with cipher modes to
worry about, so that on the whole I would still like to have algorithm
agility (though I don't think you were arguing against it!); but the
addition of a cipher strength knob might well be useful.

You're quite right that with CPU support for AES it will be very
difficult to justify switching to any other cipher...  There's always
3AES (a form of round count, but a layer up, and with much bigger step
sizes).  I suspect it's not AES we'll have problems with, but everything
else (asymmetric crypto and cipher modes most likely).

Nico


Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-07 Thread Jerry Leichter
On Oct 7, 2013, at 11:45 AM, Arnold Reinhold  wrote:
> If we are going to always use a construction like AES(KDF(key)), as Nico 
> suggests, why not go further and use a KDF with variable length output like 
> Keccak to replace the AES key schedule? And instead of making provisions to 
> drop in a different cipher should a weakness be discovered in AES,  make the 
> number of AES (and maybe KDF) rounds a negotiated parameter.  Given that x86 
> and ARM now have AES round instructions, other cipher algorithms are unlikely 
> to catch up in performance in the foreseeable future, even with a higher AES 
> round count. Increasing round count is effortless compared to deploying a new 
> cipher algorithm, even if provision is made in the protocol. Dropping such 
> provisions (at least in new designs) simplifies everything and simplicity is 
> good for security.
That's a really nice idea.  It has a non-obvious advantage:  Suppose the AES 
round instructions (or the round key computations instructions) have been 
"spiked" to leak information in some non-obvious way - e.g., they cause a power 
glitch that someone with the knowledge of what to look for can use to read off 
some of the key bits.  The round key computation instructions obviously have 
direct access to the actual key, while the round computation instructions have 
access to the round keys, and with the standard round function, given the round 
keys it's possible to determine the actual key.

If, on the other hand, you use a cryptographically secure transformation from 
key to round key, and avoid the built-in round key instructions entirely; and 
you use CTR mode, so that the round computation instructions never see the 
actual data; then AES round computation functions have nothing useful to leak 
(unless they are leaking all their output, which would require a huge data rate 
and would be easily noticed).  This also means that even if the round 
instructions are implemented in software which allows for side-channel attacks 
(i.e., it uses an optimized table instruction against which cache attacks 
work), there's no useful data to *be* leaked.

So this is a mode for safely using possibly rigged hardware.  (Of course there 
are many other ways the hardware could be rigged to work against you.  But with 
their intended use, hardware encryption instructions have a huge target painted 
on them.)

Of course, Keccak itself, in this mode, would have access to the real key.  
However, it would at least for now be implemented in software, and it's 
designed to be implementable without exposing side-channel attacks.
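A sketch of the derived-round-key idea, with SHAKE-256 standing in for the "cryptographically secure transformation" from key to round keys (the domain-separation label, round count, and choice of XOF are my illustrative assumptions; this is not a standardized AES variant):

```python
import hashlib

def derive_round_keys(master_key: bytes, rounds: int = 11,
                      round_key_len: int = 16) -> list:
    # Replace the AES key schedule entirely: expand the master key
    # through an extendable-output function, so the round keys are
    # computationally independent and the hardware round instructions
    # never see anything from which the master key can be recovered.
    xof = hashlib.shake_256(b"aes-round-keys|" + master_key)
    stream = xof.digest(rounds * round_key_len)
    return [stream[i * round_key_len:(i + 1) * round_key_len]
            for i in range(rounds)]
```

The round keys would then be fed to the hardware round instructions directly, with the cipher run in CTR mode as described above so the instructions also never touch the plaintext.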

There are two questions that need to be looked at:

1.  Is AES used with (essentially) random round keys secure?  At what level of 
security?  One would think so, but this needs to be looked at carefully.
2.  Is the performance acceptable?

BTW, some of the other SHA-3 proposals use the AES round transformation as a 
primitive, so could also potentially be used in generating a secure round key 
schedule.  That might (or might not) put security-critical information back 
into the hardware instructions.

If Keccak becomes the standard, we can expect to see a hardware Keccak-f 
implementation (the inner transformation that is the basis of each Keccak 
round) at some point.  Could that be used in a way that doesn't give it the 
ability to leak critical information?
-- Jerry



Re: [Cryptography] P=NP on TV

2013-10-07 Thread Lodewijk andré de la porte
So their research was stolen and they were assassinated by the NSA? Makes
sense. (Except for the NSA's lack of field agents! CIA involvement is
required)

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Ray Dillinger


 Original message 
From: Jerry Leichter  
Date: 10/06/2013  15:35  (GMT-08:00) 
To: John Kelsey  
Cc: "cryptography@metzdowd.com List" ,Christoph 
Anton Mitterer ,james hughes 
,Dirk-Willem van Gulik  
Subject: Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was:
NIST about to weaken SHA3? 
 
On Oct 5, 2013, at 9:29 PM, John Kelsey wrote:
  Really, you are talking more about the ability to *remove* algorithms.  We 
still have stuff using MD5 and RC4 (and we'll probably have stuff using dual ec 
drbg years from now) because while our standards have lots of options and it's 
usually easy to add new ones, it's very hard to take any away.

Can we do anything about that? If the protocol allows correction (particularly 
remote or automated correction) of an entity using a weak crypto primitive, 
that opens up a whole new set of attacks on strong primitives.

We'd like the answer to be that people will decline to communicate with you if 
you use a weak system,  but honestly when was the last time you had that degree 
of choice in from whom you get exactly the content and services you need?

Can we even make renegotiating the cipher suite inconveniently long or heavy so 
defaulting weak becomes progressively more costly as more people default 
strong? That opens up denial of service attacks, and besides it makes it 
painful to be the first to default strong.

Can a check for a revoked signature for the cipher's security help? That makes 
the CA into a point of control.

Anybody got a practical idea?


[Cryptography] Iran and murder

2013-10-07 Thread John Kelsey
Alongside Phillip's comments, I'll just point out that assassination of key 
people is a tactic that the US and Israel probably don't have any particular 
advantages in.  It isn't in our interests to encourage a worldwide tacit 
acceptance of that stuff.  

I suspect a lot of the broad principles we have been pushing (assassinations 
and drone bombings can be done anywhere, cyber attacks against foreign 
countries are okay when you're not at war, spying on everyone everywhere is 
perfectly acceptable policy) are in the short-term interests of various 
powerful people and factions in the US, but are absolutely horrible ideas when 
you consider the long-term interests of the US.  We are a big, rich, relatively 
free country with lots of government scientists and engineers (especially when 
you consider funding) and tons of our economy and our society moving online.  
We are more vulnerable to widespread acceptance of these bad principles than 
almost anyone, ultimately.  But doing all these things has won larger budgets 
and temporary successes for specific people and agencies today, whereas the 
costs of all this will land on us all in the future.  

--John




[Cryptography] Politics - probably off topic here.

2013-10-07 Thread Ray Dillinger


 Original message 
From: Phillip Hallam-Baker  
Date: 10/06/2013  08:18  (GMT-08:00) 
To: "James A. Donald"  
Cc: cryptography@metzdowd.com 
Subject: Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: 
NIST about to weaken SHA3? 
 
Phillip Hallam-Baker wrote:

But when I see politicians passing laws to stop people voting, judges deciding 
that the votes in a Presidential election cannot be counted and all the other 
right wing antics taking place in the US at the moment, the risk of a right 
wing fascist coup has to be taken seriously. 

Well, yes.  That is my main concern as well.  The recent tactics of the 
Republican party are more an attack on the process of constitutional government 
than ordinary tactics of the sort that make sense within the process.  This 
sort of issue used to get solved with simple and relatively harmless horse 
trading and pork barrel deals where the minority members would "sell" their 
votes for something to bring back to their constituents. Shutting down the 
whole system is mad - and horribly damaging - compared to just taking the 
chance to bring home some pork.

And this concerns me more than it otherwise might because of the recent 
economic trouble we've been having.  When economies go bad is when fascist 
parties tend to come to power. Or, more to the point in our case, when existing 
parties veer further in a fascist direction.  I'm seeing the Golden Dawn party 
in Greece gaining popularity that it could never have gotten when its economy 
was in better shape.  I see the recent elections in a few euro zone nations 
giving seats to far-right parties, and I just can't help starting to worry.

Most of the history I'm aware of regarding genocides and the emergence of 
dictatorships is that everywhere from Rwanda to Germany to Haiti,  they have 
always followed close on the heels of a particularly severe and long sustained 
economic crisis. 

I don't want the country I live in to become another example. And I don't want 
the world I live on to suffer through more examples. 



Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-07 Thread Arnold Reinhold
If we are going to always use a construction like AES(KDF(key)), as Nico 
suggests, why not go further and use a KDF with variable length output like 
Keccak to replace the AES key schedule? And instead of making provisions to 
drop in a different cipher should a weakness be discovered in AES,  make the 
number of AES (and maybe KDF) rounds a negotiated parameter.  Given that x86 
and ARM now have AES round instructions, other cipher algorithms are unlikely 
to catch up in performance in the foreseeable future, even with a higher AES 
round count. Increasing round count is effortless compared to deploying a new 
cipher algorithm, even if provision is made in the protocol. Dropping such 
provisions (at least in new designs) simplifies everything and simplicity is 
good for security.
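The negotiated-round-count idea could be realized with a trivially small handshake rule; a sketch (the baseline, ceiling, and policy are my illustrative assumptions, not from the post):

```python
# Standard AES-256 uses 14 rounds; treat that as the floor.
AES256_BASELINE_ROUNDS = 14

def negotiate_rounds(client_min: int, server_min: int,
                     ceiling: int = 64) -> int:
    # Never go below the standard round count, always honor the more
    # paranoid peer, and cap the result to bound the cost of a
    # resource-exhaustion attack via an absurd round count.
    rounds = max(AES256_BASELINE_ROUNDS, client_min, server_min)
    if rounds > ceiling:
        raise ValueError("peer requested an unreasonable round count")
    return rounds
```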

Arnold Reinhold


On Sat, 5 Oct 2013 19:37, Nico Williams  wrote:
> On Fri, Oct 4, 2013 at 11:20 AM, Ray Dillinger  wrote:
>> So, it seems that instead of AES256(key) the cipher in practice should be
>> AES256(SHA256(key)).
> 
> More like: use a KDF and separate keys (obtained by applying a KDF to
> a root key) for separate but related purposes.
> 
> For example, if you have a full-duplex pipe with a single pre-shared
> secret key then: a) you should want separate keys for each direction
> (so you don't need a direction flag in the messages to deal with
> reflection attacks), b) you should derive a new set of keys for each
> "connection" if there are multiple connections between the same two
> peers.  And if you're using an AEAD-by-generic-composition cipher mode
> then you'll want separate keys for data authentication vs. data
> encryption.
> 
> The KDF might well be SHA256, but doesn't have to be.  Depending on
> characteristics of the original key you might need a more complex KDF
> (e.g., a PBKDF if the original is a human-memorable password).  This
> (and various other details about accepted KDF technology that I'm
> eliding) is the reason that you should want to think of a KDF rather
> than a hash function.
> 
> Suppose some day you want to switch to a cipher with a different key
> size.  If all you have to do is tell the KDF how large the key is,
> then it's easy, but if you have to change the KDF along with the
> cipher then you have more work to do, work that might or might not be
> easy.  Being able to treat the protocol elements as modular has
> significant advantages -and some pitfalls- over more monolithic
> constructions.
> 
> Nico
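Nico's advice about deriving separate per-direction and per-purpose keys can be sketched with an HKDF-style extract-and-expand construction (RFC 5869) built from the standard library's HMAC-SHA-256; the salt, labels, and key sizes here are illustrative assumptions:

```python
import hmac
import hashlib

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """RFC 5869 HKDF-Expand using HMAC-SHA-256."""
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def derive_connection_keys(shared_secret: bytes, connection_id: bytes) -> dict:
    # Extract step: concentrate the entropy of the pre-shared secret.
    prk = hmac.new(b"example-salt", shared_secret, hashlib.sha256).digest()
    # Separate keys per direction, and separate encryption/MAC keys for
    # generic AEAD composition (labels are hypothetical).
    return {
        label: hkdf_expand(prk, connection_id + label.encode(), 32)
        for label in ("c2s-enc", "c2s-mac", "s2c-enc", "s2c-mac")
    }
```

Binding the connection identifier into the `info` input gives the fresh per-connection keys Nico describes, without touching the root secret again.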

Re: [Cryptography] Universal security measures for crypto primitives

2013-10-07 Thread Jerry Leichter
On Oct 7, 2013, at 1:43 AM, Peter Gutmann  wrote:
> Given the recent debate about security levels for different key sizes, the
> following paper by Lenstra, Kleinjung, and Thome may be of interest:
> 
>  "Universal security from bits and mips to pools, lakes and beyond"
>  http://eprint.iacr.org/2013/635.pdf  
> 
> From now on I think anyone who wants to argue about resistance to NSA attack
> should be required to rate their pet scheme in terms of
> neerslagverdampingsenergiebehoeftezekerheid (although I'm tempted to suggest
> the alternative tausendliterbierverdampfungssicherheit, it'd be too easy to
> cheat on that one).

While the paper is a nicely written joke, it does get at a fundamental point:  
We are rapidly approaching *physical* limits on cryptographically-relevant 
computations.

I've mentioned here in the past that I did a very rough, back-of-the-envelope 
estimate of the ultimate limits on computation imposed by quantum mechanics.  I 
decided to ask a friend who actually knows the physics whether a better 
estimate was possible.  I'm still working to understand what he described, but 
here's the crux:  Suppose you want an answer to your computation within 100 
years.  Then your computations must fall in a sphere of space-time that has 
spatial radius 100 light years and time radius 100 years.  (This is a gross 
overestimate, but we're looking for an ultimate bound so why not keep the 
computation simple.)  Then:  "...fundamental limits will let you make about 
3*10^94 ~ 2^315 [bit] flips and store about 2^315 bits, in your century / 
light-century sphere."  Note that this gives you both a limit on computation 
(bit flips) and a limit on memory (total bits), so time/memory tradeoffs are 
accounted for.
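The quoted figure is easy to sanity-check; a one-liner (illustrative):

```python
import math

# Check the arithmetic of the quoted bound: 3*10^94 bit flips is
# within about a power of two of the stated 2^315.
bits = math.log2(3e94)
print(f"3*10^94 = 2^{bits:.1f}")
```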

This is based on the best current understanding we have of QM.  Granted, things 
can always change - but any theory that works even vaguely like the way QM 
works will impose *some* such limit.
-- Jerry



Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-07 Thread Faré
On Sun, Oct 6, 2013 at 9:10 PM, Phillip Hallam-Baker  wrote:
> I am even
> starting to think that maybe we should start using the NSA checksum
> approach.
>
> Incidentally, that checksum could be explained simply by padding prepping an
> EC encrypted session key. PKCS#1 has similar stuff to ensure that there is
> no known plaintext in there. Using the encryption algorithm instead of the
> OAEP hash function makes much better sense.
>
Wait, am I misunderstanding, or is the NSA recommending that people
"checksum" by leaving behind the key encrypted with a backdoor the NSA
and the NSA only can read? Wow.

—♯ƒ • François-René ÐVB Rideau •Reflection&Cybernethics• http://fare.tunes.org
Few facts are more revealing than the direction people travel
when they vote with their feet. — Don Boudreaux http://bit.ly/afZgx2

Re: [Cryptography] Sha3

2013-10-07 Thread Peter Fairbrother

On 05/10/13 00:09, Dan Kaminsky wrote:

Because not being fast enough means you don't ship.  You don't ship, you
didn't secure anything.

Performance will in fact trump security.  This is the empirical reality.
  There's some budget for performance loss. But we have lots and lots of
slow functions. Fast is the game.


That may once have been mostly true, but no longer - now it's mostly false.

In almost every case nowadays the speed at which a device computes a 
SHA-3 hash doesn't matter at all. Devices are either way fast enough, or 
they can't use SHA-3 at all, whether or not it is made 50% faster.




(Now, whether my theory that we stuck with MD5 over SHA1 because
variable field lengths are harder to parse in C -- that's an open
question to say the least.)


:)

-- Peter Fairbrother


Re: [Cryptography] Sha3

2013-10-07 Thread Peter Fairbrother

On 05/10/13 20:00, John Kelsey wrote:

http://keccak.noekeon.org/yes_this_is_keccak.html



Seems the Keccak people take the position that Keccak is actually a way 
of creating hash functions, rather than a specific hash function - the 
created functions may be ridiculously strong, or far too weak.


It also seems NIST think a competition is a way of creating a hash 
function - rather than a way of competitively choosing one.



I didn't follow the competition, but I don't actually see anybody being 
right here. NIST is probably just being incompetent, not malicious, but 
their detractors have a point too.


The problem is that the competition was, or should have been, for a 
single [1] hash function, not for a way of creating hash functions - and 
in my opinion only a single actual hash function based on Keccak should 
have been allowed to enter.


I think that's what actually happened, and an actual function was 
entered. The Keccak people changed it a little between rounds, as is 
allowed, but by the final round the entries should all have been fixed 
in stone.


With that in mind, there is no way the hash which won the competition 
should be changed by NIST.


If NIST do start changing things - whatever the motive  - the benefits 
of openness and fairness of the competition are lost, as is the analysis 
done on the entries.


If NIST do start changing things, then nobody can say "SHA-3 was chosen 
by an open and fair competition".


And if that didn't happen, if a specific and well-defined hash was not 
entered, the competition was not open in the first place.




Now in the new SHA-4 competition TBA soon, an actual specific hash 
function based on Keccak may well be the winner - but then what is 
adopted will be what was actually entered.


The work done (for free!) by analysts during the competition will not be 
wasted on a changed specification.




[1] it should have been for a _single_ hash function, not two or 3 
functions with different parameters. I know the two-security-level model 
is popular with NSA and the like, probably for historical "export" 
reasons, but it really doesn't make any sense for the consumer.


It is possible to make cryptography which we think is resistant to all 
possible/likely attacks. That is what the consumer wants and needs. One 
cryptography which he can trust in, resistant against both his baby 
sister and the NSA.


We can do that. In most cases that sort of cryptography doesn't take 
even measurable resources.



The sole and minimal benefit of having two functions (from a single 
family) - cheaper computation for low power devices, there are no other 
real benefits - is lost in the roar of the costs.


There is a case for having two or more systems - monocultures are 
brittle against failures, and, as with the Irish Potato Famine, a single 
failure can be catastrophic - but two systems in the same family do not 
give the best protection against that.


The disadvantages of having two or more hash functions? For a start, 
people don't know what they are getting. They don't know how secure it 
will be - are you going to tell users whether they are using HASH_lite 
rather than HASH_strong every time? And expect them to understand that?


Second, most devices have to have different software for each function - 
and they have to be able to accept data and operations for more than one 
function as well, which opens up potential security holes.


I could go on, but I hope you get the point already.

-- Peter Fairbrother


[Cryptography] P=NP on TV

2013-10-07 Thread Salz, Rich
Last week, the American TV show Elementary (a TV whodunit) was about the 
murder of two mathematicians who were working on a proof of P=NP. The 
implications for crypto, and being able to "crack into servers", were covered. 
It was mostly accurate, up until the deus ex machina of the NSA hiding all 
the loose ends at the last minute.  :)  Fun and available at 
http://www.cbs.com/shows/elementary/video/


--
Principal Security Engineer
Akamai Technology
Cambridge, MA



Re: [Cryptography] Sha3

2013-10-07 Thread Jerry Leichter
On Oct 6, 2013, at 11:41 PM, John Kelsey wrote:
> ...They're making this argument by pointing out that you could simply stick 
> the fixed extra padding bits on the end of a message you processed with the 
> original Keccak spec, and you would get the same result as what they are 
> doing.  So if there is any problem introduced by sticking those extra bits at 
> the end of the message before doing the old padding scheme, an attacker could 
> have caused that same problem on the original Keccak by just sticking those 
> extra bits on the end of messages before processing them with Keccak.  
This style of argument makes sense for encryption functions, where it's a 
chosen plaintext attack, since the goal is to determine the key.  But it makes 
no sense for a hash function:  If the attacker can specify something about the 
input, he ... knows something about the input!  You need to argue that he knows 
*no more than that* after looking at the output than he did before.

While both Ben and I are convinced that in fact the suffix can't "affect 
security", the *specific wording* doesn't really give an argument for why.

-- Jerry



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Ray Dillinger
Is it just me, or does the government really have absolutely no one
with any sense of irony?  Nor, increasingly, anyone with a sense of
shame?

I have to ask, because after directly suborning the cyber security
of most of the world including the USA, and destroying the credibility
of just about every agency who could otherwise help maintain it, the
NSA kicked off "National Cyber Security Awareness Month" on the first
of October this year.

http://blog.sfgate.com/hottopics/2013/10/01/as-government-shuts-down-nsa-excitedly-announces-national-cyber-security-awareness-month/

[Slow Clap]  Ten out of ten for audacity, wouldn't you say?

Bear


[Cryptography] Universal security measures for crypto primitives

2013-10-07 Thread Peter Gutmann
Given the recent debate about security levels for different key sizes, the
following paper by Lenstra, Kleinjung, and Thome may be of interest:

  "Universal security from bits and mips to pools, lakes and beyond"
  http://eprint.iacr.org/2013/635.pdf  

From now on I think anyone who wants to argue about resistance to NSA attack
should be required to rate their pet scheme in terms of
neerslagverdampingsenergiebehoeftezekerheid (although I'm tempted to suggest
the alternative tausendliterbierverdampfungssicherheit, it'd be too easy to
cheat on that one).

Peter.


Re: [Cryptography] Sha3

2013-10-07 Thread John Kelsey
On Oct 6, 2013, at 6:29 PM, Jerry Leichter  wrote:

> On Oct 5, 2013, at 6:12 PM, Ben Laurie wrote:
>> I have to take issue with this:
>> 
>> "The security is not reduced by adding these suffixes, as this is only
>> restricting the input space compared to the original Keccak. If there
>> is no security problem on Keccak(M), there is no security problem on
>> Keccak(M|suffix), as the latter is included in the former."
> I also found the argument here unconvincing.  After all, Keccak restricted to 
> the set of strings of the form M|suffix reveals that its input ends with 
> "suffix", which the original Keccak did not.  The problem is with the vague 
> nature of "no security problem".

They are talking about the change to their padding scheme, in which between 2 
and 4 bits of extra padding are added to the padding scheme that was originally 
proposed for SHA3.  A hash function that works by processing r bits at a time 
till the whole message is processed (every hash function I can think of works 
like this) has to have a padding scheme, so that when someone tries to hash 
some message that's not a multiple of r bits long, the message gets padded out 
to a multiple of r bits.  

The only security relevance of the padding scheme is that it has to be 
invertible--given the padded string, there must always be exactly one input 
string that could have led to that padded string.  If it isn't invertible, then 
the padding scheme would introduce collisions.  For example, if your padding 
scheme was "append zeros until you get the message out to a multiple of r 
bits," I could get collisions on your hash function by taking some message that 
was not a multiple of r bits, and appending one or more zeros to it.  Just 
appending a single one bit, followed by as many zeros as are needed to get to a 
multiple of r bits makes a fine padding scheme, so long as the one bit is 
appended to *every* message, even those which start out a multiple of r bits 
long.  
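The invertibility requirement is easy to demonstrate. The sketch below works at byte granularity (a byte-level analogue of the bit-level schemes described above, for illustration only): it contrasts the broken all-zeros scheme with the sound "append a one, then zeros" scheme, showing that the former introduces collisions and the latter does not.

```python
def pad_zeros(msg: bytes, r: int) -> bytes:
    """Broken scheme: append zero bytes up to a multiple of r (not invertible)."""
    return msg + b"\x00" * (-len(msg) % r)

def pad_10star(msg: bytes, r: int) -> bytes:
    """Sound scheme: always append a 0x80 marker (the 'one' bit), then zeros."""
    out = msg + b"\x80"
    return out + b"\x00" * (-len(out) % r)

r = 8
# Zero-padding collides: two distinct messages pad to the same block,
# so any hash built on it inherits the collision.
assert pad_zeros(b"abc", r) == pad_zeros(b"abc\x00", r)
# The 10* scheme keeps them distinct, because the marker is appended to
# *every* message, even those already a multiple of r long.
assert pad_10star(b"abc", r) != pad_10star(b"abc\x00", r)
```

Note that `pad_10star` pads a message that is already a multiple of r out to a whole extra block, exactly as the "appended to *every* message" rule requires.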

The Keccak team proposed adding a few extra bits to their padding, to add 
support for tree hashing and to distinguish different fixed-length hash 
functions that used the same capacity internally.  They really just need to 
argue that they haven't somehow broken the padding so that it is no longer 
invertible.

They're making this argument by pointing out that you could simply stick the 
fixed extra padding bits on the end of a message you processed with the 
original Keccak spec, and you would get the same result as what they are doing. 
 So if there is any problem introduced by sticking those extra bits at the end 
of the message before doing the old padding scheme, an attacker could have 
caused that same problem on the original Keccak by just sticking those extra 
bits on the end of messages before processing them with Keccak.  
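The reduction being described can be illustrated with any fixed hash function. The sketch below uses SHA-256 as a stand-in for "original Keccak" and an arbitrary suffix byte; both are purely illustrative, not the actual SHA-3 parameters.

```python
import hashlib

SUFFIX = b"\x06"  # stand-in for the fixed domain-separation bits (illustrative)

def old_hash(msg: bytes) -> bytes:
    """Stand-in for the 'original' hash (plain SHA-256 here, for illustration)."""
    return hashlib.sha256(msg).digest()

def new_hash(msg: bytes) -> bytes:
    """The 'tweaked' function: append the fixed suffix, then run the old hash."""
    return old_hash(msg + SUFFIX)

# The equivalence the Keccak team appeals to: new_hash(M) is literally
# old_hash(M + SUFFIX), so any attack on new_hash is an attack on old_hash
# restricted to inputs ending in SUFFIX.
m = b"example message"
assert new_hash(m) == old_hash(m + SUFFIX)
```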

>-- Jerry

--John


Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-07 Thread Phillip Hallam-Baker
On Thu, Oct 3, 2013 at 12:21 PM, Jerry Leichter  wrote:

> On Oct 3, 2013, at 10:09 AM, Brian Gladman  wrote:
> >> Leaving aside the question of whether anyone "weakened" it, is it
> >> true that AES-256 provides comparable security to AES-128?
> >
> > I may be wrong about this, but if you are talking about the theoretical
> > strength of AES-256, then I am not aware of any attacks against it that
> > come even remotely close to reducing its effective key length to 128
> > bits.  So my answer would be 'no'.
> There are *related key* attacks against full AES-192 and AES-256 with
> complexity  2^119.  http://eprint.iacr.org/2009/374 reports on improved
> versions of these attacks against *reduced round variants" of AES-256; for
> a 10-round variant of AES-256 (the same number of rounds as AES-128), the
> attacks have complexity 2^45 (under a "strong related sub-key" attack).
>
> None of these attacks gain any advantage when applied to AES-128.
>
> As *practical attacks today*, these are of no interest - related key
> attacks only apply in rather unrealistic scenarios, even a 2^119 strength
> is way beyond any realistic attack, and no one would use a reduced-round
> version of AES-256.
>
> As a *theoretical checkpoint on the strength of AES* ... the abstract says
> the results "raise[s] serious concern about the remaining safety margin
> offered by the AES family of cryptosystems".
>
> The contact author on this paper, BTW, is Adi Shamir.


Shamir said that he would like to see AES detuned for speed and extra
rounds added during the RSA conf cryptographers panel a couple of years
back.

That is the main incentive for using AES-256 over AES-128. Nobody is going to
be breaking AES-128 by brute force, so key size above that is irrelevant, but
you do get the extra rounds.
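The "extra rounds" point is concrete in FIPS-197: the round count grows with key length as Nr = Nk + 6, where Nk is the key size in 32-bit words. A minimal sketch:

```python
def aes_rounds(key_bits: int) -> int:
    """AES round count per FIPS-197: Nr = Nk + 6, Nk = key size in 32-bit words."""
    assert key_bits in (128, 192, 256)
    return key_bits // 32 + 6

# AES-256 buys 4 extra rounds over AES-128 -- the wider security margin
# Shamir asked for -- independent of any brute-force keyspace argument.
assert aes_rounds(128) == 10
assert aes_rounds(192) == 12
assert aes_rounds(256) == 14
```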


Saving symmetric key bits does not really bother me as pretty much any
mechanism I use to derive them is going to give me plenty. I am even
starting to think that maybe we should start using the NSA checksum
approach.

Incidentally, that checksum could be explained simply by padding prepping
an EC encrypted session key. PKCS#1 has similar stuff to ensure that there
is no known plaintext in there. Using the encryption algorithm instead of
the OAEP hash function makes much better sense.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Jerry Leichter
On Oct 5, 2013, at 9:29 PM, John Kelsey wrote:
> One thing that seems clear to me:  When you talk about algorithm flexibility 
> in a protocol or product, most people think you are talking about the ability 
> to add algorithms.  Really, you are talking more about the ability to 
> *remove* algorithms.  We still have stuff using MD5 and RC4 (and we'll 
> probably have stuff using dual ec drbg years from now) because while our 
> standards have lots of options and it's usually easy to add new ones, it's 
> very hard to take any away.  
Q.  How did God create the world in only 6 days?
A.  No installed base.
-- Jerry



Re: [Cryptography] Sha3

2013-10-07 Thread Jerry Leichter
On Oct 5, 2013, at 6:12 PM, Ben Laurie wrote:
> I have to take issue with this:
> 
> "The security is not reduced by adding these suffixes, as this is only
> restricting the input space compared to the original Keccak. If there
> is no security problem on Keccak(M), there is no security problem on
> Keccak(M|suffix), as the latter is included in the former."
I also found the argument here unconvincing.  After all, Keccak restricted to 
the set of strings of the form M|suffix reveals that its input ends with 
"suffix", which the original Keccak did not.  The problem is with the vague 
nature of "no security problem".

To really get at this, I suspect you have to make some statement saying that 
your expectation about the last |suffix| bits of the output is the same before 
and after you see the Keccak output, given your prior expectation about those 
bits.  But of course that's clearly the kind of statement you need *in general*:  
Keccak("Hello world") is some fixed value, and if you see it, your expectation 
that the input was "Hello world" will get close to 1 as you receive more output 
bits!

> In other words, I have to also make an argument about the nature of
> the suffix and how it can't have been chosen s.t. it influences the
> output in a useful way.
If the nature of the suffix and how it's chosen could affect Keccak's output in 
some predictable way, it would not be secure.  Keccak's security is defined in 
terms of indistinguishability from a sponge with the same internal construction 
but a random round function (chosen from some appropriate class).  A random 
function won't show any particular interactions with chosen suffixes, so Keccak 
had better not either.

> I suspect I should agree with the conclusion, but I can't agree with
> the reasoning.
Yes, it would be nice to see this argued more fully.

-- Jerry




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread James A. Donald

On 2013-10-07 01:18, Phillip Hallam-Baker wrote:

We are not at war with Iran.


We are not exactly at peace with Iran either, but that is irrelevant, 
for presumably it was a Jew that did it, and Iran is at war with Jews.

(And they are none too keen on Christians, Bahais, or Zoroastrians either)


I am aware that there are people who would
like to start a war with Iran, the same ones who wanted to start the war
with Iraq which caused a half million deaths but no war crimes trials to
date.


You may not be interested in war, but war is interested in you.   You 
can reasonably argue that we should not get involved in Israel's 
problems, but you should not complain about Israel getting involved in 
Israel's problems.



Iran used to have a democracy


Had a democracy where if you opposed Mohammad Mosaddegh you got murdered 
by Islamists.


Which, of course differs only in degree from our democracy, where (to 
get back to some slight relevance to cryptography) Ladar Levison gets 
put out of business for defending the fourth Amendment, and Pax gets put 
on a government blacklist that requires him to be fired and prohibits 
his business from being funded for tweeting disapproval of affirmative 
action for women in tech.


And similarly, if Hitler's Germany was supposedly not a democracy, why 
then was Roosevelt's America supposedly a democracy?


I oppose democracy because it typically results from, and leads to, 
government efforts to control the thoughts of the people.  There is not 
a large difference between our government requiring Pax to be fired, and 
Mohammad Mosaddegh murdering Haj-Ali Razmara.  Democracy also frequently 
results in large scale population replacement and ethnic cleansing, as 
for example Detroit and the Ivory Coast, as more expensive voters get 
laid off and cheaper voters get imported.


Mohammad Mosaddegh loved democracy because he was successful and 
effective in murdering his opponents, and the Shah was unwilling or 
unable to murder the Shah's opponents.


And our government loves democracy because it can blacklist Pax and 
destroy Levison.


If you want murder and blacklists, population replacement and ethnic 
cleansing, support democracy.  If you don't want murder and blacklists, 
should have supported the Shah.



Re: [Cryptography] Sha3

2013-10-07 Thread Ray Dillinger
On 10/04/2013 07:38 AM, Jerry Leichter wrote:
> On Oct 1, 2013, at 5:34 AM, Ray Dillinger  wrote:
>> What I don't understand here is why the process of selecting a standard 
>> algorithm for cryptographic primitives is so highly focused on speed. 

> If you're going to choose a single standard cryptographic algorithm, you have 
> to consider all the places it will be used.  ...

> It is worth noting that NSA seems to produce suites of algorithms optimized 
> for particular uses and targeted for different levels of security.  Maybe 
> it's time for a similar approach in public standards.

I believe you are right about this.  The problem with AES (etc) really is that
people were trying to find *ONE* cryptographic primitive for use across a very
wide range of clients, many of which it is inappropriate for (too light for
first-class or long-term protection of data, too heavy for transient realtime
signals on embedded low-power chips).

I probably care less than most people about the low-power devices dealing with
transient realtime signals, and more about long-term data protection than most
people.  So, yeah, I'm annoyed that the "standard" algorithm is insufficient to
just *STOMP* the problem and instead requires occasional replacement, when
*STOMP* is well within my CPU capabilities, power budget, and timing
requirements.  But somebody else is probably annoyed that people want them to
support AES when they were barely able to do WEP on their tiny power budget
fast enough to be non-laggy.

These are problems that were never going to have a common solution.

Bear


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread John Kelsey
If we can't select ciphersuites that we are sure we will always be comfortable 
with (for at least some forseeable lifetime) then we urgently need the ability 
to *stop* using them at some point.  The examples of MD5 and RC4 make that 
pretty clear.  

Ceasing to use one particular encryption algorithm in something like SSL/TLS 
should be the easiest case--we don't have to worry about old 
signatures/certificates using the outdated algorithm or anything.  And yet we 
can't reliably do even that.  

--John


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Phillip Hallam-Baker
On Sat, Oct 5, 2013 at 7:36 PM, James A. Donald  wrote:

> On 2013-10-04 23:57, Phillip Hallam-Baker wrote:
>
>> Oh and it seems that someone has murdered the head of the IRG cyber
>> effort. I condemn it without qualification.
>>
>
> I endorse it without qualification.  The IRG are bad guys and need killing
> - all of them, every single one.
>
> War is an honorable profession, and is in our nature.  The lion does no
> wrong to kill the deer, and the warrior does no wrong to fight in a just
> war, for we are still killer apes.
>
> The problem with the NSA and NIST is not that they are doing warlike
> things, but that they are doing warlike things against their own people.
>
>
If people who purport to be on our side go round murdering their people
then they are going to go round murdering people on ours. We already have
Putin's group of thugs murdering folk with Polonium laced teapots, just so
that there can be no doubt as to the identity of the perpetrators.

We are not at war with Iran. I am aware that there are people who would
like to start a war with Iran, the same ones who wanted to start the war
with Iraq which caused a half million deaths but no war crimes trials to
date.

Iran used to have a democracy, remember what happened to it? It was people
like the Dulles brothers, who preferred a convenient dictator to a democratic
government, that overthrew it with the help of a rent-a-mob supplied by one
Ayatollah Khomeini.


I believe that it was the Ultra-class signals intelligence that made the
operation possible and the string of CIA inspired coups that installed
dictators or pre-empted the emergence of democratic regimes in many other
countries until the mid 1970s. Which not coincidentally is the time that
mechanical cipher machines were being replaced by electronic.

I have had a rather closer view of your establishment than most. You have
retired four star generals suggesting that in the case of a cyber-attack
against critical infrastructure, the government should declare martial law
within hours. It is not hard to see where that would lead; there are plenty
of US military types who would dishonor their uniforms with a coup at home.
I have met them.


My view is that we would all be rather safer if the NSA went completely
dark for a while, at least until there has been some accountability for the
crimes of the '00s and a full account of which coups the CIA backed, who
authorized them and why.

I have lived with terrorism all my life. My family was targeted by
terrorists that Rep King and Rudy Giuliani profess to wholeheartedly
support to this day. I am not concerned about the terrorists because they
obviously can't win. It is like the current idiocy in Congress, the
Democrats are bound to win because at the end of the day the effects of the
recession that the Republicans threaten to cause will be temporary while
universal health care will be permanent. The threatened harm is not great
enough to cause a change in policy. The only cases where terrorist tactics
have worked is where a small minority have been trying to suppress the
majority, as in Rhodesia or French occupied Spain during the Napoleonic
wars.

But when I see politicians passing laws to stop people voting, judges
deciding that the votes in a Presidential election cannot be counted and
all the other right wing antics taking place in the US at the moment, the
risk of a right wing fascist coup has to be taken seriously.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Nico Williams
On Sat, Oct 05, 2013 at 09:29:05PM -0400, John Kelsey wrote:
> One thing that seems clear to me:  When you talk about algorithm
> flexibility in a protocol or product, most people think you are
> talking about the ability to add algorithms.  Really, you are talking
> more about the ability to *remove* algorithms.  We still have stuff
> using MD5 and RC4 (and we'll probably have stuff using dual ec drbg
> years from now) because while our standards have lots of options and
> it's usually easy to add new ones, it's very hard to take any away.  

Algorithm agility makes it possible to add and remove algorithms.  Both,
addition and removal, are made difficult by the fact that it is
difficult to update deployed code.  Removal is made much more difficult
still by the need to remain interoperable with legacy that has been
deployed and won't be updated fast enough.  I don't know what can be
done about this.  Auto-update is one part of the answer, but it can't
work for everything.

I like the idea of having a CRL-like (or OCSP-like?) system for
"revoking" algorithms.  This might -in some cases- do nothing more
than warn the user, or -in other cases- trigger auto-update checks.
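A CRL-like list of revoked algorithms might look like the following sketch; the algorithm names and the warn/reject policies are invented for illustration and are not from any real standard or distribution mechanism.

```python
# Hypothetical "algorithm revocation list", analogous to a certificate CRL.
# In a real deployment this table would be fetched and signature-checked,
# like a CRL or an OCSP response.
REVOKED = {
    "md5":          "reject",  # broken: refuse to negotiate at all
    "rc4":          "reject",
    "dual-ec-drbg": "reject",
    "sha1":         "warn",    # deprecated: allow, but warn the user
}

def check_algorithm(name: str) -> str:
    """Return 'ok', 'warn', or 'reject' for a negotiated algorithm name."""
    return REVOKED.get(name.lower(), "ok")

assert check_algorithm("MD5") == "reject"
assert check_algorithm("sha1") == "warn"
assert check_algorithm("sha3-256") == "ok"
```

The "warn" tier matches the idea above that revocation might do nothing more than alert the user or trigger an auto-update check, while "reject" is the hard removal we currently find so difficult.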

But, really, legacy is a huge problem that we barely know how to
ameliorate a little.  It still seems likely that legacy code will
continue to remain deployed for much longer than the advertised
service lifetime of the same code (see XP, for example), and for at
least a few more product lifecycles (i.e., another 10-15 years
before we come up with a good solution).

Nico
-- 


[Cryptography] RSA-210 factored

2013-10-07 Thread RTF
Hi guys,

Thought this might (still) be of some interest:
http://www.mersenneforum.org/showpost.php?p=354259


rtf



[Cryptography] Elliptic curve question

2013-10-07 Thread Lay András
Hi!

I made a simple elliptic curve utility in command line PHP:

https://github.com/LaySoft/ecc_phgp

I know that in RSA, signing is the inverse operation of encryption, so two
different keypairs are needed for encryption and signing. In elliptic curve
cryptography, signing is not the inverse operation of encryption, so my
application uses the same keypair for both encryption and signing.

Is this correct?
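One cautious way to sidestep the question entirely is to never share key material across purposes: derive independent per-purpose secrets from one master key using domain-separated labels. A minimal sketch, with illustrative labels and an HMAC-based derivation that is not a full HKDF:

```python
import hashlib
import hmac

def derive_seed(master: bytes, purpose: bytes) -> bytes:
    """Derive an independent per-purpose seed from one master secret.
    Domain separation by purpose label avoids reusing the same key for
    signing and encryption (illustrative sketch, not a full HKDF)."""
    return hmac.new(master, purpose, hashlib.sha256).digest()

master = b"\x01" * 32  # placeholder master secret
sign_seed = derive_seed(master, b"sign")     # seed a signing keypair from this
enc_seed = derive_seed(master, b"encrypt")   # seed an encryption keypair from this

# Distinct labels yield independent seeds, so the two keypairs never overlap.
assert sign_seed != enc_seed
```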

Thank you!

Lay