Re: [Cryptography] Sha3

2013-10-07 Thread Ray Dillinger
On 10/04/2013 07:38 AM, Jerry Leichter wrote:
 On Oct 1, 2013, at 5:34 AM, Ray Dillinger b...@sonic.net wrote:
 What I don't understand here is why the process of selecting a standard 
 algorithm for cryptographic primitives is so highly focused on speed. 

 If you're going to choose a single standard cryptographic algorithm, you have 
 to consider all the places it will be used.  ...

 It is worth noting that NSA seems to produce suites of algorithms optimized 
 for particular uses and targeted for different levels of security.  Maybe 
 it's time for a similar approach in public standards.

I believe you are right about this.  The problem with AES (etc.) really is that
people were trying to find *ONE* cryptographic primitive for use across a very
wide range of clients, many of which it is inappropriate for (too light for
first-class or long-term protection of data, too heavy for transient realtime
signals on embedded low-power chips).

I probably care less than most people about the low-power devices dealing with
transient realtime signals, and more about long-term data protection than most
people.  So, yeah, I'm annoyed that the standard algorithm is insufficient to
just *STOMP* the problem and instead requires occasional replacement, when
*STOMP* is well within my CPU capabilities, power budget, and timing
requirements.  But somebody else is probably annoyed that people want them to
support AES when they were barely able to do WEP on their tiny power budget
fast enough to be non-laggy.

These are problems that were never going to have a common solution.

Bear


Re: [Cryptography] Sha3

2013-10-07 Thread Jerry Leichter
On Oct 5, 2013, at 6:12 PM, Ben Laurie wrote:
 I have to take issue with this:
 
 The security is not reduced by adding these suffixes, as this is only
 restricting the input space compared to the original Keccak. If there
 is no security problem on Keccak(M), there is no security problem on
 Keccak(M|suffix), as the latter is included in the former.
I also found the argument here unconvincing.  After all, Keccak restricted to 
the set of strings of the form M|suffix reveals that its input ends with 
suffix, which the original Keccak did not.  The problem is with the vague 
nature of "no security problem".

To really get at this, I suspect you have to make some statement saying that 
your expectation about the last |suffix| bits of the input is the same before 
and after you see the Keccak output, given your prior expectation about those 
bits.  But of course that's clearly the kind of statement you need *in 
general*:  Keccak("Hello world") is some fixed value, and if you see it, your 
expectation that the input was "Hello world" will get close to 1 as you 
receive more output bits!

 In other words, I have to also make an argument about the nature of
 the suffix and how it can't have been chosen s.t. it influences the
 output in a useful way.
If the nature of the suffix and how it's chosen could affect Keccak's output in 
some predictable way, it would not be secure.  Keccak's security is defined in 
terms of indistinguishability from a sponge with the same internal construction 
but a random round function (chosen from some appropriate class).  A random 
function won't show any particular interactions with chosen suffixes, so Keccak 
had better not either.

 I suspect I should agree with the conclusion, but I can't agree with
 the reasoning.
Yes, it would be nice to see this argued more fully.

-- Jerry




Re: [Cryptography] Sha3

2013-10-07 Thread John Kelsey
On Oct 6, 2013, at 6:29 PM, Jerry Leichter leich...@lrw.com wrote:

 On Oct 5, 2013, at 6:12 PM, Ben Laurie wrote:
 I have to take issue with this:
 
 The security is not reduced by adding these suffixes, as this is only
 restricting the input space compared to the original Keccak. If there
 is no security problem on Keccak(M), there is no security problem on
 Keccak(M|suffix), as the latter is included in the former.
 I also found the argument here unconvincing.  After all, Keccak restricted to 
 the set of strings of the form M|suffix reveals that its input ends with 
 suffix, which the original Keccak did not.  The problem is with the vague 
 nature of "no security problem".

They are talking about the change to their padding scheme, in which between 2 
and 4 bits of extra padding are added to the padding scheme that was originally 
proposed for SHA3.  A hash function that works by processing r bits at a time 
till the whole message is processed (every hash function I can think of works 
like this) has to have a padding scheme, so that when someone tries to hash 
some message that's not a multiple of r bits long, the message gets padded out 
to a multiple of r bits.  

The only security relevance of the padding scheme is that it has to be 
invertible--given the padded string, there must always be exactly one input 
string that could have led to that padded string.  If it isn't invertible, then 
the padding scheme would introduce collisions.  For example, if your padding 
scheme was "append zeros until you get the message out to a multiple of r 
bits", I could get collisions on your hash function by taking some message that 
was not a multiple of r bits, and appending one or more zeros to it.  Just 
appending a single one bit, followed by as many zeros as are needed to get to a 
multiple of r bits, makes a fine padding scheme, so long as the one bit is 
appended to *every* message, even those which start out a multiple of r bits 
long.
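To make that concrete, here's a toy byte-level sketch in Python (real padding 
works at the bit level, and sha256 here just stands in for "process the padded 
string"; none of this is the actual Keccak padding):

    import hashlib

    R = 8  # toy block size in bytes; real rates are far larger

    def pad_zeros(msg: bytes) -> bytes:
        # Broken: append zeros up to a multiple of R.  Not invertible.
        return msg + b"\x00" * ((-len(msg)) % R)

    def pad_one_then_zeros(msg: bytes) -> bytes:
        # Sound: always append a 0x01 byte, then zeros up to a multiple of R.
        m = msg + b"\x01"
        return m + b"\x00" * ((-len(m)) % R)

    def toy_hash(msg: bytes, pad) -> str:
        return hashlib.sha256(pad(msg)).hexdigest()

    # Zero-padding collides on distinct inputs:
    assert toy_hash(b"abc", pad_zeros) == toy_hash(b"abc\x00", pad_zeros)
    # The mandatory trailing one (here, a whole byte) keeps inputs distinct:
    assert pad_one_then_zeros(b"abc") != pad_one_then_zeros(b"abc\x00")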

The Keccak team proposed adding a few extra bits to their padding, to add 
support for tree hashing and to distinguish different fixed-length hash 
functions that used the same capacity internally.  They really just need to 
argue that they haven't somehow broken the padding so that it is no longer 
invertible.

They're making this argument by pointing out that you could simply stick the 
fixed extra padding bits on the end of a message you processed with the 
original Keccak spec, and you would get the same result as what they are doing. 
 So if there is any problem introduced by sticking those extra bits at the end 
of the message before doing the old padding scheme, an attacker could have 
caused that same problem on the original Keccak by just sticking those extra 
bits on the end of messages before processing them with Keccak.  
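Schematically, the reduction is one line of code.  A hedged sketch (hashlib's 
sha3_256 stands in for the original Keccak purely so this runs; it is *not* 
the pre-suffix function, and the byte-sized SUFFIX stands in for the 2-4 
domain-separation bits):

    import hashlib

    SUFFIX = b"\x01"

    def old_keccak(msg: bytes) -> bytes:
        # Placeholder for Keccak with the originally proposed padding.
        return hashlib.sha3_256(msg).digest()

    def suffixed_hash(msg: bytes) -> bytes:
        # The new variant is, by construction, the old function applied to
        # msg || SUFFIX, so any collision here is a collision on old_keccak
        # between two strings that both happen to end in SUFFIX.
        return old_keccak(msg + SUFFIX)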


--John


Re: [Cryptography] Sha3

2013-10-07 Thread Jerry Leichter
On Oct 6, 2013, at 11:41 PM, John Kelsey wrote:
 ...They're making this argument by pointing out that you could simply stick 
 the fixed extra padding bits on the end of a message you processed with the 
 original Keccak spec, and you would get the same result as what they are 
 doing.  So if there is any problem introduced by sticking those extra bits at 
 the end of the message before doing the old padding scheme, an attacker could 
 have caused that same problem on the original Keccak by just sticking those 
 extra bits on the end of messages before processing them with Keccak.  
This style of argument makes sense for encryption functions, where it's a 
chosen plaintext attack, since the goal is to determine the key.  But it makes 
no sense for a hash function:  If the attacker can specify something about the 
input, he ... knows something about the input!  You need to argue that after 
looking at the output he knows *no more than that*.

While both Ben and I are convinced that in fact the suffix can't affect 
security, the *specific wording* doesn't really give an argument for why.

-- Jerry



Re: [Cryptography] Sha3

2013-10-07 Thread Peter Fairbrother

On 05/10/13 20:00, John Kelsey wrote:

http://keccak.noekeon.org/yes_this_is_keccak.html



Seems the Keccak people take the position that Keccak is actually a way 
of creating hash functions, rather than a specific hash function - the 
created functions may be ridiculously strong, or far too weak.


It also seems NIST think a competition is a way of creating a hash 
function - rather than a way of competitively choosing one.



I didn't follow the competition, but I don't actually see anybody being 
right here. NIST is probably just being incompetent, not malicious, but 
their detractors have a point too.


The problem is that the competition was, or should have been, for a 
single [1] hash function, not for a way of creating hash functions - and 
in my opinion only a single actual hash function based on Keccak should 
have been allowed to enter.


I think that's what actually happened, and an actual function was 
entered. The Keccak people changed it a little between rounds, as is 
allowed, but by the final round the entries should all have been fixed 
in stone.


With that in mind, there is no way the hash which won the competition 
should be changed by NIST.


If NIST do start changing things - whatever the motive - the benefits 
of openness and fairness of the competition are lost, as is the analysis 
done on the entries.


If NIST do start changing things, then nobody can say SHA-3 was chosen 
by an open and fair competition.


And if that didn't happen, if a specific and well-defined hash was not 
entered, the competition was not open in the first place.




Now in the new SHA-4 competition TBA soon, an actual specific hash 
function based on Keccak may well be the winner - but then what is 
adopted will be what was actually entered.


The work done (for free!) by analysts during the competition will not be 
wasted on a changed specification.




[1] it should have been for a _single_ hash function, not two or three 
functions with different parameters. I know the two-security-level model 
is popular with NSA and the like, probably for historical export 
reasons, but it really doesn't make any sense for the consumer.


It is possible to make cryptography which we think is resistant to all 
possible/likely attacks. That is what the consumer wants and needs: one 
cryptography which he can trust in, resistant against both his baby 
sister and the NSA.


We can do that. In most cases that sort of cryptography doesn't take 
even measurable resources.



The sole and minimal benefit of having two functions (from a single 
family) - cheaper computation for low power devices, there are no other 
real benefits - is lost in the roar of the costs.


There is a case for having two or more systems - monocultures are 
brittle against failures, and like the Irish Potato Famine a single 
failure can be catastrophic - but two systems in the same family do not 
give the best protection against that.


The disadvantages of having two or more hash functions? For a start, 
people don't know what they are getting. They don't know how secure it 
will be - are you going to tell users whether they are using HASH_lite 
rather than HASH_strong every time? And expect them to understand that?


Second, most devices have to have different software for each function - 
and they have to be able to accept data and operations for more than one 
function as well, which opens up potential security holes.


I could go on, but I hope you get the point already.

-- Peter Fairbrother


Re: [Cryptography] Sha3

2013-10-07 Thread Peter Fairbrother

On 05/10/13 00:09, Dan Kaminsky wrote:

Because not being fast enough means you don't ship.  You don't ship, you
didn't secure anything.

Performance will in fact trump security.  This is the empirical reality.
There's some budget for performance loss. But we have lots and lots of
slow functions. Fast is the game.


That may once have been mostly true, but no longer - now it's mostly false.

In almost every case nowadays the speed at which a device computes a 
SHA-3 hash doesn't matter at all. Devices are either way fast enough, or 
they can't use SHA-3 at all, whether or not it is made 50% faster.




(Now, whether my theory that we stuck with MD5 over SHA1 because
variable field lengths are harder to parse in C -- that's an open
question to say the least.)


:)

-- Peter Fairbrother


Re: [Cryptography] Sha3

2013-10-07 Thread Jerry Leichter
On Oct 7, 2013, at 6:04 PM, Philipp Gühring p...@futureware.at wrote:
 it makes no sense for a hash function:  If the attacker can specify
 something about the input, he ... knows something about the input!  
 Yes, but since it's standardized, it's public knowledge, and just knowing
 the padding does not give you any other knowledge about the rest.
You're assuming what the argument is claiming to prove.

 What might be though is that Keccak could have some hidden internal
 backdoor function (which I think is very unlikely given what I read about
 it until now, I am just speaking hypothetically) that reduces the
 effective output size, if and only if the input has certain bits at the end.
Well, sure, such a thing *might* exist, though there's no (publicly) known 
technique for embedding such a thing in the kind of combinatorial mixing 
permutation that's at the base of Keccak and pretty much every hash function 
and block encryption function since DES - though the basic idea goes back to 
Shannon in the 1940's.

I will say that the Keccak analysis shows both the strength and the weakness of 
the current (public) state of the art.  Before differential cryptanalysis, 
pretty much everything in this area was guesswork.  In the last 30-40 years 
(depending on whether you want to start with IBM's unpublished knowledge of the 
technique going back, according to Coppersmith, to 1974, or from Biham and 
Shamir's rediscovery and publication in the late 1980's), the basic idea has 
been expanded to a variety of related attacks, with very sophisticated modeling 
of exactly what you can expect to get from attacks under different 
circumstances.  The Keccak analysis goes through a whole bunch of these.  They 
make a pretty convincing argument that (a) no known attack can get anything 
much out of Keccak; (b) it's unlikely that there's an attack along the same 
general lines as currently known attacks that will work against it either.

The problem - and it's an open problem for the whole field - is that none of 
this gets at the question of whether there is some completely different kind of 
attack that would slice right through Keccak or AES or any particular 
algorithm, or any particular class of algorithms.  If you compare the situation 
to that in asymmetric crypto, our asymmetric algorithms are based on clean, 
simple mathematical structures about which we can prove a great deal, but that 
have buried within them particular problems that we believe, on fairly strong 
if hardly completely dispositive evidence, are hard.  For symmetric algorithms, 
we pretty much *rely* on the lack of any simple mathematical structure - which, 
in a Kolmogorov-complexity-style argument, just means there appear to be no 
short descriptions in tractable terms of what these transformations do.  For 
example, if you write the transformations down as Boolean formulas in CNF or 
DNF, the results are extremely large, with irregular, highly inter-twined 
terms.  Without that, various Boolean solvers would quickly cut them to ribbons.
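As a toy illustration (4-bit functions, so the numbers are tiny; at full width 
the blow-up is astronomical), compare the algebraic normal form of a linear 
function with one output bit of a real published 4-bit S-box - PRESENT's, 
chosen here purely as a convenient example:

    # Fast Moebius transform: truth table (length 2^n) -> ANF coefficients.
    def anf(tt):
        n = len(tt).bit_length() - 1
        c = list(tt)
        for i in range(n):
            for j in range(len(c)):
                if j & (1 << i):
                    c[j] ^= c[j ^ (1 << i)]
        return c

    def report(name, tt):
        terms = [j for j, bit in enumerate(anf(tt)) if bit]
        degree = max((bin(j).count("1") for j in terms), default=0)
        print(name, len(terms), "monomials, degree", degree)

    SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

    report("parity (linear)", [bin(x).count("1") & 1 for x in range(16)])
    report("S-box bit 3", [(SBOX[x] >> 3) & 1 for x in range(16)])

The linear function stays at degree 1; the S-box bit needs twice the monomials 
and jumps to degree 3, and that gap explodes as the width grows.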

In some sense, DC and related techniques say "OK, the complexity of the 
function itself is high, but if I look at the differentials, I can find some 
patterns that are simple enough to work with."
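The differential bookkeeping is just as mechanical at toy scale.  For the same 
4-bit S-box as above, count how often each nonzero input difference maps to 
each output difference; the exploitable "patterns" are the counts sitting well 
above the average of one:

    from collections import Counter

    SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

    best = max(
        (count, dx, dy)
        for dx in range(1, 16)
        for dy, count in Counter(SBOX[x] ^ SBOX[x ^ dx]
                                 for x in range(16)).items()
    )
    print("best differential: dx=%x -> dy=%x holds for %d/16 inputs"
          % (best[1], best[2], best[0]))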

If there's an attack, it's likely to be based on something other than Boolean 
formulas written out in any form we currently work with, or anything based on 
differentials.  It's likely to come out of a representation entirely different 
from anything anyone has thought of.  You'd need that to do key recovery; you'd 
also need it to embed a back door (like a sensitivity to certain input 
patterns).  The fact that no one has found such a thing (publicly, at least) 
doesn't mean it can't exist; we just don't know what we don't know.  Surprising 
results like this have appeared before; in a sense, all of mathematics is about 
finding simple, tractable representations that turn impossible problems into 
soluble ones.
-- Jerry



Re: [Cryptography] Sha3

2013-10-06 Thread Ben Laurie
On 5 October 2013 20:18, james hughes hugh...@mac.com wrote:
 On Oct 5, 2013, at 12:00 PM, John Kelsey crypto@gmail.com wrote:

 http://keccak.noekeon.org/yes_this_is_keccak.html

 From the authors: "NIST's current proposal for SHA-3 is a subset of the 
 Keccak family, one can generate the test vectors for that proposal using 
 the Keccak reference code," and this shows that the [SHA-3] cannot contain 
 internal changes to the algorithm.

 The process of setting the parameters is an important step in 
 standardization. NIST has done this and the authors state that this has not 
 crippled the algorithm.

 I bet this revelation does not make it to Slashdot…

 Can we put this to bed now?

I have to take issue with this:

The security is not reduced by adding these suffixes, as this is only
restricting the input space compared to the original Keccak. If there
is no security problem on Keccak(M), there is no security problem on
Keccak(M|suffix), as the latter is included in the former.

I could equally argue, to take an extreme example:

The security is not reduced by adding these suffixes, as this is only
restricting the input space compared to the original Keccak. If there
is no security problem on Keccak(M), there is no security problem on
Keccak(preimages of Keccak(42)), as the latter is included in the
former.

In other words, I have to also make an argument about the nature of
the suffix and how it can't have been chosen s.t. it influences the
output in a useful way.

I suspect I should agree with the conclusion, but I can't agree with
the reasoning.


Re: [Cryptography] Sha3

2013-10-06 Thread Christoph Anton Mitterer
On Sat, 2013-10-05 at 12:18 -0700, james hughes wrote:
 and the authors state that
You know why other people than the authors are doing cryptoanalysis on
algorithms? Simply because the authors may also oversee something in the
analysis of their own algorithm.

So while the argument "the original authors said it's fine" sounds quite
convincing, it is absolutely not - at least not per se.
The authors may be wrong or they may even be bought as well by NSA or
some other organisation.

Of course this doesn't mean that I'd have indication that any of this
was the case... I just don't like this narrow-minded "they said it's
okay, thus we must kill off any discussion" argument, which has been
dropped several times now.


Cheers,
Chris.


Re: [Cryptography] Sha3 and selecting algorithms for speed

2013-10-05 Thread John Kelsey
Most applications of crypto shouldn't care much about performance of the 
symmetric crypto, as that's never the thing that matters for slowing things 
down.  But performance continues to matter in competitions and algorithm 
selection for at least three reasons:

a.  We can measure performance, whereas security is very hard to measure.  
There are a gazillion ways to measure performance, but each one gives you an 
actual set of numbers.  Deciding whether JH or Grøstl is more likely to fall to 
cryptanalytic attack in its lifetime is an exercise in reading lots of papers, 
extrapolating, and reading tea leaves.

b.  There are low-end environments where performance really does matter.  Those 
often have rather different properties than other environments--for example, 
RAM or ROM (for program code and S-boxes) may be at a premium.  

c.  There are environments where someone is doing a whole lot of symmetric 
crypto at once--managing the crypto for lots of different connections, say.  In 
that case, your symmetric algorithm's speed may also have a practical impact.  
(Though it's still likely to be swamped by your public key algorithms.)   

--John


Re: [Cryptography] Sha3

2013-10-05 Thread Dan Kaminsky
Because not being fast enough means you don't ship.  You don't ship, you
didn't secure anything.

Performance will in fact trump security.  This is the empirical reality.
There's some budget for performance loss. But we have lots and lots of
slow functions. Fast is the game.

(Now, whether my theory that we stuck with MD5 over SHA1 because variable
field lengths are harder to parse in C -- that's an open question to say
the least.)

On Tuesday, October 1, 2013, Ray Dillinger wrote:

 What I don't understand here is why the process of selecting a standard
 algorithm for cryptographic primitives is so highly focused on speed.

 We have machines that are fast enough now that while speed isn't a non
 issue, it is no longer nearly as important as the process is giving it
 precedence for.

 Our biggest problem now is security,  not speed. I believe that it's a bit
 silly to aim for a minimum acceptable security achievable within the
 context of speed while experience shows that each new class of attacks is
 usually first seen against some limited form of the cipher or found to be
 effective only if the cipher is not carried out to a longer process.



  Original message 
 From: John Kelsey crypto@gmail.com
 Date: 09/30/2013 17:24 (GMT-08:00)
 To: cryptography@metzdowd.com List cryptography@metzdowd.com

 Subject: [Cryptography] Sha3


 If you want to understand what's going on wrt SHA3, you might want to look
 at the nist website, where we have all the slide presentations we have been
 giving over the last six months detailing our plans.  There is a lively
 discussion going on at the hash forum on the topic.

 This doesn't make as good a story as the new sha3 being some hell spawn
 cooked up in a basement at Fort Meade, but it does have the advantage that
 it has some connection to reality.

 You might also want to look at what the Keccak designers said about what
 the capacities should be, to us (they put their slides up) and later to
 various crypto conferences.

 Or not.

 --John



Re: [Cryptography] Sha3

2013-10-05 Thread Jerry Leichter
On Oct 1, 2013, at 5:34 AM, Ray Dillinger b...@sonic.net wrote:
 What I don't understand here is why the process of selecting a standard 
 algorithm for cryptographic primitives is so highly focused on speed. 
If you're going to choose a single standard cryptographic algorithm, you have 
to consider all the places it will be used.  These range from very busy front 
ends - where people to this day complain (perhaps with little justification, 
but they believe that for them it's a problem) that doing an RSA operation per 
incoming connection is too expensive (and larger keys will only make it worse), 
to phones (where power requirements are more of an issue than raw speed) to 
various embedded devices (where people still use cheap devices because every 
penny counts) to all kinds of devices that have to run for a long time off a 
local battery or even off of microwatts of power transferred to an unpowered 
device by a reader.
 
 We have machines that are fast enough now that while speed isn't a non issue, 
 it is no longer nearly as important as the process is giving it precedence 
 for.
We do have such machines, but we also have - and will have, for the foreseeable 
future - machines for which this is *not* the case.

Deciding on where to draw the line and say "I don't care if you can support 
this algorithm in a sensor designed to be put in a capsule and swallowed to 
transmit pictures of the GI tract for medical analysis" is not a scientific 
question; it's a policy question.

 Our biggest problem now is security,  not speed. I believe that it's a bit 
 silly to aim for a minimum acceptable security achievable within the context 
 of speed while experience shows that each new class of attacks is usually 
 first seen against some limited form of the cipher or found to be effective 
 only if the cipher is not carried out to a longer process.  
The only problem with this argument is that the biggest problem is hard to 
pin down.  There's little evidence that the symmetric algorithms we have today 
are significant problems.  There is some evidence that some of the asymmetric 
algorithms may have problems due to key size, or deliberate subversion.  Fixing 
the first of these does induce significant costs; fixing the second first of 
all requires some knowledge of the nature of the subversion.  But beyond all 
this the biggest problems we've seen have to do with other components, like 
random number generators, protocols, infiltration of trusted systems, and so 
on.  None of these is amenable to defense by removing constraints on 
performance.  (The standardized random number generator that ignored 
performance to be really secure turned out to be anything but!)

We're actually moving in an interesting direction.  At one time, the cost of 
decent crypto algorithms was high enough to be an issue for most hardware.  DES 
at the speed of the original 10Mb/sec Ethernet was a significant engineering 
accomplishment!  These days, even the lowest end traditional computer has 
plenty of spare CPU to run even fairly expensive algorithms - but at the same 
time we're pushing more and more into a world of tiny, low-powered machines 
everywhere.  The ratio of speed and supportable power consumption and memory 
between the average large machine and the average tiny machine is wider 
than it's ever been.  At the low end, the exposure is different:  An attacker 
typically has to be physically close to even talk to the device, there are only 
a small number of communications partners, and any given device has relatively 
little information within it.  Perhaps a lower level of security is appropriate 
in such situations.  (Of course, as we've seen with SCADA systems, there's a 
temptation to just put these things directly on the 
Internet - in which case all these assumptions fail.  A higher-level problem:  
If you take this approach, you need to make sure that the devices are only 
accessible through a gateway with sufficient power to run stronger algorithms.  
How do you do that?)

So perhaps the assumption that needs to be reconsidered is that we can design a 
single algorithm suitable across the entire spectrum.  Currently we have 
SHA-128 and SHA-256, but exactly why one should choose one or the other has 
never been clear - SHA-256 is somewhat more expensive, but I can't think of any 
examples where SHA-128 would be practical but SHA-256 would not.  In practice, 
when CPU is thought to be an issue (rightly or wrongly), people have gone with 
RC4 - standards be damned.

It is worth noting that NSA seems to produce suites of algorithms optimized for 
particular uses and targeted for different levels of security.  Maybe it's time 
for a similar approach in public standards.

-- Jerry




Re: [Cryptography] Sha3

2013-10-05 Thread Phillip Hallam-Baker
On Fri, Oct 4, 2013 at 12:27 AM, David Johnston d...@deadhat.com wrote:

  On 10/1/2013 2:34 AM, Ray Dillinger wrote:

 What I don't understand here is why the process of selecting a standard
 algorithm for cryptographic primitives is so highly focused on speed. ~


 What makes you think Keccak is faster than the alternatives that were not
 selected? My implementations suggest otherwise.
 I thought the main motivation for selecting Keccak was Sponge good.


You mean Keccak is spongeworthy.


I do not accept the argument that the computational work factor should be
'balanced' in the way suggested.

The security of a system is almost always better measured by looking at the
work factor for breaking an individual message rather than the probability
that two messages might be generated in circumstances that cancel each
other out.

Given adequate cryptographic precautions (e.g. random serial numbers), a
certificate authority can still use MD5 with an acceptable level of
security even with the current attacks. They would be blithering idiots to
do so of course, but Flame could have been prevented with certain
precautions.

If a hash has a 256 bit output I know that I cannot use it in a database if
the number of records approaches 2^128. But that isn't really a concern to
me. The reason I use a 256 bit hash is because I want a significant safety
margin on the pre-image work factor.

If I was really confident that the 2^128 work factor really is 2^128 then I
would be happy using a 128 bit hash for most designs. In fact in
PRISM-Proof Email I am currently using a 226 bit Subject Key Identifier
because I can encode that in BASE64 and the result is about the same length
as a PGP fingerprint. But I really do want that 2^256 work factor.

If Keccak was weakened in the manner proposed I would probably use the 512
bit version instead and truncate.
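In code that fallback is a one-liner - a sketch, not a recommendation, using 
Python 3.6+ hashlib:

    import hashlib

    def truncated_sha3_512(data: bytes) -> bytes:
        # 512-bit hash truncated to 256 bits: keeps the wider internal
        # security margin while still producing a 32-byte identifier.
        return hashlib.sha3_512(data).digest()[:32]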

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Sha3

2013-10-05 Thread james hughes

On Oct 3, 2013, at 9:27 PM, David Johnston d...@deadhat.com wrote:

 On 10/1/2013 2:34 AM, Ray Dillinger wrote:
 What I don't understand here is why the process of selecting a standard 
 algorithm for cryptographic primitives is so highly focused on speed. ~
 
 What makes you think Keccak is faster than the alternatives that were not 
 selected? My implementations suggest otherwise.
 I thought the main motivation for selecting Keccak was Sponge good.

I agree: Sponge Good, Merkle–Damgård Bad. Simple enough. 
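For anyone who hasn't looked at the construction, the whole sponge idea fits 
in a dozen lines.  A toy byte-level sketch (sha256 stands in for the Keccak-f 
permutation purely so it runs - it isn't even a permutation - and the sizes 
are nothing like the real ones):

    import hashlib

    RATE, CAPACITY = 8, 24            # toy; SHA3-256 uses 136 and 64 bytes

    def perm(state: bytes) -> bytes:  # stand-in for Keccak-f
        return hashlib.sha256(state).digest()   # 32 bytes = RATE + CAPACITY

    def sponge(msg: bytes, out_len: int = 32) -> bytes:
        # pad10*1-style padding, at byte granularity
        msg += b"\x01" + b"\x00" * ((-len(msg) - 2) % RATE) + b"\x80"
        state = bytes(RATE + CAPACITY)
        for i in range(0, len(msg), RATE):       # absorb: XOR into the rate
            block = msg[i:i + RATE] + bytes(CAPACITY)
            state = perm(bytes(a ^ b for a, b in zip(state, block)))
        out = b""
        while len(out) < out_len:                # squeeze: emit only the rate
            out += state[:RATE]
            state = perm(state)
        return out[:out_len]

The capacity bytes are never emitted and never directly absorbed; that hidden 
state is where the security argument lives, which is exactly why NIST's 
proposed change to the capacity parameter is what this thread is arguing 
about. 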

I believe this thread is not about the choice of Keccak for SHA3, it is about 
NIST's changes of Keccak for SHA3. 

[Instead of pontificating at length based on conjecture and conspiracy theories 
and smearing reputations based on nothing other than hot air] Someone on this 
list must know the authors of Keccak. Why not ask them? They are the ones that 
know the most about the algorithm, why the parameters are what they are and 
what the changes mean for their vision. 

Here is my question for them: "In light of the current situation, what is your 
current opinion of NIST's changes from Keccak as you specified it to SHA-3 as 
NIST standardized it?" 

If the Keccak authors are OK with the changes, who are we to argue about these 
changes? 

If the Keccak authors don't like the changes, given the situation NIST is in, I 
bet NIST will have no recourse but to re-open the SHA3 discussion.

Jim



Re: [Cryptography] Sha3

2013-10-05 Thread John Kelsey
http://keccak.noekeon.org/yes_this_is_keccak.html

--John

Re: [Cryptography] Sha3

2013-10-05 Thread radix42
Jerry Leichter wrote:
Currently we have SHA-128 and SHA-256, but exactly why one should choose one 
or the other has never been clear - SHA-256 is somewhat more expensive, but 
I can't think of any examples where SHA-128 would be practical but SHA-256 
would not.  In practice, when CPU is thought to be an issue (rightly or 
wrongly), people have gone with RC4 - standards be damned.

SHA-224/256 (there is no SHA-128) use 32-bit words, while SHA-384/512 use 
64-bit words. That difference is indeed a very big deal in embedded device 
applications. SHA-3 uses only 64-bit words, which will likely preclude it being 
used in most embedded devices for the foreseeable future. 
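You can see the flip side on a 64-bit desktop, where the 64-bit-word functions 
often win per byte.  A rough sketch (Python 3.6+ for sha3_256; the numbers it 
prints are host-specific and say nothing about a 32-bit microcontroller - 
which is exactly the point):

    import hashlib
    import timeit

    msg = b"\x00" * 4096
    for name in ("sha256", "sha512", "sha3_256"):
        f = getattr(hashlib, name)
        t = timeit.timeit(lambda f=f: f(msg).digest(), number=10000)
        print(name, round(t, 3), "seconds for 10k x 4 KiB")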

-David Mercer
Portland, OR



Re: [Cryptography] Sha3

2013-10-05 Thread James A. Donald

On 2013-10-05 16:40, james hughes wrote:

 Instead of pontificating at length based on conjecture and conspiracy
 theories and smearing reputations based on nothing other than hot air

But there really is a conspiracy, which requires us to consider 
conjectures as serious risks, and people deserve to have their 
reputations smeared for the appearance of being in bed with that conspiracy.




Re: [Cryptography] Sha3

2013-10-05 Thread Jerry Leichter
On Oct 5, 2013, at 11:54 AM, radi...@gmail.com wrote:
 Jerry Leichter wrote:
 Currently we have SHA-128 and SHA-256, but exactly why one should choose 
 one or the other has never been clear - SHA-256 is somewhat more 
 expensive, but I can't think of any examples where SHA-128 would be 
 practical but SHA-256 would not.  In practice, when CPU is thought to be an 
 issue (rightly or wrongly), people have gone with RC4 - standards be 
 damned.
 
 SHA-224/256 (there is no SHA-128) use 32-bit words, SHA-384/512 uses 64-bit 
 words. That difference is indeed a very big deal in embedded device 
 applications. SHA-3 uses only 64-bit words, which will likely preclude it 
 being used in most embedded devices for the foreseeable future. 
Oops - acronym confusion between brain and keyboard.  I meant to talk about 
AES-128 and AES-256.
-- Jerry



Re: [Cryptography] Sha3

2013-10-05 Thread james hughes
On Oct 5, 2013, at 12:00 PM, John Kelsey crypto@gmail.com wrote:

 http://keccak.noekeon.org/yes_this_is_keccak.html

From the authors: "NIST's current proposal for SHA-3 is a subset of the Keccak 
family, one can generate the test vectors for that proposal using the Keccak 
reference code," and this shows that the [SHA-3] cannot contain internal 
changes to the algorithm.

The process of setting the parameters is an important step in standardization. 
NIST has done this and the authors state that this has not crippled the 
algorithm. 

I bet this revelation does not make it to Slashdot… 

Can we put this to bed now? 


Re: [Cryptography] Sha3

2013-10-04 Thread David Johnston

On 10/1/2013 2:34 AM, Ray Dillinger wrote:
What I don't understand here is why the process of selecting a 
standard algorithm for cryptographic primitives is so highly focused 
on speed. ~


What makes you think Keccak is faster than the alternatives that were 
not selected? My implementations suggest otherwise.

I thought the main motivation for selecting Keccak was Sponge good.


Re: [Cryptography] Sha3

2013-10-01 Thread James A. Donald

On 2013-10-01 10:24, John Kelsey wrote:

If you want to understand what's going on wrt SHA3, you might want to look at 
the nist website


If you want to understand what is going on with SHA3, and you believe 
that NIST is frank, open, honest, and has no ulterior motives, you might 
want to look at the NIST website.



Re: [Cryptography] Sha3

2013-10-01 Thread Ray Dillinger
What I don't understand here is why the process of selecting a standard 
algorithm for cryptographic primitives is so highly focused on speed. 

We have machines that are fast enough now that while speed isn't a non issue, 
it is no longer nearly as important as the process is giving it precedence for. 
 

Our biggest problem now is security,  not speed. I believe that it's a bit 
silly to aim for a minimum acceptable security achievable within the context of 
speed while experience shows that each new class of attacks is usually first 
seen against some limited form of the cipher or found to be effective only if 
the cipher is not carried out to a longer process.  

 Original message 
From: John Kelsey crypto@gmail.com 
Date: 09/30/2013  17:24  (GMT-08:00) 
To: cryptography@metzdowd.com List cryptography@metzdowd.com 
Subject: [Cryptography] Sha3 
 
If you want to understand what's going on wrt SHA3, you might want to look at 
the nist website, where we have all the slide presentations we have been giving 
over the last six months detailing our plans.  There is a lively discussion 
going on at the hash forum on the topic.  

This doesn't make as good a story as the new sha3 being some hell spawn cooked 
up in a basement at Fort Meade, but it does have the advantage that it has some 
connection to reality.

You might also want to look at what the Keccak designers said about what the 
capacities should be, to us (they put their slides up) and later to various 
crypto conferences.  

Or not.  

--John


Re: [Cryptography] Sha3

2013-10-01 Thread Ray Dillinger
Okay, I didn't express myself very well the first time I tried to say this.  
But as I see it, we're still basing the design of crypto algorithms on 
considerations that last had the importance we're treating them as having 
about twelve years ago. 

To make an analogy, it's like making tires when you need to have a ten thousand 
mile warranty.  When rubber is terribly expensive and the cars are fairly slow, 
 you make a tire that probably won't be good for twelve thousand miles. But 
it's now years later.  Rubber has gotten cheap and the cars are moving a lot 
faster and the cost of repairing or replacing crashed vehicles is now 
dominating the cost of rubber. Even if tire failure accounts for only a small 
fraction of that cost,  why shouldn't we be a lot more conservative in the 
design of our tires? A little more rubber is cheap and it would be nice to know 
that the tires will be okay even if the road turns out to be gravel. 

This is where I see crypto designers.  Compute power is cheaper than it's ever 
been but we're still treating it as though its importance hasn't changed. More 
is riding on the cost of failures and we've seen how failures tend to happen.  
Most of the attacks we've seen wouldn't have worked on the same ciphers if the 
ciphers had been implemented in a more conservative way.   A few more rounds of 
a block cipher or a wider hidden state for a PRNG, or longer RSA keys,  even 
though we didn't know at the time what we were protecting from, would have kept 
most of these things safe for years after the attacks or improved factoring 
methods were discovered.   

Engineering is about achieving the desired results using a minimal amount of 
resources.  When compute power was precious that meant minimizing compute 
power. But the cost now is mostly in redeploying and upgrading extant 
infrastructure.  And in a lot of cases we're having to do that because the 
crypto is now seen to be too weak.  When we try to minimize our use of 
resources,  we need to value them accurately. 

To me that means making systems that won't need to be replaced as often.  And 
just committing more of the increasingly cheap resource of compute power would 
have achieved that given most of the breaks we've seen in the past few years. 

To return to our road safety metaphor,  we're now asking ourselves if we're 
still confident in that ten thousand mile warranty now that we've discovered 
that the company that puts up road signs has also been contaminating our rubber 
formula, sneakily cutting brake lines,  and scattering nails on the road. Damn, 
it's enough to make you wish you'd overdesigned, isn't it?

 Original message 
From: John Kelsey crypto@gmail.com 
Date: 09/30/2013  17:24  (GMT-08:00) 
To: cryptography@metzdowd.com List cryptography@metzdowd.com 
Subject: [Cryptography] Sha3 
 
If you want to understand what's going on wrt SHA3, you might want to look at 
the nist website, where we have all the slide presentations we have been giving 
over the last six months detailing our plans.  There is a lively discussion 
going on at the hash forum on the topic.  

This doesn't make as good a story as the new sha3 being some hell spawn cooked 
up in a basement at Fort Meade, but it does have the advantage that it has some 
connection to reality.

You might also want to look at what the Keccak designers said about what the 
capacities should be, to us (they put their slides up) and later to various 
crypto conferences.  

Or not.  

--John


Re: [Cryptography] Sha3

2013-10-01 Thread Christoph Anton Mitterer
On Tue, 2013-10-01 at 02:34 -0700, Ray Dillinger wrote:
 What I don't understand here is why the process of selecting a
 standard algorithm for cryptographic primitives is so highly focused
 on speed. 
 
 
 We have machines that are fast enough now that while speed isn't a non
 issue, it is no longer nearly as important as the process is giving it
 precedence for.  
 
 
 Our biggest problem now is security,  not speed. I believe that it's a
 bit silly to aim for a minimum acceptable security achievable within
 the context of speed while experience shows that each new class of
 attacks is usually first seen against some limited form of the cipher
 or found to be effective only if the cipher is not carried out to a
 longer process.  

Absolutely agreeing... I mean that is the most important point about
crypto of all - being secure.
And if one is in doubt (and probably even when not), better use a very
big security margin, which in the SHA3 case would mean, rather take high
multiples of bit lengths and capacity than what seems conservatively
secure enough.

The argument that attackers don't penetrate but rather circumvent
cryptography doesn't count for much at all, IMHO.
Sure that's what happens in practise, but if we get hung up on that, we
could more or less drop any cryptography for say 98% of mankind which
use insecure (or even backdoored) systems like Windows, MacOS, Flash,
and so on.

Obviously, performance is an issue for some systems (especially
embedded) but an algo that is fast enough, but potentially not secure
enough is absolutely worthless[0].

Sure, some people utilise the FUD argument now,... basically pointing
out that we have no strong reason to believe that e.g. Keccak with the
newly proposed parameters from NIST isn't secure enough.
But if we should have learned one thing from the whole NSA/friends
scandal, it is ... we really don't have much of an idea of what these
guys are capable of - neither in terms of mathematics, nor in terms of
raw computing power (when the public already knows about facilities like
that Utah data centre - one can probably fairly well expect that dozens
of these exist which are unknown).

Cheers,
Chris.


[0] And if you want a fast hash algorithm that is not to be used in
cryptography, we have plenty of other solutions already.
