Re: [Cryptography] Sha3

2013-10-06 Thread Ben Laurie
On 5 October 2013 20:18, james hughes hugh...@mac.com wrote:
 On Oct 5, 2013, at 12:00 PM, John Kelsey crypto@gmail.com wrote:

 http://keccak.noekeon.org/yes_this_is_keccak.html

 From the authors: "NIST's current proposal for SHA-3 is a subset of the 
 Keccak family; one can generate the test vectors for that proposal using 
 the Keccak reference code," and this shows that the [SHA-3] proposal 
 cannot contain internal changes to the algorithm.

 The process of setting the parameters is an important step in 
 standardization. NIST has done this, and the authors state that it has not 
 crippled the algorithm.

 I bet this revelation does not make it to Slashdot…

 Can we put this to bed now?

I have to take issue with this:

The security is not reduced by adding these suffixes, as this is only
restricting the input space compared to the original Keccak. If there
is no security problem on Keccak(M), there is no security problem on
Keccak(M|suffix), as the latter is included in the former.

I could equally argue, to take an extreme example:

The security is not reduced by adding these suffixes, as this is only
restricting the input space compared to the original Keccak. If there
is no security problem on Keccak(M), there is no security problem on
Keccak(preimages of Keccak(42)), as the latter is included in the
former.

In other words, I also have to make an argument about the nature of
the suffix and how it can't have been chosen s.t. it influences the
output in a useful way.

I suspect I should agree with the conclusion, but I can't agree with
the reasoning.


Re: [Cryptography] Sha3

2013-10-06 Thread Christoph Anton Mitterer
On Sat, 2013-10-05 at 12:18 -0700, james hughes wrote:
 and the authors state that
You know why people other than the authors do cryptanalysis on
algorithms? Simply because the authors may overlook something in the
analysis of their own algorithm.

So while the argument "the original authors said it's fine" sounds quite
convincing, it is absolutely not - at least not per se.
The authors may be wrong, or they may even have been bought by the NSA or
some other organisation.

Of course this doesn't mean that I have any indication that any of this
was the case... I just don't like this narrow-minded "they said it's
okay, thus we must kill off any discussion" argument, which has come up
several times now.


Cheers,
Chris.


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-06 Thread James A. Donald

On 2013-10-04 23:57, Phillip Hallam-Baker wrote:

Oh and it seems that someone has murdered the head of the IRG cyber
effort. I condemn it without qualification.


I endorse it without qualification.  The IRG are bad guys and need 
killing - all of them, every single one.


War is an honorable profession, and is in our nature.  The lion does no 
wrong to kill the deer, and the warrior does no wrong to fight in a just 
war, for we are still killer apes.


The problem with the NSA and NIST is not that they are doing warlike 
things, but that they are doing warlike things against their own people.



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-06 Thread John Kelsey
One thing that seems clear to me:  When you talk about algorithm flexibility in 
a protocol or product, most people think you are talking about the ability to 
add algorithms.  Really, you are talking more about the ability to *remove* 
algorithms.  We still have stuff using MD5 and RC4 (and we'll probably have 
stuff using Dual EC DRBG years from now) because, while our standards have lots 
of options and it's usually easy to add new ones, it's very hard to take any 
away.  
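
As a minimal sketch of what making removal easy might look like in an
implementation (all names here are my own illustration, not from any
particular standard or library):

    # Hypothetical negotiation table where disabling an algorithm is a
    # one-line configuration change, enforced at negotiation time.
    FORBIDDEN = {"md5", "rc4", "dual-ec-drbg"}   # removed; never negotiable
    SUPPORTED = ["aes128-gcm", "chacha20-poly1305", "sha3-256"]

    def negotiate(peer_offer):
        """Pick the first mutually acceptable algorithm, refusing
        anything removed rather than silently falling back to it."""
        for alg in peer_offer:
            if alg in FORBIDDEN:
                continue  # never accept a removed algorithm, even if offered
            if alg in SUPPORTED:
                return alg
        raise ValueError("no acceptable algorithm in peer's offer")

    # A peer still offering RC4 first gets a modern choice instead:
    assert negotiate(["rc4", "aes128-gcm"]) == "aes128-gcm"

The hard part, of course, is not the code - it's getting every deployed
implementation to move an entry into the forbidden set.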

--John


Re: [Cryptography] System level security in low end environments

2013-10-06 Thread Jerry Leichter
On Oct 5, 2013, at 2:00 PM, John Gilmore wrote:
 b.  There are low-end environments where performance really does
 matter.  Those often have rather different properties than other
 environments--for example, RAM or ROM (for program code and S-boxes)
 may be at a premium.
 
 Such environments are getting very rare these days.  For example, an
 electrical engineer friend of mine was recently working on designing a
 cheap aimable mirror, to be deployed by the thousands to aim sunlight
 at a collector.  He discovered that connectors and wires are more
 expensive than processor chips these days!...
He had a big advantage:  He had access to power, since the system has to run 
the motors (which probably require an order of magnitude or more power than all 
his electronics).  These days, the limits on many devices are expressible in 
watts/compute, for some measure of compute.  But less often noticed (because 
most designers are handed a fixed device and have to run with it) is that 
dynamic RAM also draws power, even if you aren't doing any computation.  
(Apple, at least, appears to have become very aware of this early on, and 
designs its phones to make do with what most people would consider very small 
amounts of RAM - though perhaps not those of us who grew up in 
16-bit-processor days. :-)  Some other makers didn't include this 
consideration in their designs, built with larger RAMs - sometimes even 
advertising that as a plus - and paid for it in reduced battery life.

A couple of years back, I listened to a talk about where the next generation of 
wireless communications will be.  Historically, every so many years, we move up 
a decade (as in a factor of 10) in the frequencies we use to build our 
communications devices.  We're approaching the final such move, to the edge of 
the terahertz range.  (Another factor of 10 gets you to stuff that's more like 
infrared than radio - useful, but in very different ways.)  What's of course 
great about moving to higher frequencies is that you get much more bandwidth - 
there's nine times as much bandwidth between 10GHz and 100GHz as there is from 
DC up to 10GHz.  And the power required to transmit at a given bit rate goes 
down as the bandwidth goes up; further, since near-THz radiation is highly 
directional, you're not spewing it out over a sphere - it goes pretty much only 
where it's needed.  So devices operating in the near-THz range will require 
really tiny amounts of power.  They will also be very small, as the wavelengths 
are comparable to the size of a chip.  In fact, the talk showed pictures of 
classic antenna geometries - dipoles, Yagis - etched directly onto chips.
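
A back-of-the-envelope way to see the power claim is the Shannon capacity
formula (my own illustration, not from the talk).  For a target bit rate C
over bandwidth B, with noise power spectral density N_0 and received power
P:

    C = B \log_2\!\left(1 + \frac{P}{N_0 B}\right)
    \quad\Longrightarrow\quad
    P = N_0 B \left(2^{C/B} - 1\right)

For fixed C, the required P decreases monotonically as B grows, approaching
the limit N_0 C \ln 2 - so more bandwidth really does buy a lower power
budget at the same bit rate.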

Near-THz frequencies are highly directional, so you need active tracking - but 
the computes to do that can go on chip along with the antennas they control.  
You'd guess (at least I did until I learned better) that such signals don't 
travel far, but in fact you have choices there:  There are bands in which air 
absorption is high, which is ideal for, say, a WiFi replacement (which would 
have some degree of inherent security as the signal would die off very 
rapidly).  There are other bands that have quite low air absorption.  None of 
these frequencies are likely to propagate far through many common building 
materials, however.  So we're looking at designs with tiny, extremely low 
powered, active repeaters all over the place.  (I visualize a little device you 
stick on a window that uses solar power to communicate with a box on a pole 
outside, and then internally to similar scattered devices to fill your house 
with an extremely high speed Internet connection.)

The talk I heard was from a university group doing engineering 
characterization - i.e., this stuff was out of the lab and at the point where 
you could construct samples easily; the job now was to come up with all the 
design rules and tradeoff tables and simulation techniques that you need before 
you can build commercial products.  They thought this might be 5G telephone 
technology.  Expect to see the first glimmers in, say, 5 years.

Anyway, this is (a) a confirmation of your point that computational elements 
are now so cheap that components like wires are worth replacing; but (b) unlike 
the case with the mirror controllers, we'll want to build these things in large 
numbers and scatter them all over the place, so they will have to make do with 
very small amounts of power.  (For the first inkling of what this is like, 
think of RFID chips - already out there in the billions.)

So, no, I don't think you can assume that efficiency considerations will go 
away.  If you want pervasive security in your pervasive compute architecture, 
you're going to have to figure out how to make it work when many of the nodes 
in your architecture are tiny and can't afford the power to run complicated 
algorithms.
-- Jerry


Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-06 Thread Nico Williams
On Fri, Oct 4, 2013 at 11:20 AM, Ray Dillinger b...@sonic.net wrote:
 So, it seems that instead of AES256(key) the cipher in practice should be
 AES256(SHA256(key)).

More like: use a KDF and separate keys (obtained by applying a KDF to
a root key) for separate but related purposes.

For example, if you have a full-duplex pipe with a single pre-shared
secret key then: a) you should want separate keys for each direction
(so you don't need a direction flag in the messages to deal with
reflection attacks), b) you should derive a new set of keys for each
connection if there are multiple connections between the same two
peers.  And if you're using an AEAD-by-generic-composition cipher mode
then you'll want separate keys for data authentication vs. data
encryption.

The KDF might well be SHA256, but doesn't have to be.  Depending on
characteristics of the original key you might need a more complex KDF
(e.g., a PBKDF if the original is a human-memorable password).  This
(and various other details about accepted KDF technology that I'm
eliding) is the reason that you should want to think of a KDF rather
than a hash function.
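
A minimal sketch of this kind of purpose-separated derivation, using HKDF
from the pyca/cryptography library (the key labels and the helper are
hypothetical, chosen only for illustration):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def derive(root_key: bytes, label: bytes, length: int = 32) -> bytes:
        # One purpose-bound key per call; distinct 'info' labels make the
        # derived keys independent even though they share one root key.
        return HKDF(algorithm=hashes.SHA256(), length=length,
                    salt=None, info=label).derive(root_key)

    root = os.urandom(32)  # stand-in for the pre-shared secret key

    # Separate keys per direction (no reflection attacks), and per purpose
    # (AEAD by generic composition wants distinct encryption and MAC keys).
    k_c2s_enc = derive(root, b"client->server encrypt")
    k_c2s_mac = derive(root, b"client->server authenticate")
    k_s2c_enc = derive(root, b"server->client encrypt")
    k_s2c_mac = derive(root, b"server->client authenticate")

Note that moving to a cipher with a larger key is just a different length
argument here, which is exactly the modularity point below.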

Suppose some day you want to switch to a cipher with a different key
size.  If all you have to do is tell the KDF how large the key is,
then it's easy, but if you have to change the KDF along with the
cipher then you have more work to do, work that might or might not be
easy.  Being able to treat the protocol elements as modular has
significant advantages - and some pitfalls - over more monolithic
constructions.

Nico
--