Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-05 Thread Ray Dillinger

On 10/03/2013 06:59 PM, Watson Ladd wrote:

On Thu, Oct 3, 2013 at 3:25 PM, leich...@lrw.com wrote:


On Oct 3, 2013, at 12:21 PM, Jerry Leichter leich...@lrw.com wrote:

As *practical attacks today*, these are of no interest - related-key attacks only apply in rather unrealistic scenarios, even a 2^119 strength is way beyond any realistic attack, and no one would use a reduced-round version of AES-256.



Expanding a bit on what I said:  Ideally, you'd like a cryptographic algorithm to let you build a pair of black boxes.  I put my data and a key into my black box and send you the output; you put the received data and the same key (or a paired key) into your black box; and out comes the data I sent you, fully secure and authenticated.  Unfortunately, we have no clue how to build such black boxes.  Even if the black boxes implement just the secrecy transformation for a stream of blocks (i.e., they are symmetric block ciphers), if there's a related-key attack, I'm in danger if I haven't chosen my keys carefully enough.


So, it seems that instead of AES256(key) the cipher in practice should be
AES256(SHA256(key)).

Is it not the case that (assuming SHA256 is not broken) this defines a cipher
effectively immune to the related-key attack?

Bear


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] encoding formats should not be committee'ised

2013-10-05 Thread Ray Dillinger

On 10/04/2013 01:23 AM, James A. Donald wrote:

On 2013-10-04 09:33, Phillip Hallam-Baker wrote:

The design of WSDL and SOAP is entirely due to the need to impedance match COM 
to HTTP.


That is fairly horrifying, as COM was designed for a single-threaded 
environment, and becomes an incomprehensible and extraordinarily inefficient 
security hole in a multi-threaded environment.


Well, yes, as a matter of fact DCOM was always incomprehensible
and extraordinarily inefficient.  However, it wasn't so much of
a security hole in the remotely crashable bug sense.  It made
session management into something of a difficult problem though.

Bear
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Sha3 and selecting algorithms for speed

2013-10-05 Thread John Kelsey
Most applications of crypto shouldn't care much about performance of the 
symmetric crypto, as that's never the thing that matters for slowing things 
down.  But performance continues to matter in competitions and algorithm 
selection for at least three reasons:

a.  We can measure performance, whereas security is very hard to measure.  
There are a gazillion ways to measure performance, but each one gives you an 
actual set of numbers.  Deciding whether JH or Grøstl is more likely to fall to 
cryptanalytic attack in its lifetime is an exercise in reading lots of papers, 
extrapolating, and reading tea leaves.  (A toy measurement sketch follows after 
point c.)

b.  There are low-end environments where performance really does matter.  Those 
often have rather different properties than other environments--for example, 
RAM or ROM (for program code and S-boxes) may be at a premium.  

c.  There are environments where someone is doing a whole lot of symmetric 
crypto at once--managing the crypto for lots of different connections, say.  In 
that case, your symmetric algorithm's speed may also have a practical impact.  
(Though it's still likely to be swamped by your public key algorithms.)   
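Re point (a): a toy sketch of the kind of measurement that is easy to produce, assuming Python's hashlib (which exposes SHA-2, and SHA-3 on Python 3.6+); a real comparison would control for message sizes, platforms, and implementation quality rather than one run on one machine.

import hashlib
import time

def throughput_mb_s(name: str, size: int = 1 << 24) -> float:
    # Hash `size` bytes of zeros once and report megabytes per second.
    data = bytes(size)
    h = hashlib.new(name)
    start = time.perf_counter()
    h.update(data)
    h.digest()
    return size / (time.perf_counter() - start) / 1e6

for name in ("md5", "sha1", "sha256", "sha512", "sha3_256"):
    print(f"{name:9s} {throughput_mb_s(name):8.1f} MB/s")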

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-05 Thread John Kelsey
On Oct 4, 2013, at 10:10 AM, Phillip Hallam-Baker hal...@gmail.com wrote:
...
 Dobbertin demonstrated a birthday attack on MD5 back in 1995, but it had no 
 impact on the security of certificates issued using MD5 until the attack was 
 dramatically improved and the second pre-image attack became feasible.

Just a couple nitpicks: 

a.  Dobbertin wasn't doing a birthday (brute force collision) attack, but 
rather a collision attack from a chosen IV.  

b.  Preimages with MD5 still are not practical.  What is practical is using the 
very efficient modern collision attacks to do a kind of herding attack, where 
you commit to one hash and later get some choice about which message gives that 
hash.  
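(To keep the terminology straight: the generic work factors for an n-bit hash are roughly 2^(n/2) for collisions and 2^n for preimages, and the modern MD5 collision attacks referred to above run far below the 2^64 birthday bound, which is what makes the herding-style tricks affordable. A small sketch of those generic baselines:)

def generic_bounds(n_bits: int) -> dict:
    # Brute-force baselines only; structural attacks (like those on MD5)
    # can be dramatically cheaper than these numbers.
    return {
        "collision (birthday)": 2 ** (n_bits // 2),
        "second preimage": 2 ** n_bits,
        "preimage": 2 ** n_bits,
    }

print(generic_bounds(128))   # MD5 output size
print(generic_bounds(256))   # SHA-256 output size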

...
 Proofs are good for getting tenure. They produce papers that are very 
 citable. 

There are certainly papers whose only practical importance is getting a smart 
cryptographer tenure somewhere, and many of those involve proofs.  But there's 
also a lot of value in being able to look at a moderately complicated thing, 
like a hash function construction or a block cipher chaining mode, and show 
that the only way anything can go wrong with that construction is if some 
underlying cryptographic object has a flaw.  Smart people have proposed 
chaining modes that could be broken even when used with a strong block cipher.  
You can hope that security proofs will keep us from doing that.  

Now, sometimes the proofs are wrong, and almost always, they involve a lot of 
simplification of reality (like most proofs aren't going to take low-entropy 
RNG outputs into account).  But they still seem pretty valuable to me for 
real-world things.  Among other things, they give you a completely different 
way of looking at the security of a real-world thing, with different people 
looking over the proof and trying to attack things.  

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] [tor-talk] Guardian Tor article

2013-10-05 Thread grarpamp
Some have said...

 this [Snowden meta arena] has been a subject of discussion on
 the [various] lists as well

 Congrats, torproject :-D

 Tor Stinks means you're doing it right; good job Tor devs :)

 good news everybody; defense in depth is effective and practical!

Yes, fine work all hands, everyone have a round at their favorite
pub/equivalent tonight.


 Of course, this is also from 2007. It's been a long time since then.

Yet whether from 2007 or last week... when Monday rolls around, we
must channel all this joy and get back to work. For the risks and
attackers that we all face are real, motivated, well funded, and
do not play fair by any set of rules. They do not stop and neither
can we. Wins that do not result in elimination from the game are
but temporary gains. We must always be better... train, practice,
discipline, and enter ourselves into every race... leaving only a
continuous cloud of dust behind for our adversaries to choke on.

Till Monday, I got this round :)
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Sha3

2013-10-05 Thread Dan Kaminsky
Because not being fast enough means you don't ship.  You don't ship, you
didn't secure anything.

Performance will in fact trump security.  This is the empirical reality.
 There's some budget for performance loss. But we have lots and lots of
slow functions. Fast is the game.

(Now, whether my theory that we stuck with MD5 over SHA1 because variable
field lengths are harder to parse in C -- that's an open question to say
the least.)

On Tuesday, October 1, 2013, Ray Dillinger wrote:

 What I don't understand here is why the process of selecting a standard
 algorithm for cryptographic primitives is so highly focused on speed.

 We have machines that are fast enough now that while speed isn't a non-issue,
 it is no longer nearly as important as the process is giving it precedence
 for.

 Our biggest problem now is security,  not speed. I believe that it's a bit
 silly to aim for a minimum acceptable security achievable within the
 context of speed while experience shows that each new class of attacks is
 usually first seen against some limited form of the cipher or found to be
 effective only if the cipher is not carried out to a longer process.



  Original message 
 From: John Kelsey crypto@gmail.com
 Date: 09/30/2013 17:24 (GMT-08:00)
 To: cryptography@metzdowd.com List cryptography@metzdowd.com

 Subject: [Cryptography] Sha3


 If you want to understand what's going on wrt SHA3, you might want to look
 at the nist website, where we have all the slide presentations we have been
 giving over the last six months detailing our plans.  There is a lively
 discussion going on at the hash forum on the topic.

 This doesn't make as good a story as the new sha3 being some hell spawn
 cooked up in a basement at Fort Meade, but it does have the advantage that
 it has some connection to reality.

 You might also want to look at what the Keccak designers said about what
 the capacities should be, to us (they put their slides up) and later to
 various crypto conferences.

 Or not.

 --John
 ___
 The cryptography mailing list
 cryptography@metzdowd.com
 http://www.metzdowd.com/mailman/listinfo/cryptography


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Sha3

2013-10-05 Thread Jerry Leichter
On Oct 1, 2013, at 5:34 AM, Ray Dillinger b...@sonic.net wrote:
 What I don't understand here is why the process of selecting a standard 
 algorithm for cryptographic primitives is so highly focused on speed. 
If you're going to choose a single standard cryptographic algorithm, you have 
to consider all the places it will be used.  These range from very busy front 
ends - where people to this day complain (perhaps with little justification, 
but they believe that for them it's a problem) that doing an RSA operation per 
incoming connection is too expensive (and larger keys will only make it worse), 
to phones (where power requirements are more of an issue than raw speed) to 
various embedded devices (where people still use cheap devices because every 
penny counts) to all kinds of devices that have to run for a long time off a 
local battery or even off of microwatts of power transferred to an unpowered 
device by a reader.
 
 We have machines that are fast enough now that while speed isn't a non-issue, 
 it is no longer nearly as important as the process is giving it precedence 
 for.
We do have such machines, but we also have - and will have, for the foreseeable 
future - machines for which this is *not* the case.

Deciding on where to draw the line and say I don't care if you can support 
this algorithm in a sensor designed to be put in a capsule and swallowed to 
transmit pictures of the GI tract for medical analysis is not a scientific 
question; it's a policy question.

 Our biggest problem now is security,  not speed. I believe that it's a bit 
 silly to aim for a minimum acceptable security achievable within the context 
 of speed while experience shows that each new class of attacks is usually 
 first seen against some limited form of the cipher or found to be effective 
 only if the cipher is not carried out to a longer process.  
The only problem with this argument is that the biggest problem is hard to 
pin down.  There's little evidence that the symmetric algorithms we have today 
are significant problems.  There is some evidence that some of the asymmetric 
algorithms may have problems due to key size, or deliberate subversion.  Fixing 
the first of these does induce significant costs; fixing the second first of 
all requires some knowledge of the nature of the subversion.  But beyond all 
this the biggest problems we've seen have to do with other components, like 
random number generators, protocols, infiltration of trusted systems, and so 
on.  None of these is amenable to defense by removing constraints on 
performance.  (The standardized random number generator that ignored 
performance to be really secure turned out to be anything but!)

We're actually moving in an interesting direction.  At one time, the cost of 
decent crypto algorithms was high enough to be an issue for most hardware.  DES 
at the speed of the original 10Mb/sec Ethernet was a significant engineering 
accomplishment!  These days, even the lowest end traditional computer has 
plenty of spare CPU to run even fairly expensive algorithms - but at the same 
time we're pushing more and more into a world of tiny, low-powered machines 
everywhere.  The ratio of speed and supportable power consumption and memory 
between the average large machine and the average tiny machine is wider 
than it's ever been.  At the low end, the exposure is different:  An attacker 
typically has to be physically close to even talk to the device, there are only 
a small number of communications partners, and any given device has relatively 
little information within it.  Perhaps a lower level of security is appropriate 
in such situations.  (Of course, as we've seen with SCADA systems, there's a 
temptation to just put these things directly on the 
Internet - in which case all these assumptions fail.  A higher-level problem:  
If you take this approach, you need to make sure that the devices are only 
accessible through a gateway with sufficient power to run stronger algorithms.  
How do you do that?)

So perhaps the assumption that needs to be reconsidered is that we can design a 
single algorithm suitable across the entire spectrum.  Currently we have 
SHA-128 and SHA-256, but exactly why one should choose one or the other has 
never been clear - SHA-256 is somewhat more expensive, but I can't think of any 
examples where SHA-128 would be practical but SHA-256 would not.  In practice, 
when CPU is thought to be an issue (rightly or wrongly), people have gone with 
RC4 - standards be damned.

It is worth noting that NSA seems to produce suites of algorithms optimized for 
particular uses and targeted for different levels of security.  Maybe it's time 
for a similar approach in public standards.

-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] encoding formats should not be committee'ised

2013-10-05 Thread Jerry Leichter
On Oct 3, 2013, at 7:33 PM, Phillip Hallam-Baker hal...@gmail.com wrote:
 XML was not intended to be easy to read, it was designed to be less painful 
 to work with than SGML, that is all
More to the point, it was designed to be a *markup* format.  The markup is 
metadata describing various semantic attributes of the data.  If you mark up a 
document, typically almost all the bytes are data, not metadata!

TeX and the x-roff's are markup formats, though at a low level.  LaTeX moves 
to a higher level.  The markup commands in TeX or sroff or LaTeX documents 
are typically a couple of percent of the entire file.  You can typically read 
the content, simply ignoring the markup, with little trouble.  In fact, there 
are programs around at least for TeX and LaTeX that strip out all the markup so 
that you can read the content as just plain text.  You can typically get the 
gist with little trouble.

If you look at what XML actually ended up being used for, in many cases nearly 
the entire damn document is ... markup!  The data being marked up becomes 
essentially vestigial.  Strip out the XML and nothing is left.  In and of 
itself, there may be nothing wrong with this.  But it's why I object to the use 
of markup language to describe many contemporary uses of XML.  It leads you 
to think you're getting something very different from what you actually do get. 
 (The XML world has a habit of using words in unexpected ways.  I had the 
damnedest time understanding much of the writing emanating from this world 
until I realized that when the XML world says semantics, you should read it as 
syntax.  That key unlocks many otherwise-mysterious statements.)
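(A contrived illustration of that markup-to-content ratio, with made-up documents; the numbers only show the direction of the difference.)

latex_doc = r"The quick brown fox jumps over the \emph{lazy} dog."
xml_doc = ("<sentence><subject>quick brown fox</subject>"
           "<verb>jumps over</verb><object>the lazy dog</object></sentence>")

def markup_fraction(doc: str, payload: str) -> float:
    # Fraction of the document's bytes that are markup rather than content.
    return 1 - len(payload) / len(doc)

print(markup_fraction(latex_doc, "The quick brown fox jumps over the lazy dog."))
# ~0.14 for this tiny example, and it shrinks as real documents grow
print(markup_fraction(xml_doc, "quick brown fox jumps over the lazy dog"))
# ~0.64: most of the bytes are markup, and it grows as the schema gets richer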

-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-05 Thread Phillip Hallam-Baker
On Fri, Oct 4, 2013 at 10:23 AM, John Kelsey crypto@gmail.com wrote:

 On Oct 4, 2013, at 10:10 AM, Phillip Hallam-Baker hal...@gmail.com
 wrote:
 ...
  Dobbertin demonstrated a birthday attack on MD5 back in 1995, but it had
 no impact on the security of certificates issued using MD5 until the attack
 was dramatically improved and the second pre-image attack became feasible.

 Just a couple nitpicks:

 a.  Dobbertin wasn't doing a birthday (brute force collision) attack, but
 rather a collision attack from a chosen IV.


Well, if we are going to get picky: yes, it was a collision attack, but the
paper he circulated in 1995 went beyond a collision from a known IV; he had
two messages that resulted in the same output when fed to a version of MD5
in which one of the constants had been modified in one bit position.



 b.  Preimages with MD5 still are not practical.  What is practical is
 using the very efficient modern collision attacks to do a kind of herding
 attack, where you commit to one hash and later get some choice about which
 message gives that hash.


I find the preimage nomenclature unnecessarily confusing and have to look
up the distinction between first, second, and platform 9 3/4s each time I do
a paper.



 ...
  Proofs are good for getting tenure. They produce papers that are very
 citable.

 There are certainly papers whose only practical importance is getting a
 smart cryptographer tenure somewhere, and many of those involve proofs.
  But there's also a lot of value in being able to look at a moderately
 complicated thing, like a hash function construction or a block cipher
 chaining mode, and show that the only way anything can go wrong with that
 construction is if some underlying cryptographic object has a flaw.  Smart
 people have proposed chaining modes that could be broken even when used
 with a strong block cipher.  You can hope that security proofs will keep us
 from doing that.


Yes, that is what I would use them for. But I note that a very large
fraction of the field has studied formal methods, including myself, and few
of us find them to be quite as useful as the academics think them to be.

The oracle model is informative but does not necessarily need to be reduced
to symbolic logic to make a point.


 Now, sometimes the proofs are wrong, and almost always, they involve a lot
 of simplification of reality (like most proofs aren't going to take
 low-entropy RNG outputs into account).  But they still seem pretty valuable
 to me for real-world things.  Among other things, they give you a
 completely different way of looking at the security of a real-world thing,
 with different people looking over the proof and trying to attack things.


I think the main value of formal methods turns out to be pedagogical. When
you teach students formal methods they quickly discover that the best way
to deliver a proof is to refine out every bit of crud possible before
starting and arrive at an appropriate level of abstraction.

But oddly enough I am currently working on a paper that presents a
formalized approach.


-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Sha3

2013-10-05 Thread Phillip Hallam-Baker
On Fri, Oct 4, 2013 at 12:27 AM, David Johnston d...@deadhat.com wrote:

  On 10/1/2013 2:34 AM, Ray Dillinger wrote:

 What I don't understand here is why the process of selecting a standard
 algorithm for cryptographic primitives is so highly focused on speed. ~


 What makes you think Keccak is faster than the alternatives that were not
 selected? My implementations suggest otherwise.
 I thought the main motivation for selecting Keccak was Sponge good.


You mean Keccak is spongeworthy.


I do not accept the argument that the computational work factor should be
'balanced' in the way suggested.

The security of a system is almost always better measured by looking at the
work factor for breaking an individual message rather than the probability
that two messages might be generated in circumstances that cancel each
other out.

Given adequate cryptographic precautions (e.g. random serial numbers), a
certificate authority can still use MD5 with an acceptable level of
security even with the current attacks. They would be blithering idiots to
do so, of course, but Flame could have been prevented with certain
precautions.

If a hash has a 256 bit output I know that I cannot use it in a database if
the number of records approaches 2^128. But that isn't really a concern to
me. The reason I use a 256 bit hash is because I want a significant safety
margin on the pre-image work factor.

If I was really confident that the 2^128 work factor really is 2^128 then I
would be happy using a 128 bit hash for most designs. In fact in
PRISM-Proof Email I am currently using a 226 bit Subject Key Identifier
because I can encode that in BASE64 and the result is about the same length
as a PGP fingerprint. But I really do want that 2^256 work factor.

If Keccak was weakened in the manner proposed I would probably use the 512
bit version instead and truncate.
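(A minimal sketch of that truncate-and-encode idea, using SHA-512 as a stand-in for a 512-bit Keccak variant and a byte-aligned truncation; the exact 226-bit Subject Key Identifier encoding mentioned above is not reproduced here.)

import base64
import hashlib

def truncated_key_identifier(pubkey_der: bytes, bits: int = 224) -> str:
    # Hash with a 512-bit function, keep the leading `bits` bits
    # (byte-aligned here for simplicity), and Base64-encode the result.
    digest = hashlib.sha512(pubkey_der).digest()
    return base64.b64encode(digest[: bits // 8]).decode("ascii")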

-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Sha3

2013-10-05 Thread james hughes

On Oct 3, 2013, at 9:27 PM, David Johnston d...@deadhat.com wrote:

 On 10/1/2013 2:34 AM, Ray Dillinger wrote:
 What I don't understand here is why the process of selecting a standard 
 algorithm for cryptographic primitives is so highly focused on speed. ~
 
 What makes you think Keccak is faster than the alternatives that were not 
 selected? My implementations suggest otherwise.
 I thought the main motivation for selecting Keccak was Sponge good.

I agree: Sponge Good, Merkle–Damgård Bad. Simple enough. 

I believe this thread is not about the choice of Keccak for SHA-3; it is about 
NIST's changes to Keccak for SHA-3. 

[Instead of pontificating at length based on conjecture and conspiracy theories 
and smearing reputations based on nothing other than hot air] Someone on this 
list must know the authors of Keccak. Why not ask them? They are the ones who 
know the most about the algorithm, why the parameters are what they are, and 
what the changes mean for their vision. 

Here is my question for them: In light of the current situation, what is your 
current opinion of NIST's changes in going from Keccak as you specified it to 
SHA-3 as NIST standardized it? 

If the Keccak authors are OK with the changes, who are we to argue about these 
changes? 

If the Keccak authors don't like the changes, given the situation NIST is in, I 
bet NIST will have no recourse but to re-open the SHA3 discussion.

Jim

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] check-summed keys in secret ciphers?

2013-10-05 Thread Phillip Hallam-Baker
On Mon, Sep 30, 2013 at 7:44 PM, arxlight arxli...@arx.li wrote:


 Just to close the circle on this:

 The Iranians used hundreds of carpet weavers (mostly women) to
 reconstruct a good portion of the shredded documents which they
 published (and I think continue to publish) eventually reaching 77
 volumes of printed material in a series wonderfully named Documents
 from the U.S. Espionage Den.

 They did a remarkably good job, considering:

 http://upload.wikimedia.org/wikipedia/commons/6/68/Espionage_den03_14.png


There is a back story to that. One of the reasons that Ayatollah Khomeini
knew about the CIA and embassy involvement in the '53 coup was that he was
one of the hired thugs who raised the demonstrations that toppled Mossadegh.

So the invasion of the embassy was in part motivated by a desire to burn
any evidence of that perfidy on the regime's part. It was also used to
obtain and likely forge evidence against opponents inside the regime. The
files were used as a pretext for the murder of many of the leftists who
were more moderate and Western in their outlook.


On the cipher checksum operation, the construction that would immediately
occur to me would be the following:

k1 = R(s)

kv = k1 + E(k1, kd)   // the visible key sent over the wire; kd is a device key

This approach allows the device to verify that the key is intended for that
device. A captured device cannot be used to decrypt arbitrary traffic even
if the visible key is known. The attacker has to reverse engineer the
device to make use of it, a task that is likely to take months if not
years.
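(One plausible reading of that construction, sketched with the pyca/cryptography package; the original is ambiguous about whether "+" is concatenation and about the argument order of E, so treat this strictly as an illustration of kv = k1 || E_kd(k1), with kd a 128/192/256-bit AES device key.)

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def wrap_key(device_key: bytes) -> bytes:
    # k1 = R(s): a fresh random key; kv = k1 || E_kd(k1) is what goes on the wire.
    k1 = os.urandom(16)
    enc = Cipher(algorithms.AES(device_key), modes.ECB()).encryptor()
    return k1 + enc.update(k1) + enc.finalize()

def unwrap_key(device_key: bytes, kv: bytes) -> bytes:
    # The device recomputes E_kd(k1) and rejects keys not intended for it.
    k1, tag = kv[:16], kv[16:]
    enc = Cipher(algorithms.AES(device_key), modes.ECB()).encryptor()
    if enc.update(k1) + enc.finalize() != tag:
        raise ValueError("key not intended for this device")
    return k1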

NATO likely does an audit of every cryptographic device every few months
and destroys the entire set if a single one ever goes missing.

-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] encoding formats should not be committee'ized

2013-10-05 Thread ianG

On 4/10/13 11:17 AM, Peter Gutmann wrote:


Trying to get back on track, I think any attempt at TLS 2 is doomed.  We've
already gone through, what, about a million messages bikeshedding over the
encoding format and have barely started on the crypto.  Can you imagine any
two people on this list agreeing on what crypto mechanism to use?  Or whether
identity-hiding (at the expense of complexity/security) should trump
simplicity/security (at the expense of exposing identity information)?



Au contraire!  I think what we have shown is that the elements in 
dispute must be found in the competition.  Not specified beforehand.


Every proposal must include its own encoding, its own crypto suite(s), 
its own identity-hiding, and dollops and dollops of simplicity.


Let the games begin!

iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] encoding formats should not be committee'ized

2013-10-05 Thread ianG

On 2/10/13 00:16 AM, James A. Donald wrote:

On 2013-10-02 05:18, Jerry Leichter wrote:

To be blunt, you have no idea what you're talking about. I worked at
Google until a short time ago; Ben Laurie still does. Both of us have
written, submitted, and reviewed substantial amounts of code in the
Google code base. Do you really want to continue to argue with us
about what the Google Style Guide is actually understood within Google?


The google style guide, among other things, prohibits multiple direct
inheritance and operator overloading, except where stl makes you do
operator overloading.



I do similar.  I prohibit reflection and serialization in java.  In C I 
used to prohibit malloc().



Thus it certainly prohibits too-clever code.  The only debatable
question is whether protobufs, and much of the rest of the old codebase,
is too-clever code - and it is certainly a lot more clever than operator
overloading.


protobufs I would see as just like any external dependency -- trouble, 
and not good for security.  Like say an external logger or IPC or crypto 
library.  It would be really nice to eliminate these things but often 
enough one can't.


On the other hand, if you are not so fussed about security, then it is 
probably far better to use protobufs to stop the relearning cycle and 
reduce the incompatibility bugs across a large group of developers.




Such prohibitions also would prohibit the standard template library,
except that that is also grandfathered in, and prohibits atl and wtl.

The style guide is designed for an average and typical programmer who is
not as smart as the early google programmers.   If you prohibit anything
like wtl, you prohibit the best.


Right.  Real world is that an org has to call on the talents of a 
variety of programmers, high-end *and* aspirational, both.  So one tends 
to prohibit things that complicate the code for the bulk, and one tends 
to encourage tools that assist the majority.


I'd probably encourage things like protobufs for google.  They have a 
lot of programmers, and that tends to drive the equation more than other 
considerations.




Prohibiting programmers from using multiple inheritance is like the BBC
prohibiting the word literally instead of mandating that it be used
correctly.  It implies that the BBC does not trust its speakers to
understand the correct use of literally, and google does not trust its
programmers to understand the correct use of multiple direct inheritance.



I often wish I had some form of static multiple inheritance in Java...



iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Sha3

2013-10-05 Thread John Kelsey
http://keccak.noekeon.org/yes_this_is_keccak.html

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

[Cryptography] A stealth redo on TLS with new encoding

2013-10-05 Thread Phillip Hallam-Baker
I think redoing TLS just to change the encoding format is to tilt at
windmills. Same for HTTP (not a fan of CORE over DTLS), same for PKIX.

But doing all three at once would actually make a lot of sense and I can
see something like that actually happen. But only if the incremental cost
of each change is negligible.


Web Services are moving towards JSON syntax. Other than legacy support I
can see no reason to use XML right now and the only reason to use
Assanine.1 other than legacy is to avoid Base64 encoding byte blobs and
escaping strings.

Adding these two features to JSON is very easy and does not require a whole
new encoding format: just add additional code points to the JSON encoding
for length-encoded binary blobs. This approach means minimal changes to
JSON encoder code and allows a single decoder to be used for traditional
and binary forms:

https://datatracker.ietf.org/doc/draft-hallambaker-jsonbcd/
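(A toy sketch of what extending a JSON decoder that way can look like; the tag byte and length prefix below are invented for illustration and are not the code points defined in the draft.)

import json
import struct

BINARY_TAG = 0xB0   # invented for this sketch; not the draft's code point

def encode_value(value) -> bytes:
    # Binary blobs get a tag byte plus a 4-byte big-endian length; everything
    # else falls through to the ordinary JSON encoder.
    if isinstance(value, (bytes, bytearray)):
        return bytes([BINARY_TAG]) + struct.pack(">I", len(value)) + bytes(value)
    return json.dumps(value).encode("utf-8")

def decode_value(buf: bytes):
    if buf and buf[0] == BINARY_TAG:
        (length,) = struct.unpack(">I", buf[1:5])
        return buf[5:5 + length]
    return json.loads(buf.decode("utf-8"))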


Web services are typically layered over HTTP and there are a few facilities
that the HTTP layer provides that are useful in a Web Service. In
particular it is very convenient to allow multiple Web Services to share
the same IP address and port. Anyone who has used the Web Server in .NET
will know what I mean here.

Web Services use some features of HTTP but not very many. It would be very
convenient if we could replace the HTTP layer with something that provides
just the functionality we need but layers over UDP or TCP directly and uses
JSON-B encoding.


One of the features I use HTTP for is to carry authentication information
on the Web Service requests and responses. I have a Web Service to do a key
exchange using SSL for privacy (it's a pro-tem solution though; I will add in
a PFS exchange at some point).

http://tools.ietf.org/html/draft-hallambaker-wsconnect-04

The connect protocol produces a Kerberos like ticket which is then used to
authenticate subsequent HTTP messages using a MAC.

http://tools.ietf.org/html/draft-hallambaker-httpsession-01
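(A rough sketch of MAC-ing a request under a ticketed session key; the header names and the exact bytes covered by the MAC are invented here, not taken from the draft.)

import hashlib
import hmac

def authenticate_request(ticket_id: str, session_key: bytes,
                         method: str, uri: str, body: bytes) -> dict:
    # Bind the method, URI, and body to the session key agreed at connect time.
    to_mac = b"\n".join([method.encode(), uri.encode(), body])
    mac = hmac.new(session_key, to_mac, hashlib.sha256).hexdigest()
    return {"Session-Ticket": ticket_id, "Session-MAC": mac}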


In my view, authentication at the transport layer is not a substitute for
authentication at the application layer. I want server authentication and
confidentiality at least at transport layer and in addition I want mutual
authentication at the application layer.

For efficiency, the authentication at the application layer uses symmetric
key (unless non-repudiation is required in which case digital signatures
would be indicated but in addition to MAC, not as a replacement).

Once a symmetric key is agreed for authentication, the use of the key for
application layer authentication is reasonably obvious.

http://tools.ietf.org/html/draft-hallambaker-wsconnect-04


OK, so far the scheme I describe is three independent schemes that are all
designed to work inside the existing HTTP-TLS-PKIX framework and they
provide value within that framework. But as I observed earlier, it is quite
possible to kick the framework away and replace HTTP with a JSON-B based
presentation layer framing.

This is what I do in the UDP transport for omnibroker as that is intended
to be a replacement for the DNS client-server interface.


So in summary, yes it is quite possible that TLS could be superseded by
something else, but that something else is not going to look like TLS and
it will be the result of a desire to build systems that use a single
consistent encoding at all layers in the stack (above the packet/session
layer).

Trying to reduce the complexity of TLS is plausible but all of that
complexity was added for a reason and those same reasons will dictate
similar features in TLS/2.0. The way to make a system simpler is not to
make each of the modules simpler but to make the modules fit together more
simply. Reducing the complexity of HTTP is hard, reducing the complexity of
TLS is hard. Reducing the complexity of HTTP+TLS is actually easier.


That said, I just wrote a spec for doing PGP key signing in Assanine.1.
Because even though it is the stupidest encoding imaginable, we need to
have a PKI that is capable of expressing every assertion type that people
have found a need for. That means either we add the functionality of PKIX
to the PGP world or vice versa.

The PKIX folk have a vast legacy code base and zero interest in compromise,
many are completely wedged on ASN.1. The PGP code base is much less
embedded than PKIX and PGP folk are highly ideologically motivated to bring
privacy to the masses rather than the specific PGP code formats.

So I have to write my key endorsement message format in Assanine.1. If I
can stomach that then so can everyone else.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Sha3

2013-10-05 Thread radix42
Jerry Leichter wrote:
Currently we have SHA-128 and SHA-256, but exactly why one should choose one 
or the other has never been clear - SHA-256 is somewhat more expensive, but 
I can't think of any examples where SHA-128 would be practical but SHA-256 
would not.  In practice, when CPU is thought to be an issue (rightly or 
wrongly), people have gone with RC4 - standards be damned.

SHA-224/256 (there is no SHA-128) use 32-bit words; SHA-384/512 use 64-bit 
words. That difference is indeed a very big deal in embedded device 
applications. SHA-3 uses only 64-bit words, which will likely preclude it being 
used in most embedded devices for the foreseeable future. 

-David Mercer

David Mercer
Portland, OR

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] System level security in low end environments

2013-10-05 Thread John Gilmore
 b.  There are low-end environments where performance really does
 matter.  Those often have rather different properties than other
 environments--for example, RAM or ROM (for program code and S-boxes)
 may be at a premium.

Such environments are getting very rare these days.  For example, an
electrical engineer friend of mine was recently working on designing a
cheap aimable mirror, to be deployed by the thousands to aim sunlight
at a collector.  He discovered that connectors and wires are more
expensive than processor chips these days!  So he ended up deciding to
use a system-on-chip with a built-in radio that eliminated the need to
have a connector or a wire to each mirror.  (You can print the antenna
on the same printed circuit board that holds the chip and the
actuator.)

What dogs the security of our systems these days is *complexity*.  We
don't have great security primitives to just drop into place.  And the
ones we do have, have complicated tradeoffs that come to the fore
depending on how we compound them with other design elements (like
RNGs, protocols, radios, clocks, power supplies, threat models, etc).
This is invariant whether the system is low end or high end.

That radio controlled mirror can be taken over by a drive-by attacker
in a way that would take a lot more physical labor to mess up a
wire-controlled one.  And if the attack aimed two hundred mirrors at
something flammable, the attacker could easily start a dangerous fire
instead of making cheap solar energy.  (Denial of service is even
easier - just aim the mirrors in random directions and the power goes
away.  Then what security systems elsewhere were depending on that
power?  This might just be one cog in a larger attack.)  Some of the
security elements are entirely external to the design.  For example,
is the radio protocol one that's built into laptops by default, like
wifi or bluetooth?  Or into smartphones?  Or does it require custom
hardware?  If not, a teenager can more easily attack the mirrors --
and a corrupt government can infect millions of laptops and phones
with malware that will attack mirror arrays that they come near to.

For products that never get made in the millions, the design cost
(salaries and time) is a significant fraction of the final cost per
unit.  Therefore everybody designs unencrypted and unauthenticated
stuff, just because it's easy and predictable.

For example it's pretty easy to make the system-on-chip above send or
receive raw frames on the radio.  Harder to get it to send or receive
UDP packets (now it needs an IP address, ARP, DHCP, more storage, ...).
Much harder to get it to send or receive *authenticated* frames or UDP
packets (now it needs credentials; is it two-way authenticated, if so
it needs a way to be introduced to its system, etc).  Much harder
again to get it to send or receive *encrypted* frames or UDP packets
(now it needs keys too, and probably more state to avoid replays,
etc).  And how many EE's who could debug the simple frame sending
firmware and hardware, can debug a crypto protocol they've just
implemented (even making the dubious assumption that they compounded
the elements in a secure way and have just made a few stupid coding
mistakes)?

John

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-05 Thread Jerry Leichter
On Oct 4, 2013, at 12:20 PM, Ray Dillinger wrote:
 So, it seems that instead of AES256(key) the cipher in practice should be
 AES256(SHA256(key)).
 
 Is it not the case that (assuming SHA256 is not broken) this defines a cipher
 effectively immune to the related-key attack?

Yes, but think about how you would fit it into the question I raised:

- If this is the primitive black box that does a single block
  encryption, you've about doubled the cost and you've got this
  messy combined thing you probably won't want to call a primitive.
- If you say well, I'll take the overall key and replace it by
  its hash, you're defining a (probably good) protocol.  But
  once you're defining a protocol, you might as well just specify
  random keys and forget about the hash.

Pinning down where the primitive ends and the protocol begins is tricky and ultimately 
of little value.  The takeaway is that crypto algorithms have to be used with 
caution.  Even a perfect block cipher, if used in the most obvious way (ECB 
mode), reveals when it has been given identical inputs.  Which is why it's been 
argued that any encryption primitive (at some level) has to be probabilistic, 
so that identical inputs don't produce identical outputs.  (Note that this 
implies that the output must always be larger than the input!) Still, we have 
attainable models in which no semantic information about the input leaks (given 
random keys).  Related key attacks rely on a different model which has nothing 
much to do with practical usage but are obvious from a purely theoretical point 
of view:  OK, we've insulated ourselves from attacks via the plaintext input, 
how about the key?

More broadly there are plenty of attacks (probably including most of the 
related key attacks; I haven't looked closely enough to be sure) that are based 
on weaknesses in key scheduling.  If you're going to make a cryptographic hash 
function a fundamental part of your block cipher, why not use it to generate 
round keys?  The only reason I know of - and in practical terms it's not a 
trivial one - is the substantial performance hit.
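(A toy round-key schedule in that spirit; this is not AES's key schedule, just an illustration of deriving each round key with a hash so that related master keys stop producing related round keys.)

import hashlib

def hash_round_keys(master_key: bytes, rounds: int = 14) -> list:
    # Round key i = SHA-256(master_key || i), truncated to 128 bits.
    return [hashlib.sha256(master_key + bytes([i])).digest()[:16]
            for i in range(rounds)]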

-- Jerry



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Sha3

2013-10-05 Thread James A. Donald

On 2013-10-05 16:40, james hughes wrote:

Instead of pontificating at length based on conjecture and conspiracy
theories and smearing reputations based on nothing other than hot air

But there really is a conspiracy, which requires us to consider 
conjectures as serious risks, and people deserve to have their 
reputations smeared for the appearance of being in bed with that conspiracy.


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Sha3

2013-10-05 Thread Jerry Leichter
On Oct 5, 2013, at 11:54 AM, radi...@gmail.com wrote:
 Jerry Leichter wrote:
 Currently we have SHA-128 and SHA-256, but exactly why one should choose 
 one or the other has never been clear - SHA-256 is somewhat more 
 expensive, but I can't think of any examples where SHA-128 would be 
 practical but SHA-256 would not.  In practice, when CPU is thought to be an 
 issue (rightly or wrongly), people have gone with RC4 - standards be 
 damned.
 
 SHA-224/256 (there is no SHA-128) use 32-bit words, SHA-384/512 uses 64-bit 
 words. That difference is indeed a very big deal in embedded device 
 applications. SHA-3 uses only 64-bit words, which will likely preclude it 
 being used in most embedded devices for the foreseeable future. 
Oops - acronym confusion between brain and keyboard.  I meant to talk about 
AES-128 and AES-256.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Sha3

2013-10-05 Thread james hughes
On Oct 5, 2013, at 12:00 PM, John Kelsey crypto@gmail.com wrote:

 http://keccak.noekeon.org/yes_this_is_keccak.html

From the authors: NIST's current proposal for SHA-3 is a subset of the Keccak 
family; one can generate the test vectors for that proposal using the Keccak 
reference code, and this shows that [SHA-3] cannot contain internal changes to 
the algorithm. 

The process of setting the parameters is an important step in standardization. 
NIST has done this and the authors state that this has not crippled the 
algorithm. 

I bet this revelation does not make it to Slashdot… 

Can we put this to bed now? 
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-05 Thread james hughes

On Oct 2, 2013, at 7:46 AM, John Kelsey crypto@gmail.com wrote:

 Has anyone tried to systematically look at what has led to previous crypto 
 failures?  T

In the case we are in now, I don't think that it is actually crypto failures 
(RSA is still secure, but 1024 bit is not; 2048-bit DHE is still secure, but no 
one uses it; AES is secure, but not with an insecure key exchange) but standards 
failures. These protocol and/or implementation failures arise either because the 
standards committee said to the cryptographers prove it (the case of WEP), or 
because even when an algorithm is dead they refuse to deprecate it (the MD5 
certificate mess), or from just using bad RNGs (too many examples to cite). 

The antibodies in the standards committees need to read this and think about it 
really hard. 

 (1)  Overdesign against cryptanalysis (have lots of rounds)
 (2)  Overdesign in security parameters (support only high security levels, 
 use bigger than required RSA keys, etc.) 
 (3)  Don't accept anything without a proof reducing the security of the whole 
 thing down to something overdesigned in the sense of (1) or (2).

and (4) Assume algorithms fall faster than Moore's law and, in the standard, 
provide a sunset date.

I completely agree. 


<rhetoric>
The insane thing is that it is NOT the cryppies that are complaining about 
moving to RSA 2048 and 2048 bit DHE, it is the standards wonks that complain 
that a 3ms key exchange is excessive. 

Who is the CSO of the Internet? We have Vint Cerf, Bob Kahn, or Sir Tim, but 
what about security? Who is responsible for the security of eCommerce? Who will 
VISA turn to? It was NIST (effectively). Thank you, NSA: because of you, NIST now 
has lost most of its credibility. (Secrets are necessary, but many come to 
light over time. Was the probability of throwing NIST under the bus 
[http://en.wikipedia.org/wiki/Throw_under_the_bus] part of the challenge in 
finesse? Did NSA consider backing down when the Shumow, Ferguson presentation 
(which Schneier blogged about) came to light in 2007?).  

We have a mess. Who is going to lead? Can the current IETF Security Area step 
into the void? They have cryptographers on the Directorate list, but history 
has shown that they are not incredibly effective at implementing a 
cryptographic vision. One can easily argue that vision is rarely provided by a 
committee oversight committee. 
</rhetoric>


John: Thank you. These are absolutely the right criteria. 

Now what? 

Jim

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography