Re: Fixing SSL (was Re: Dutch Transport Card Broken)

2008-02-06 Thread John Levine
They can't be as anonymous as cash if the party being dealt with
can be identified.  And the party can be identified if the
transaction is online, real-time.  Even if other clues are erased,
there's still traffic analysis in this case.

If I show up at a store and pay cash for something every week, they
can still do traffic analysis on me (oh him, he's a regular
customer) unless I go out of my way to obscure my routine like asking
other people to buy stuff for me.

It's not clear to me what the object of this argument is.  Yes, the
harder you work, the more difficult you can make it for other people
to tie your transactions to you.  This shouldn't be news to anyone.

R's,
John


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Interesting editorial comment on security vs. privacy

2008-02-06 Thread dan

Udhay Shankar N writes:
-+-
 | http://www.claybennett.com/pages/security_fence.html
 | 


Earlier this week, I heard Dr. Donald Kerr, Principal
Deputy Director, ODNI, say that the greatest challenge
of the next (U.S.) administration would be a fundamental
re-thinking of the inter-relation of security & privacy.

--dan



Re: Gutmann Soundwave Therapy

2008-02-06 Thread James A. Donald

James A. Donald wrote:
 I have figured out a solution, which I may post here
 if you are interested.

Ian G wrote:
 I'm interested.  FTR, zooko and I worked on part of
 the problem, documented briefly here:
 http://www.webfunds.org/guide/sdp/index.html

I have posted "How to do VPNs right" at
http://jim.com/security/how_to_do_VPNs.html

It covers somewhat different ground to that which your
page covers, focusing primarily on the problem of
establishing the connection.

humans are not going to carry around large
strong secrets every time either end of the
connection restarts.  In fact they are not going
to transport large strong secrets any time ever,
which is the flaw in SSL and its successors such
as IPSec and DTLS

What humans are going to do, and what the user
interface must support, and the cryptography
somehow make secure, is set up a username and a
rather short password, and enter that password
on request - rather too easily enter it on
request without necessarily checking who they
are giving it to.  Our security has to work with
humans as they are, and make what humans are
naturally inclined to do secure, rather than try
to change what humans are naturally inclined to
do.

It covers the cryptography of packets only to the depth
needed to establish the required properties of sessions:
each packet within a session must have its own
unique IV (nonce), and each session must have
its own symmetric encryption secret and
authentication secret.  We have to have a new
session every client restart, every server
restart, and every 2^64 bytes.  At the beginning
of each new session, new strong secrets, large
truly random numbers, have to be negotiated for
symmetric encryption and authentication.
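The session properties listed above can be illustrated with a small sketch (hypothetical Python; the post specifies the required properties, not this code):

```python
import os
import struct

class Session:
    """Illustrative session state for the properties above: fresh,
    truly random secrets at every (re)start, a unique nonce (IV) per
    packet, and a rekey before 2**64 bytes have been sent."""
    REKEY_LIMIT = 2**64  # bytes per session, per the post

    def __init__(self):
        # New strong secrets, negotiated at the start of each session.
        self.enc_key = os.urandom(32)   # symmetric encryption secret
        self.mac_key = os.urandom(32)   # authentication secret
        self.seq = 0                    # per-packet counter
        self.bytes_sent = 0

    def next_nonce(self) -> bytes:
        # Unique IV per packet within the session: a plain counter,
        # never reused because seq only increases.
        nonce = struct.pack(">Q", self.seq)
        self.seq += 1
        return nonce

    def record_send(self, payload_len: int) -> bool:
        # True once the 2**64-byte limit forces a new session.
        self.bytes_sent += payload_len
        return self.bytes_sent >= self.REKEY_LIMIT
```

Restarting either end simply constructs a new `Session`, so no long-term strong secret ever has to be carried by the human.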

My page completely ignores the routing issue, another
hard problem which existing VPNs frequently do wrongly,
or not at all.  It presupposes the existence of good
random number sources.

It does not address the question of denial of service
attacks against the session establishment protocol,
though I have written that up elsewhere, and will
publish that shortly.



Re: Dutch Transport Card Broken

2008-02-06 Thread Nicolas Williams
On Sun, Feb 03, 2008 at 09:24:48PM +1000, James A. Donald wrote:
 Nicolas Williams wrote:
 What, specifically, are you proposing?
 
 I am still writing it up.
 
  Running the web over UDP?
 
 In a sense.
 
 That should have been done from the beginning, even before security 
 became a problem.  TCP is a poor fit to a transactional protocol, as the 
 gyrations with Keep-alive and its successors illustrate.

In the beginning most pages were simple enough that to speak of a
transactional protocol is almost an exaggeration.  Web technologies
grew organically.  Solutions to the various resulting problems will, I
bet, also grow organically.

A complete revamping is probably not in the cards.  But if one should be
then it should not surprise you that I'm all in favor of piercing
abstraction layers.  User authentication should happen at the
application layer, and session crypto should happen at the transport
layer, with everything cryptographically bound up.  In any case we
should re-use what we know works (e.g., ESP/AH for transport session
crypto, IKEv2/TLS/DTLS for key exchange, ...).

 In rough summary outline, what I propose is to introduce a distinction 
 between connections and streams, that a single long lasting connection 
 contains many transient streams.  This is equivalent to TCP in the case 
 that a single connection always contains exactly two streams, one in 
 each direction, and the two streams are created when the connection is 
 created and shut down when the connection is shut down, but the main 
 objective is to support usages that are not equivalent to TCP. This is 
 pretty much the same thing as T/TCP, except that a connection can have 
 a large shared secret associated with it to encrypt the streams.  For an 
 unencrypted connection, it can be spoof flooded the same way as T/TCP 
 can be spoof flooded, 

Sounds a bit like SCTP, with crypto thrown in.

   but the main design objective is to make 
 encryption efficient enough that one always encrypts everything.

I thought it was the latency caused by unnecessary round-trips and
expensive key exchange crypto that motivated your proposal.  The cost of
session crypto is probably not as noticeable as that of the latency of
key exchange and authentication.

Nico
-- 



Re: Gutmann Soundwave Therapy

2008-02-06 Thread Peter Gutmann
Guus Sliepen [EMAIL PROTECTED] writes:

Peter sent us his write-up via private email a few days before he posted
it to this list (which got it on Slashdot). I had little time to think about
the issues he mentioned before his write-up became public.

I should provide some background for the writeup: it started when someone sent
me a link to some VPN software they were using and asked whether it was
actually secure.  I looked at it, found that it was, well, pretty awful, and
told them so.

So they sent me a link to another VPN app.  I had a look at it and it was just
as bad.

By this time it'd turned into an ongoing discussion/attempt to track down some
sort of decent easy-to-use secure-VPN app.  The more we found, the more
discouraged I became.  Initially we'd tried to contact developers but didn't
get much (if any) response, so that towards the end (after getting to the n-th
broken VPN app), to quote the VAX assembler manual, "little sympathy was
extended".  After the initial writeup ended up on Slashdot I did a bit more
googling and found out that some of the problems had been pointed out by
others years before I noted them with no action from the application authors
to fix anything.  This, again, didn't inspire much confidence.

In terms of problems, it wasn't just the homebrew crypto mechanisms, there
were also numerous problems with careless implementations.  One thing that was
very common was to find very little error- or sanity-checking.  Function
return calls weren't checked, critical errors like crypto failures were logged
but the app continued anyway (!!), operations were assumed to have succeeded
at all times, even minor things like checking for an error return with a check
for '== -1' when the function could also fail with a return status of zero (so
only some failures were caught and the code could continue with uninitialised
crypto), the list just went on and on.

If tinc 2.0 ever comes out (unfortunately I don't have a lot of time to
work on it these days), it will probably use the GnuTLS library and
authenticate and connect daemons with TLS. For performance reasons, you want
to tunnel network packets via UDP instead of TCP, so hopefully there will be
a working DTLS implementation by then.

I think OpenVPN took the right approach here, they took the part of IPsec that
works well (the ESP transport mechanism) and bolted on the TLS handshake to
replace IKE (DTLS has only appeared quite recently).  They didn't have to
invent their own mechanisms for anything, but took tried-and-tested crypto
mechanisms and code and just went with that.

Peter.



Re: Gutmann Soundwave Therapy

2008-02-06 Thread Peter Gutmann
Ian G [EMAIL PROTECTED] writes:
James A. Donald wrote:
 I have been considering the problem of encrypted channels over UDP or
 IP.  TLS will not work for this, since it assumes and provides a
 reliable, and therefore non-timely channel, whereas what one wishes to
 provide is a channel where timeliness may be required at the expense of
 reliability.

This is what Guus was getting at:

- We needed to tunnel data over UDP, with UDP semantics. SSL requires a
  reliable stream. Therefore, we had to use something other than SSL to
  tunnel data.

This is where the OpenVPN developers got it right: Use TLS for the handshake
and IPsec's ESP for the transport.  It's been a solved problem for some years
now.

Peter.



Re: Dutch Transport Card Broken

2008-02-06 Thread Peter Gutmann
Steven M. Bellovin [EMAIL PROTECTED] writes:
On Fri, 01 Feb 2008 13:29:52 +1300
[EMAIL PROTECTED] (Peter Gutmann) wrote:
 Actually it doesn't even require X.509 certs.  TLS-SRP and TLS-PSK
 provide mutual authentication of client and server without any use of
 X.509.  The only problem has been getting vendors to support it,
 several smaller implementations support it, it's in the (still
 unreleased) OpenSSL 0.9.9, and the browser vendors don't seem to be
 interested at all, which is a pity because the mutual auth (the
 server has to prove possession of the shared secret before the client
 can connect) would significantly raise the bar for phishing attacks.

 (Anyone have any clout with Firefox or MS?  Without significant
 browser support it's hard to get any traction, but the browser
 vendors are too busy chasing phantoms like EV certs).

The big issue is prompting the user for a password in a way that no one will
confuse with a web site doing so.

HCI people have been studying this for quite some time, and there's been a lot
of good work done in this area.  Because of the amount of information, I'll
answer indirectly via a link (warning, it's a partial book draft and is
currently ~140 pages long):

http://www.cs.auckland.ac.nz/~pgut001/pubs/usability.pdf

Even without this detailed analysis, one of the Mac browsers (Safari?) already
has a quite distinctive password prompt that rolls down out of the menu bar at
the top.  Sure, you can spoof that if you own the browser, but if malware owns
your browser then you're toast anyway.

It might have been the right thing, once upon a time, but the horse may be
too far out of the barn by now to make it worthwhile closing the barn door.

That's the response I got from a browser developer when I talked about this
about a year ago: "Sufficiently sophisticated malware can spoof any piece of
browser UI, so let's just give up and admit that the phishers have won".  At
the moment, after 15-odd years of work, the state of the art for both major
secure-channel protocols is to connect to anything listening on port 22 or 443
and then hand over the user's password in plaintext form (although inside a
secure tunnel, as if that made any difference) [0].  This is only just barely
better than the 1970s-era telnet in that the authenticator is still handed
over in plaintext, but at least you can't capture it with a packet sniffer.
Moving to a challenge-response mechanism (which PSK and SRP aren't really,
it's more a bit-commitment since there's no real challenge or response process
[1]) would at least move the security into the late 1980s.

As a side-note, I was talking to a security person from a large (multi-
national) bank recently and they mentioned that they were slowing down on the
push to move to two-factor auth (real two-factor auth with SecurIDs and the
like, not the gimmicks that US banks are using :-) because the problem isn't
authenticating the user, it's authenticating the server and/or the
transaction, and most two-factor auth tokens can't do that.  As a result
they're not going to commit to sinking much more money into something that
doesn't actually solve the problem.  So mutual client/server auth is something
that's of concern to more than just some geeks on security mailing lists, it's
coming onto the radar of large financial institutions as well.

Peter.

[0] By 443 I mean HTTP over SSL/TLS, obviously.

[1] Actually this is neither challenge-response nor bit-commitment so in the
absence of anything better I'll propose "failsafe authentication" because
the other side doesn't get your authenticator unless they can prove they
already possess it.  In other words if the authentication process fails,
it fails safe.



Re: questions on RFC2631 and DH key agreement

2008-02-06 Thread Peter Gutmann
' =JeffH ' [EMAIL PROTECTED] writes:
[EMAIL PROTECTED] said:
 http://www.xml-dev.com/blog/index.php?action=viewtopic&id=196

thanks, but that doesn't actually answer my first question. It only documents
that a and b (alice and bob) arrive at the ZZ value independently. My question
is actually concerning section 2.1.2 Generation of Keying Material in
RFC2631.

I'm going to approach the answer somewhat differently: Why are you using this
mechanism?  The only reason that it's present in the spec is politics, it
being an attempt to avoid the RSA patent.  Its adoption was severely hampered
by the fact that US vendors already had RSA licenses, non-US vendors didn't
care (and in any case the patent has now expired, so they care even less), no
CAs of note will issue X9.42 certificates, and even if they did almost no
S/MIME implementations support it.  Although X9.42 was at one point listed as
mandatory to implement for S/MIME v3, the approach that was taken by most
vendors was to vaguely pretend to support X9.42 while actually concentrating
on RSA, knowing that no-one else supported it either (AFAIK only two vendors
ever really supported it, Microsoft had a receive-only implementation so that
no-one could accuse them of not being compliant with the spec, and the S/MIME
Freeware Library (which was the reference implementation and therefore had no
choice in supporting it) supported it because it had to).  A few years after
the expiry of the RSA patent, the matter was corrected by changing the
standard so that vendors were no longer required to even pretend to support
X9.42.  My comments at the time were:

-- Snip --

How about trying to make the spec at least vaguely approximate reality in the
choice of algorithms?  It doesn't really matter if you include requirements
like "MUST DSA OR WE WILL KILL YOU [0], SHOULD NOT RSA"; in practice the world
will interpret it as "MUST RSA, MAY DSA, SHOULD NOT X9.42 DH, BWAHAHAHAHAHA
X9.31 RSA" no matter what it says in the RFC.

I've been sitting here watching this debate go on and on, but since no matter
what you put in the RFC the market will interpret it as "MUST RSA" and various
levels of deprecation for anything else maybe we could get Markov Chaney to
continue the debate for a while just for form's sake and then after enough
messages have been exchanged to satisfy everyone either put text in the RFC
which accepts what everyone's going to do anyway or which specifies all sorts
of options and alternatives secure in the knowledge that implementors will
ignore it and do what the market wants/expects, which ain't DSA or X9.42 or
X9.31 RSA.

Peter.

[0] RFC 2026bis, "When MUST just isn't enough".

-- Snip --

So by implementing this you're getting an unwanted orphan crypto mechanism
that was only added for political reasons.  Are you sure you don't want to use
RSA instead?

Peter.
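For readers following the RFC 2631 sub-thread: the section 2.1.2 mechanism JeffH asks about expands the shared value ZZ into keying material by hashing it together with a counter.  The sketch below (hypothetical Python, SHA-1) shows only the rough shape; it deliberately omits the DER-encoded OtherInfo structure (algorithm OID, partyAInfo, suppPubInfo), so it is NOT interoperable with a real X9.42/RFC 2631 implementation.

```python
import hashlib
import struct

def km_simplified(zz: bytes, key_bits: int) -> bytes:
    """Rough shape of RFC 2631 "Generation of Keying Material":
    KM = H(ZZ || OtherInfo), with a 32-bit counter inside OtherInfo,
    iterated until key_bits of material exist.  OtherInfo is reduced
    here to just the counter -- a deliberate simplification."""
    out = b""
    counter = 1
    while len(out) * 8 < key_bits:
        other_info = struct.pack(">I", counter)  # stand-in for the DER blob
        out += hashlib.sha1(zz + other_info).digest()
        counter += 1
    return out[: key_bits // 8]

# Both parties compute ZZ independently (g^ab mod p), so both derive
# identical keying material without it ever crossing the wire.
```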



Re: Gutmann Soundwave Therapy

2008-02-06 Thread Peter Gutmann
Eric Rescorla [EMAIL PROTECTED] writes:

I don't propose to get into an extended debate about whether it is better to
use SRTP or to use generic DTLS. That debate has already happened in IETF and
SRTP is what the VoIP vendors are doing. However, the good news here is that
you can use DTLS to key SRTP (draft-ietf-avt-dtls-srtp), so there's no need
to invent a new key management scheme.

Hmm, given this X-to-key-Y pattern (your DTLS-for-SRTP example, as well as
OpenVPN using ESP with TLS keying), I wonder if it's worth unbundling the key
exchange from the transport?  At the moment there's (at least):

  TLS-keying --+-- TLS transport
               |
               +-- DTLS transport
               |
               +-- IPsec (ESP) transport
               |
               +-- SRTP transport
               |
               +-- Heck, SSH transport if you really want

Is the TLS handshake the universal impedance-matcher of secure-session
mechanisms?

Peter.



Re: TLS-SRP TLS-PSK support in browsers (Re: Dutch Transport Card Broken)

2008-02-06 Thread Ivan Krstić

On Feb 1, 2008, at 9:34 PM, Ian G wrote:
* Browser vendors don't employ security people as we know them on  
this mailgroup [...]  But they are completely at sea when it comes  
to systemic security failings or designing new systems.


I don't know about other browsers, but Mozilla's CSO-type is Window
Snyder, whom I'd easily describe as a pretty top-notch security person.


--
Ivan Krstić [EMAIL PROTECTED] | http://radian.org


Re: Gutmann Soundwave Therapy

2008-02-06 Thread Ivan Krstić

On Jan 31, 2008, at 10:32 PM, Richard Salz wrote:
Developers working in almost any field should know the history and best
practices -- is PGP's original "bass-o-matic" any more important than the
code in a defibrillator? -- but this is not the way our field works right
now.  Compare it to something like civil engineering or architecture.



I think this misses the point. Security is different.

In 2008, I can learn to build pretty good suspension bridges by
learning the state of the art of bridge-building. After that, as long
as I live, I run almost no risk of Newtonian mechanics being shown to
be wrong for any value of "wrong" that would make me go "well, wow, I no
longer understand how to build bridges."


In other words, people who build bridges these days can give you a
convincing presentation, based on solid physics and a highly complete
threat model (soil erosion, material failure, etc.) that their bridge
will do its job. They can say "this bridge will work because it
satisfies well-understood and reasonably immutable laws of nature."


People who attempt to build secure systems have no ultimately
well-understood (let alone immutable!) requirements to design against.
A good approximation is "a secure system is one that survives all
relevant attacks that people in our field have come up with thus far",
but it's clear that a system successfully meeting that goal can simply
cease to meet it any given day. Thus unlike with bridges, you
fundamentally can't evaluate the quality of a security system you
built if you're unfamiliar with the state of the art of _attacks_
against security systems, and you can't become familiar with those
unless you realize that these attacks have each brought down a system
previously considered impregnable. And if by the time you've gone
through dozens of broken systems and their corresponding attacks you
still think you're smart enough to write a new system by yourself,
you're either very brave or very daft.


Neither of those means you're a bad person, but both mean you shouldn't
be designing security systems.


--
Ivan Krstić [EMAIL PROTECTED] | http://radian.org



Re: TLS-SRP TLS-PSK support in browsers (Re: Dutch Transport Card Broken)

2008-02-06 Thread Peter Gutmann
Frank Siebenlist [EMAIL PROTECTED] writes:

That's actually a sad observation.

I keep telling my colleagues that this technology is "coming any day now to
a browser near you" - didn't realize that there was no interest from the
browser companies to add support for this...

I know of a number of organisations (mostly governmental, but also some
financial) in various countries who are really, really keen to get support for
(as James Donald pointed out) cryptographically secured relationships (not
requiring PKI would be a big feature) into browsers, but no-one knows who to
beat over the head about it.  The last group I talked to (banks) were hoping
to use commercial pressure to get MS to add support for it in IE7^H^H8 at
which point Firefox would be forced to follow, but it's a slow process.

Why do the browser companies not care?
What is the adoption issue?
Still the dark cloud of patents looming over it?
Not enough understanding about the benefits? (marketing)
Economic reasons that we wouldn't buy anymore server certs?

I think it's a combination of two factors:

1. Everyone knows that passwords are insecure, so it's not worth trying to do
   anything with them.

   (My counter-argument to this is that passwords are only insecure because
   protocol designers have chosen to make them insecure, see my previous post
   about the quaint 1970s-vintage hand-over-the-password model used by SSH and
   SSL/TLS).

2. If you add failsafe authentication to browsers, CAs become redundant.

   (My counter-argument to this is to ask whether browser security exists in
   order to provide a business model for CAs or to protect users.  Currently
   it seems to be the former, with EV certs being a prime example).

There are probably other contributory reasons as well.

Peter.
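Point 2's "failsafe authentication" can be illustrated with a toy mutual challenge-response (hypothetical Python sketch; real TLS-SRP/TLS-PSK use PAKE/DH constructions that additionally resist offline dictionary attacks, which a bare HMAC exchange does not):

```python
import hashlib
import hmac
import os

def proof(secret: bytes, label: bytes, challenge: bytes) -> bytes:
    # Proof of possession of the shared secret: the secret itself is
    # never transmitted, only a keyed hash over a fresh challenge.
    return hmac.new(secret, label + challenge, hashlib.sha256).digest()

shared = b"rather short password"   # established at enrolment

# The server must prove possession *first*, before the client
# authenticates -- so a phishing site that lacks the secret gets
# nothing useful out of the attempt.
challenge = os.urandom(16)
server_proof = proof(shared, b"server", challenge)
assert hmac.compare_digest(server_proof, proof(shared, b"server", challenge))

# A phisher guessing at the secret fails the check:
fake = proof(b"wrong guess", b"server", challenge)
assert not hmac.compare_digest(fake, server_proof)
```

If the server's proof fails, the client aborts before sending its own proof: the authentication fails safe, in the sense the term is used earlier in this thread.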



Re: Gutmann Soundwave Therapy

2008-02-06 Thread Leichter, Jerry
Commenting on just one portion:
| 2. VoIP over DTLS
| As Perry indicated in another message, you can certainly run VoIP
| over DTLS, which removes the buffering and retransmit issues 
| James is alluding to. Similarly, you could run VoIP over IPsec
| (AH/ESP). However, for performance reasons, this is not the favored
| approach inside IETF.
| 
| The relevant issue here is packet size. Say you're running a 
| low bandwidth codec like G.729 at 8 kbps. If you're operating at
| the commonly used 50 pps, then each packet is 160 bits == 20 bytes.
| The total overhead of the IP, UDP, and RTP headers is 40 bytes,
| so you're sending 60 byte packets. 
| 
| - If you use DTLS with AES in CBC mode, you have the 4 byte DTLS
|   header, plus a 16 byte IV, plus 10 bytes of MAC (in truncated MAC
|   mode), plus 2 bytes of padding to bring you up to the AES block
|   boundary: DTLS adds 32 bytes of overhead, increasing packet
|   size by over 50%. The IPsec situation is similar.
| 
| - If you use CTR mode and use the RTP header to form the initial
|   CTR state, you can remove all the overhead but the MAC itself,
|   reducing the overhead down to 10 bytes with only 17% packet
|   expansion (this is how SRTP works)
If efficiency is your goal - and realistically it has to be *a* goal -
then you need to think about the semantics of what you're securing.  By
the nature of VOIP, there's very little semantic content in any given
packet, and because VOIP by its nature is a real-time protocol, that
semantic content loses all value in a very short time.  Is it really
worth 17% overhead to provide this level of authentication for data that
isn't, in and of itself, so significant?  At least two alternative
approaches suggest themselves:

- Truncate the MAC to, say, 4 bytes.  Yes, a simple brute
force attack lets one forge so short a MAC - but
is such an attack practically mountable in real
time by attackers who concern you?

- Even simpler, send only one MAC every second - i.e.,
every 50 packets, for the assumed parameters.
Yes, an attacker can insert a second's worth
of false audio - after which he's caught.  I
suppose one could come up with scenarios in
which that matters - but they are very specialized.
VOIP is for talking to human beings, and for
human beings in all but extraordinary circumstances
a second is a very short time.

  If you don't like 1 second, make this configurable.  Even
dropping it to 1/10 second and sticking to DTLS
(with a modification, of course) drops your overhead
to 5% - and 1/10 second isn't even enough time to
insert a no into the stream.  For many purposes,
a value of 10 seconds - which reduces the overhead to
an insignificant level - is probably acceptable.
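The trade-offs in this exchange are easy to check numerically.  A quick sketch using the packet sizes quoted above (G.729 at 50 pps: 20-byte payload plus 40 bytes of IP/UDP/RTP headers):

```python
BASE = 20 + 40   # bytes per packet before any crypto overhead

def overhead_pct(extra_bytes_per_packet: float) -> float:
    # Crypto bytes added per packet, as a percentage of the 60-byte base.
    return 100.0 * extra_bytes_per_packet / BASE

dtls_cbc  = overhead_pct(4 + 16 + 10 + 2)  # DTLS hdr + IV + trunc. MAC + pad
srtp      = overhead_pct(10)               # CTR mode: only the MAC remains
short_mac = overhead_pct(4)                # the 4-byte truncated MAC above
one_per_s = overhead_pct(10 / 50)          # one 10-byte MAC per 50 packets

print(f"DTLS/CBC per packet : +{dtls_cbc:.0f}%")   # the "over 50%" figure
print(f"SRTP per packet     : +{srtp:.0f}%")       # the ~17% figure
print(f"4-byte MAC          : +{short_mac:.1f}%")
print(f"one MAC per second  : +{one_per_s:.2f}%")
```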

It's great to build generic encrypted tunnels that provide strong
security guarantees regardless of what you send through them - just as
it's great to provide generic stream protocols like TCP that don't care
what you use them for.  The whole point of this discussion has been
that, in some cases, the generic protocols aren't really what you need:
They don't provide quite the guarantees you need, and they impose
overhead that may be unacceptable in some cases.  The same argument
applies to cryptographic algorithms.  Yes, there is a greater danger if
cryptographic algorithms are misused:  Using TCP where it's
inappropriate *usually* just screws up your performance, while an
inappropriate cryptographic primitive may compromise your security.
Of course, if you rely on TCP's reliability in an inappropriate way,
you can also get into serious trouble - but that's more subtle and
rare.  Then again,
actually mounting real attacks against some of the cryptographic
weaknesses we sometimes worry about is also pretty subtle and rare.

The NSA quote someone - Steve Bellovin? - has repeated comes to mind:
"Amateurs talk about algorithms.  Professionals talk about economics."
Using DTLS for VOIP provides you with an extremely high level of
security, but costs you 50% packet overhead.  Is that worth it to you?
It really depends - and making an intelligent choice requires that
various alternatives along the cost/safety curve actually be available.

-- Jerry



Re: Gutmann Soundwave Therapy

2008-02-06 Thread Eric Rescorla
At Mon, 4 Feb 2008 09:33:37 -0500 (EST),
Leichter, Jerry wrote:
 
 Commenting on just one portion:
 | 2. VoIP over DTLS
 | As Perry indicated in another message, you can certainly run VoIP
 | over DTLS, which removes the buffering and retransmit issues 
 | James is alluding to. Similarly, you could run VoIP over IPsec
 | (AH/ESP). However, for performance reasons, this is not the favored
 | approach inside IETF.
 | 
 | The relevant issue here is packet size. Say you're running a 
 | low bandwidth codec like G.729 at 8 kbps. If you're operating at
 | the commonly used 50 pps, then each packet is 160 bits == 20 bytes.
 | The total overhead of the IP, UDP, and RTP headers is 40 bytes,
 | so you're sending 60 byte packets. 
 | 
 | - If you use DTLS with AES in CBC mode, you have the 4 byte DTLS
 |   header, plus a 16 byte IV, plus 10 bytes of MAC (in truncated MAC
 |   mode), plus 2 bytes of padding to bring you up to the AES block
 |   boundary: DTLS adds 32 bytes of overhead, increasing packet
 |   size by over 50%. The IPsec situation is similar.
 | 
 | - If you use CTR mode and use the RTP header to form the initial
 |   CTR state, you can remove all the overhead but the MAC itself,
 |   reducing the overhead down to 10 bytes with only 17% packet
 |   expansion (this is how SRTP works)
 If efficiency is your goal - and realistically it has to be *a* goal -
 then you need to think about the semantics of what you're securing.  By
 the nature of VOIP, there's very little semantic content in any given
 packet, and because VOIP by its nature is a real-time protocol, that
 semantic content loses all value in a very short time.  Is it really
 worth 17% overhead to provide this level of authentication for data that
 isn't, in and of itself, so significant?  At least two alternative
 approaches suggest themselves:

   - Truncate the MAC to, say, 4 bytes.  Yes, a simple brute
   force attack lets one forge so short a MAC - but
   is such an attack practically mountable in real
   time by attackers who concern you?

In fact, 32-bit authentication tags are a feature of
SRTP (RFC 3711). 



   - Even simpler, send only one MAC every second - i.e.,
   every 50 packets, for the assumed parameters.
   Yes, an attacker can insert a second's worth
   of false audio - after which he's caught.  I
   suppose one could come up with scenarios in
   which that matters - but they are very specialized.
   VOIP is for talking to human beings, and for
   human beings in all but extraordinary circumstances
   a second is a very short time.

Not sending a MAC on every packet has difficult interactions with
packet loss. If you do the naive thing and every N packets send a MAC
covering the previous N packets, then if you lose even one of those
packets you can't verify the MAC. But since some packet loss is
normal, an attacker can cover their tracks simply by removing one out
of every N packets.

Since (by definition) you don't have a copy of the packet you've lost,
you need a MAC that survives that--and is still compact. This makes
life rather more complicated. I'm not up on the most recent lossy
MACing literature, but I'm unaware of any computationally efficient
technique which has a MAC of the same size with a similar security
level. (There's an inefficient technique of having the MAC cover
all 2^50 combinations of packet loss, but that's both prohibitively
expensive and loses you significant security.)
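The packet-loss interaction is easy to demonstrate concretely (hypothetical Python sketch of the naive one-MAC-per-N-packets scheme described above):

```python
import hashlib
import hmac

KEY = b"session authentication secret"

def mac_over(packets) -> bytes:
    # Naive scheme: a single MAC covering a whole window of N packets.
    return hmac.new(KEY, b"".join(packets), hashlib.sha256).digest()

window = [b"voice-frame-%02d" % i for i in range(50)]
tag = mac_over(window)

# Lossless delivery: the MAC verifies.
assert hmac.compare_digest(tag, mac_over(window))

# One packet lost (or deliberately dropped by an attacker covering a
# modification elsewhere): the receiver can no longer verify the MAC
# at all, and cannot distinguish ordinary loss from tampering.
received = window[:17] + window[18:]
assert not hmac.compare_digest(tag, mac_over(received))
```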


 The NSA quote someone - Steve Bellovin? - has repeated comes to mind:
 "Amateurs talk about algorithms.  Professionals talk about economics."
 Using DTLS for VOIP provides you with an extremely high level of
 security, but costs you 50% packet overhead.  Is that worth it to you?
 It really depends - and making an intelligent choice requires that
 various alternatives along the cost/safety curve actually be available.

Which there are, as indicated above and in my previous message. 

-Ekr





Re: questions on RFC2631 and DH key agreement

2008-02-06 Thread ' =JeffH '
Ok thanks, I'm going to risk pedantry in order to nail things down a bit
more rigorously...

' =JeffH ' [EMAIL PROTECTED] writes:
[EMAIL PROTECTED] said:
 http://www.xml-dev.com/blog/index.php?action=viewtopic&id=196

thanks, but that doesn't actually answer my first question. It only documents
that a and b (alice and bob) arrive at the ZZ value independently. My
question is actually concerning section 2.1.2 Generation of Keying Material
in RFC2631.

[EMAIL PROTECTED] said:
  I'm going to approach the answer somewhat differently: Why are you using
 this mechanism?

Are you referring to the above mentioned mechanism of arriving at the ZZ value 
independently, which is implied in RFC2631?

(btw, I am not myself designing anything at this time that uses DH, I'm 
reviewing/analyzing. I am _not_ reviewing RFC2630/2631 themselves, rather it's 
a (non-IETF) spec that references 2631)


  The only reason that it's present in the spec is politics,
 it being an attempt to avoid the RSA patent.

So by the spec you're referring to RFC2631 here?

Or are you referring to X9.42?

Or something else?


  Its adoption was severely
 hampered by the fact that US vendors already had RSA licenses, non-US vendors
 didn't care (and in any case the patent has now expired, so they care even
 less), no CA's of note will issue X9.42 certificates, and even if they did
 almost no S/MIME implementations support it.

snippage/

So here, and in the snippage, are you referring to X9.42 itself, or CMS 
(Cryptographic Message Syntax) ?


  A few years after the expiry of the RSA patent, the matter was corrected by
 changing the standard so that vendors were no longer required to even pretend
 to support X9.42.  My comments at the time were:

Exactly which standard ?  From grepping all RFCs, it seems you're referring 
to CMS when you say the standard, which has indeed been revised a few times 
since RFC2630.

thanks,

=JeffH




Re: questions on RFC2631 and DH key agreement

2008-02-06 Thread Joseph Ashwood
- Original Message - 
From: ' =JeffH ' [EMAIL PROTECTED]

Sent: Saturday, February 02, 2008 12:56 PM
Subject: Re: questions on RFC2631 and DH key agreement

If a purportedly secure protocol employing a nominal DH exchange in 
order to

establish a shared secret key between a requester and responder, employs
widely known published (on the web) fixed values for g (2) and p (a
purportedly prime 1040 bit number) for many of its implementations and
runtime invocations, what are the risks its designers are assuming with
respect to the resultant properties of ZZ?


It is assuming that the total value of the data protected by those 
parameters never crosses the cost to break in the information value 
lifetime. For 1040 bits this is highly questionable for any data with a 
lifetime longer than 6 months.


I suspect that many implementations will simply use the equivalent of 
whatever
rand() function is available to get the bits for their private keys 
directly,


Very bad idea, for two reasons, the rand() function does not have sufficient 
internal state, and the rand() function is far from cryptographically 
strong.



and will likely not reallocate private keys unless the implementation or
machine are restarted. So if the random number generator has known flaws, 
then

there may be some predictability in both the public keys and in ZZ, yes?


All flaws in the private key generator will show in the public key 
selection, do yes.



Additionally there's the previously noted issue with the values of static
private keys slowly leaking.


Only if the value of p changes, if the value of p remains static, then the 
private key doesn't leak. A simple proof of this is simple, Eve can easily, 
and trivially generate any number of public/private key pairs and thereby 
generate any number of viewable sets to determine the unknown private key.
   Joe 




Re: Gutmann Soundwave Therapy

2008-02-06 Thread Martin James Cochran

Comments inline.

On Feb 3, 2008, at 5:56 PM, Eric Rescorla wrote:



- If you use DTLS with AES in CBC mode, you have the 4 byte DTLS
header, plus a 16 byte IV, plus 10 bytes of MAC (in truncated MAC
mode), plus 2 bytes of padding to bring you up to the AES block
boundary: DTLS adds 32 bytes of overhead, increasing packet
size by over 50%. The IPsec situation is similar.

- If you use CTR mode and use the RTP header to form the initial
CTR state, you can remove all the overhead but the MAC itself,
reducing the overhead down to 10 bytes with only 17% packet
expansion (this is how SRTP works)



Depending on the lifetime of the keys involved, you can probably  
truncate the MAC tags much more than this.  Using the RTP counter for  
use in some appropriate stateful MAC may mean a 3- or 4-byte tag is  
enough security.  Additionally, in order to conserve bandwidth you  
might want to make a trade-off where some packets may be forged with  
small probability (in the VOIP case, that means an attacker gets to  
select a fraction of a second of sound, which is probably harmless),  
but it is hard to forge many packets.


In (http://eprint.iacr.org/2006/095), John Black and I treat this  
model in depth, and suggest a MAC scheme which may be most appropriate  
for this scenario.  A stateful, highly-truncated HMAC will also work  
fine, but is slower than the scheme we propose.


Martin Cochran


Re: Dutch Transport Card Broken

2008-02-06 Thread James A. Donald

Nicolas Williams wrote:
 Sounds a bit like SCTP, with crypto thrown in.

SCTP is what we should have done http over, though of
course SCTP did not exist back then.  Perhaps, like
quite a few other standards, it still does not quite
exist.

 I thought it was the latency caused by unnecessary
 round-trips and expensive key exchange crypto that
 motivated your proposal.  The cost of session crypto
 is probably not as noticeable as that of the latency
 of key exchange and authentication.

The big problem is that between the time one logs on to
one's bank, and the time one logs off, one is apt to
have done lots and lots of cryptographic key exchanges.
One key exchange per customer session is a really small
cost, but we have a storm of them.

Whenever the web page shows what is particular to the
individual rather than universal, it uses a session
cookie, visible to server side web page code.
Encryption, the bundle of shared secrets that enable
encrypted communications, should be visible at that
level, should be a session cookie characteristic rather
than a low level transport characteristic, should have
the durability and scope of a session cookie, instead of
the durability and scope of a transaction.

Because we use encryption merely at a level where it is
logically transient, because it protects transactions
rather than relationships, the connections are too
costly, and fail to provide the information about
relationships that are needed to protect the user.

If we had implemented http over something like SCTP,
then an SCTP-like connection value should have been a
cookie.  One should have been able to look at the
SCTP-like connection value in the server side page code,
and be pretty sure that if the person is the same, the
connection value will be unchanged, so that one could
then associate additional state with the connection
value - encryption being some more state.

Encryption parameters have more in common with session
cookies than with transactions.  They should be about
relationships, not data transport.

If encryption setups were made and discarded only as
often as session cookies, not so costly.  It is making
them and discarding them as often as transactions that
hurts. Also, the fact that they are so frequently
discarded means that scope information is unavailable to
secure relationships, means we cannot provide useful
information to the end user about who he is really
talking to, because the encryption does not know about
relationships, even though encryption should be about
relationships.

With encryption merely at the transactional level, the
browser can know the true name of the website you are
looking at, that being merely a page property, but
cannot know what relationship you think you are
participating in.  To provide security, client side
code, browser chrome, needs to know not the true name of
the web site, but whether you are at a web site where you
have a user name or durable user ID.



Re: Dutch Transport Card Broken

2008-02-06 Thread Nicolas Williams
On Tue, Feb 05, 2008 at 08:17:32AM +1000, James A. Donald wrote:
 Nicolas Williams wrote:
  Sounds a bit like SCTP, with crypto thrown in.
 
 SCTP is what we should have done http over, though of
 course SCTP did not exist back then.  Perhaps, like
 quite a few other standards, it still does not quite
 exist.

Proposing something new won't help make that available sooner than SCTP
if that something new, like SCTP, must be implemented in kernel-land.

  I thought it was the latency caused by unnecessary
  round-trips and expensive key exchange crypto that
  motivated your proposal.  The cost of session crypto
  is probably not as noticeable as that of the latency
  of key exchange and authentication.
 
 The big problem is that between the time one logs on to
 one's bank, and the time one logs off, one is apt to
 have done lots and lots of cryptographic key exchanges.
 One key exchange per customer session is a really small
 cost, but we have a storm of them.

This is what session resumption is all about, and now that we have a way
to do it without server-side state (RFC4507) there should be no more
complaints.

If the latency of multiple key exchanges is the issue then we should
push for deployment of RFC4507 before we go push for a brand new
transport protocol.

 Whenever the web page shows what is particular to the
 individual rather than universal, it uses a session
 cookie, visible to server side web page code.
 Encryption, the bundle of shared secrets that enable
 encrypted communications, should be visible at that
 level, should be a session cookie characteristic rather
 than a low level transport characteristic, should have
 the durability and scope of a session cookie, instead of
 the durability and scope of a transaction.

If I understand what you mean then the ticket in RFC4507 is just that.

Nico
-- 



Re: questions on RFC2631 and DH key agreement

2008-02-06 Thread ' =JeffH '
I'd scrawled:
  If a purportedly secure protocol employing a nominal DH exchange in 
  order to
  establish a shared secret key between a requester and responder, employs
  widely known published (on the web) fixed values for g (2) and p (a
  purportedly prime 1040 bit number) for many of its implementations and
  runtime invocations, what are the risks its designers are assuming with
  respect to the resultant properties of ZZ?

Joseph Ashwood graciously replied:
 
 It is assuming that the total value of the data protected by those 
 parameters never crosses the cost to break in the information value 
 lifetime. 

yes.


  I suspect that many implementations will simply use the equivalent of 
  whatever
  rand() function is available to get the bits for their private keys 
  directly,
 
 Very bad idea, for two reasons, the rand() function does not have sufficient 
 internal state, and the rand() function is far from cryptographically 
 strong.

what about just using bytes from /dev/urandom on *nix?


  and will likely not reallocate private keys unless the implementation or
  machine are restarted. So if the random number generator has known flaws, 
  then
  there may be some predictability in both the public keys and in ZZ, yes?
 
 All flaws in the private key generator will show in the public key 
 selection, do yes.
 ^^
 so?


 
  Additionally there's the previously noted issue with the values of static
  private keys slowly leaking.
 
 Only if the value of p changes, if the value of p remains static, then the 
 private key doesn't leak.

Ok, I can see that from ya = g ^ xa mod p  ...  ya doesn't change if g, xa, 
and p don't change.
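That invariance is easy to demonstrate in a toy sketch (the Mersenne-prime modulus below is purely illustrative and far too small for real use):

```python
import secrets

p = 2**127 - 1   # toy modulus (a Mersenne prime); real use needs a much
g = 3            # larger, properly generated prime and generator

xa = secrets.randbelow(p - 2) + 1   # Alice's static private key
xb = secrets.randbelow(p - 2) + 1   # Bob's static private key

ya = pow(g, xa, p)   # public value: fixed as long as g, xa and p are fixed
yb = pow(g, xb, p)

# Both ends arrive at the shared secret ZZ independently:
assert pow(yb, xa, p) == pow(ya, xb, p)
```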


 A simple proof of this is simple, Eve can easily, 
 and trivially generate any number of public/private key pairs and thereby 
 generate any number of viewable sets to determine the unknown private key.

Are you saying here that if p (and g) are static, then one has some 
opportunity to brute-force guess the private key that some long-running 
instance is using, if it doesn't otherwise re-allocate said private key from 
time to time?


thanks again,

=JeffH




Re: questions on RFC2631 and DH key agreement

2008-02-06 Thread Peter Gutmann
' =JeffH ' [EMAIL PROTECTED]
[EMAIL PROTECTED] said:
 I'm going to approach the answer somewhat differently: Why are you using
this mechanism?

Are you referring to the above mentioned mechanism of arriving at the ZZ
value independently, which is implied in RFC2631?

I'm referring to the "X9.42" mechanism (as used in CMS) as a whole (see below
for the reason why this is in quotes).

(btw, I am not myself designing anything at this time that uses DH, I'm
reviewing/analyzing. I am _not_ reviewing RFC2630/2631 themselves, rather it's
a (non-IETF) spec that references 2631)

Oh.  In that case you have my sympathy :-).

So by the spec you're referring to RFC2631 here?

Or are you referring to X9.42?

I'm referring to the (old) CMS RFCs.  Even the RFCs themselves don't use
proper X9.42, they were based on an old draft that floated around for awhile
and was subsequently changed and updated.  You can see this if you look at the
order of the DLP key parameters, everything else (e.g. FIPS 186) uses { p, q,
g }, while the old CMS RFCs flip the second two values to use { p, g, q }.

I think the definitive comment on this (which also talks about differences
between FIPS 186, various X9.42 drafts, and the CMS use of those drafts) is by
the former editor of X9.42, and is archived at
http://www.vpnc.org/ietf-ipsec/99.ipsec/msg02018.html.

So here, and in the snippage, are you referring to X9.42 itself, or CMS
(Cryptographic Message Syntax) ?

Specifically CMS, since X9.42 isn't necessarily what's used in CMS.

Peter.



Traffic analysis reveals spy satellite details

2008-02-06 Thread Udhay Shankar N

http://www.nytimes.com/2008/02/05/science/space/05spotters.html

When the government announced last month that a top-secret spy satellite 
would, in the next few months, come falling out of the sky, American 
officials said there was little risk to people because satellites fall 
out of orbit fairly frequently and much of the planet is covered by oceans.


But they said precious little about the satellite itself.

Such information came instead from Ted Molczan, a hobbyist who tracks 
satellites from his apartment balcony in Toronto, and fellow satellite 
spotters around the world. They have grudgingly become accustomed to 
being seen as “propeller-headed geeks” who “poke their finger in the 
eye” of the government’s satellite spymasters, Mr. Molczan said, taking 
no offense. “I have a sense of humor,” he said.


Mr. Molczan, a private energy conservation consultant, is the best known 
of the satellite spotters who, needing little more than a pair of 
binoculars, a stop watch and star charts, uncover some of the deepest of 
the government’s expensive secrets and share them on the Internet.


snip
--
((Udhay Shankar N)) ((udhay @ pobox.com)) ((www.digeratus.com))



Re: questions on RFC2631 and DH key agreement

2008-02-06 Thread Joseph Ashwood
- Original Message - 
From: ' =JeffH ' [EMAIL PROTECTED]

To: Joseph Ashwood [EMAIL PROTECTED]
Cc: cryptography@metzdowd.com
Sent: Monday, February 04, 2008 5:18 PM
Subject: Re: questions on RFC2631 and DH key agreement



I'd scrawled:

 If a purportedly secure protocol employing a nominal DH exchange in
 order to
 establish a shared secret key between a requester and responder, 
 employs

 widely known published (on the web) fixed values for g (2) and p (a
 purportedly prime 1040 bit number) for many of its implementations and
 runtime invocations, what are the risks its designers are assuming with
 respect to the resultant properties of ZZ?


Joseph Ashwood graciously replied:


It is assuming that the total value of the data protected by those
parameters never crosses the cost to break in the information value
lifetime.


yes.



 I suspect that many implementations will simply use the equivalent of
 whatever
 rand() function is available to get the bits for their private keys
 directly,

Very bad idea, for two reasons, the rand() function does not have 
sufficient

internal state, and the rand() function is far from cryptographically
strong.


what about just using bytes from /dev/urandom on *nix?


*nix /dev/urandom should work well, the entropy harvesting is reasonably 
good, and the mixing/generating are sufficient to keep it from being the 
weak link.
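A minimal sketch of that advice (os.urandom reads /dev/urandom on *nix; the toy prime and the extra-byte trick are illustrative assumptions, not part of the original exchange):

```python
import os

def dh_private_key(p):
    # Draw bits from the kernel CSPRNG (/dev/urandom on *nix) rather than
    # rand(), then reduce into the valid exponent range [1, p-2].
    nbytes = (p.bit_length() + 7) // 8 + 8   # extra bytes to flatten modulo bias
    return int.from_bytes(os.urandom(nbytes), 'big') % (p - 2) + 1

p = 2**127 - 1          # toy prime; a real group modulus is much larger
xa = dh_private_key(p)
assert 1 <= xa <= p - 2
```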





 and will likely not reallocate private keys unless the implementation 
 or
 machine are restarted. So if the random number generator has known 
 flaws,

 then
 there may be some predictability in both the public keys and in ZZ, 
 yes?


All flaws in the private key generator will show in the public key
selection, do yes.

^^
so?


Yep, my typos show I'm far from perfect. I meant so.






 Additionally there's the previously noted issue with the values of 
 static

 private keys slowly leaking.

Only if the value of p changes, if the value of p remains static, then 
the

private key doesn't leak.


Ok, I can see that from ya = g ^ xa mod p  ...  ya doesn't change if g, 
xa,

and p don't change.



A simple proof of this is simple, Eve can easily,
and trivially generate any number of public/private key pairs and thereby
generate any number of viewable sets to determine the unknown private 
key.


Are you saying here that if p (and g) are static, then one has some
opportunity to brute-force guess the private key that some long-running
instance is using, if it doesn't otherwise re-allocate said private key 
from

time to time?


Actually I'm saying that if p and g do not change, then there is no 
additional leakage of the private key beyond what Eve can compute anyway.


There are many, many factors involved in any deep security examination, 
making it truly a business decision with all the complexities involved in 
that, and very easy to get wrong.
   Joe 




Re: questions on RFC2631 and DH key agreement

2008-02-06 Thread ' =JeffH '

[EMAIL PROTECTED] said:
 *nix /dev/urandom should work well, the entropy harvesting is reasonably
 good, and the mixing/generating are sufficient to keep it from being the
 weak link. 

yeah, that's the way it sounds from the man page (on linux). thx. 


 Actually I'm saying that if p and g do not change, then there is no
 additional leakage of the private key beyond what Eve can compute anyway. 

ok, gotcha.

thanks again,

=JeffH




Re: TLS-SRP TLS-PSK support in browsers (Re: Dutch Transport Card Broken)

2008-02-06 Thread Frank Siebenlist

Peter Gutmann wrote:

Frank Siebenlist [EMAIL PROTECTED] writes:


That's actually a sad observation.

I keep telling my colleagues that this technology is coming any day now to
a browser near you - didn't realize that that there was no interest with the
browser companies to add support for this...


I know of a number of organisations (mostly governmental, but also some
financial) in various countries who are really, really keen to get support for
(as James Donald pointed out) cryptographically secured relationships (not
requiring PKI would be a big feature) into browsers, but no-one knows who to
beat over the head about it.  The last group I talked to (banks) were hoping
to use commercial pressure to get MS to add support for it in IE7^H^H8 at
which point Firefox would be forced to follow, but it's a slow process.



With the big browser war still going strong, wouldn't that provide 
fantastic marketing opportunities for Firefox?


If Firefox would support these secure password protocols, and the banks 
would openly recommend their customers to use Firefox because it's safer 
and protects them better from phishing, that would be great publicity 
for Firefox, draw more users, and force M$ to support it too in the long 
run...




Why do the browser companies not care?
What is the adoption issue?
Still the dark cloud of patents looming over it?
Not enough understanding about the benefits? (marketing)
Economic reasons that we wouldn't buy anymore server certs?


I think it's a combination of two factors:

1. Everyone knows that passwords are insecure, so it's not worth trying to do
   anything with them.

   (My counter-argument to this is that passwords are only insecure because
   protocol designers have chosen to make them insecure, see my previous post
   about the quaint 1970s-vintage hand-over-the-password model used by SSH and
   SSL/TLS).



...these protocols would even make the use of one-time-passwords more 
secure (no MITM exposure - phishing), and make them securely usable 
without any server-certs...




2. If you add failsafe authentication to browsers, CAs become redundant.

   (My counter-argument to this is to ask whether browser security exists in
   order to provide a business model for CAs or to protect users.  Currently
   it seems to be the former, with EV certs being a prime example).



I was afraid that this cynical argument would play a role... so the 
server-cert racketeering scheme has just been made more profitable 
through more expensive but equally trustworthy EV-certs, which makes 
it more difficult to introduce alternatives that don't fit into this 
business model...


On the other hand, I'm sure that the marketeers will be able to sell 
server-certs together with those secure password protocols to the naive 
customers as it will be very difficult to explain why you do/don't need 
the certs and why it would be more/less secure...


-Frank.

--
Frank Siebenlist   [EMAIL PROTECTED]
The Globus Alliance - Argonne National Laboratory



Re: Fixing SSL (was Re: Dutch Transport Card Broken)

2008-02-06 Thread Anne Lynn Wheeler

a recent reference

Research unmasks anonymity networks
http://www.techworld.com/security/news/index.cfm?newsID=11295
Research unmasks anonymity networks
http://www.networkworld.com/news/2008/020108-research-unmasks-anonymity.html
Research unmasks anonymity networks
http://www.arnnet.com.au/index.php/id;1270745171;fp;4194304;fpid;1
Paper Outlines Methods for Beating Anonymity Technology
http://www.darkreading.com/document.asp?doc_id=144606



Re: Gutmann Soundwave Therapy

2008-02-06 Thread Steven M. Bellovin
On Mon, 4 Feb 2008 09:33:37 -0500 (EST)
Leichter, Jerry [EMAIL PROTECTED] wrote:

 The NSA quote someone - Steve Bellovin? - has repeated comes to mind:
 Amateurs talk about algorithms.  Professionals talk about economics.
 Using DTLS for VOIP provides you with an extremely high level of
 security, but costs you 50% packet overhead.  Is that worth it to you?
 It really depends - and making an intelligent choice requires that
 various alternatives along the cost/safety curve actually be
 available.

Precisely.

Some years ago, I did a crypto design for a potential product.  As best
we could figure it, the extra overhead for a standard mechanism versus
a custom one was greater than the profit margin for this product.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: TLS-SRP TLS-PSK support in browsers (Re: Dutch Transport Card Broken)

2008-02-06 Thread Victor Duchovni
On Wed, Feb 06, 2008 at 09:21:47AM -0800, Frank Siebenlist wrote:

 With the big browser war still going strong, wouldn't that provide 
 fantastic marketing opportunities for Firefox?
 
 If Firefox would support these secure password protocols, and the banks 
  would openly recommend their customers to use Firefox because it's safer 
 and protects them better from phishing, that would be great publicity 
 for Firefox, draw more users, and force M$ to support it too in the long 
 run...

It is a bit early. OpenSSL 0.9.9 is not yet released. I wish OpenSSL
releases were more frequent, and each added fewer features, allowing
features to be released as they mature; this would also reduce pressure
to add features to stable releases (which occasionally break binary
compatibility, and lead to vendors back-porting fixes rather than deploying
the next patch level of the stable release).

While Firefox should ideally be developing and testing PSK now, without
stable libraries to use in servers and browsers, we can't yet expect
anything to be released.

-- 

 /\ ASCII RIBBON  NOTICE: If received in error,
 \ / CAMPAIGN Victor Duchovni  please destroy and notify
  X AGAINST   IT Security, sender. Sender does not waive
 / \ HTML MAILMorgan Stanley   confidentiality or privilege,
   and use is prohibited.



Re: questions on RFC2631 and DH key agreement

2008-02-06 Thread ' =JeffH '
Thanks Hal. 

It turns out the supplied default for p is 1024 bit -- I'd previously goofed 
when using wc on it..

DCF93A0B883972EC0E19989AC5A2CE310E1D37717E8D9571BB7623731866E61EF75A2E27898B057
F9891C2E27A639C3F29B60814581CD3B2CA3986D2683705577D45C2E7E52DC81C7A171876E5CEA7
4B1448BFDFAF18828EFD2519F14E45E3826634AF1949E5B535CC829A483B8A76223E5D490A257F0
5BDFF16F2FB22C583AB
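The count is quick to verify programmatically (hex digits copied verbatim from above, line breaks removed):

```python
p_hex = (
    "DCF93A0B883972EC0E19989AC5A2CE310E1D37717E8D9571BB7623731866E61EF75A2E27898B057"
    "F9891C2E27A639C3F29B60814581CD3B2CA3986D2683705577D45C2E7E52DC81C7A171876E5CEA7"
    "4B1448BFDFAF18828EFD2519F14E45E3826634AF1949E5B535CC829A483B8A76223E5D490A257F0"
    "5BDFF16F2FB22C583AB"
)
p = int(p_hex, 16)
print(len(p_hex) * 4, p.bit_length())   # 1024 1024
```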


=JeffH




Re: questions on RFC2631 and DH key agreement

2008-02-06 Thread Hal Finney
Jeff Hodges writes:
 If a purportedly secure protocol employing a nominal DH exchange in order 
 to 
 establish a shared secret key between a requester and responder, employs 
 widely known published (on the web) fixed values for g (2) and p (a 
 purportedly prime 1040 bit number) for many of its implementations and 
 runtime invocations, what are the risks its designers are assuming with 
 respect to the resultant properties of ZZ?

This can be reasonably safe, if p is chosen properly. There is no problem
with using g=2, with the right p. The main issue is that with current
technology, a 1040 bit p stands a substantial chance of being broken.
A 1024 bit special-form number was factored last year, with claims that
the technique might apply to general RSA moduli of that size. Finding
discrete logs takes similar work.  A widely reused p value would be a
fat target for such an effort.

 I suspect that many implementations will simply use the equivalent of 
 whatever 
 rand() function is available to get the bits for their private keys directly, 
 and will likely not reallocate private keys unless the implementation or 
 machine are restarted. So if the random number generator has known flaws, 
 then 
 there may be some predictability in both the public keys and in ZZ, yes? 
 Additionally there's the previously noted issue with the values of static 
 private keys slowly leaking.

I'm not sure about this leaking; I asked Ashwood for clarification.
Certainly if the secret exponents are poorly chosen, the system will be
insecure. I would not necessarily assume that rand() is being used; I
would hope in this day and age that people would know better than that.
/dev/random on Linux/Mac and CryptGenRandom on Windows should provide
adequate security for this use, and hopefully the implementors would be
aware of the need for secure random numbers.

Hal Finney



Re: questions on RFC2631 and DH key agreement

2008-02-06 Thread Hal Finney
Joseph Ashwood writes, regarding unauthenticated DH:
 I would actually recommend sending all the public data. This does not take 
 significant additional space and allows more verification to be performed. I 
 would also suggest looking at what exactly the goal is. As written this 
 provides no authentication just privacy, and if b uses the same private key 
 to generate multiple yb the value of b will slowly leak.

I'm not familiar with this last claim, that the value of b's private key
(presuming that is what you mean) would slowly leak if it were reused for
many DH exchanges. Can you explain what you mean? Are you talking about
Lim-Lee style attacks where the recipient does not check the parameters
for validity? In that case I would say the private exponent would leak
quickly rather than slowly. But if the parameters are checked, I don't
see how that would leak a reused exponent.

 You can then use the gpb trio for DSA, leveraging the key set for more 
 capabilities.

Presuming here you mean (g,p,q) as suitable for reuse. This raises the
question, is the same set of (g,p,q) parameters suitable for use in both
DH exchange and DSA signatures?

From the security engineering perspective, I'd suggest that the goals and
threat models for encryption vs signatures are different enough that one
would prefer different parameters for the two. For DSA signatures, we'd
like small subgroups, since the subgroup size determines the signature
size. This constraint is not present with DH encryption, where a large
subgroup will work as well as a small one. Large subgroups can then
support larger private exponents in the DH exchange.

Now it may be argued that large subgroups do not actually increase
security in the DH exchange, because index calculus methods are
independent of subgroup size. In fact, parameters for DSA signatures
are typically chosen so that subgroup based methods such as Shanks that
take sqrt(q) cost are balanced against estimates of index calculus
work to break p. However, this balancing is inherently uncertain and
it's possible that p-based attacks will turn out to be harder than ones
based on q. Hence one would prefer to use a larger q to provide a margin
of safety if the costs are not too high. While there is a computational
cost to using a larger subgroup for DH exchange, there is no data cost,
while for DSA there are both computational and data costs. Therefore the
tradeoffs for DH would tend to be different than for DSA, and a larger
q would be preferred for DH, all else equal. In fact it is rather common
in DH parameter sets to use Sophie-Germain primes for q.

We may also consider that breaking encryption keys is a passive
attack which can be mounted over a larger period of time (potentially
providing useful information even years after the keys were retired)
and is largely undetectable; while breaking signatures, to be useful,
must be performed actively, carries risks of detection, and must be
completed within a limited time frame. All these considerations motivate
using larger parameter sets for DH encryption than for DSA signatures.

Hal Finney



Re: Poor password management may have led to bank meltdown

2008-02-06 Thread Jon Callas


On Feb 4, 2008, at 1:55 PM, Arshad Noor wrote:


Do business people get it?  Do security professionals get it?
Apparently not.

Arshad Noor
StrongAuth, Inc.

Huge losses reported by Société Générale were apparently enabled
by forgotten low-level IT chores such as password management.

http://www.infoworld.com/article/08/02/04/Poor-password-management-may-have-led-to-bank-meltdown_1.html


Yes, but get what? "It" is a vague noun.

The reporter showed some wit by using the word "may".

This was an attack by an evil (or crazy) insider. Evil insider attacks  
are the hardest to protect against. If the insider decided that he was  
going to start making trades for whatever reason, then he'd find a  
weak point that would allow him to make trades, and use it, no matter  
what it is. (My personal hypothesis is a variant of a mad-scientist  
attacker -- "They laughed at me when I told them my trading theories!  
Laughed! But I'll show them! I'll show them ALL!!!")


If this person had had to work 1000 hours to get around a hardware  
token, he would have just done the work, and the resulting loss might  
have been an order of magnitude larger. High-security procedures tend  
to be more brittle for  
psychological reasons. If you have the magic dingus, then you are  
authorized, and no one ever questions the dingus.


Also, one must look at the economics and psychology of the situation.  
Traders are prima-donna adrenaline junkies who trade vast sums of  
money all the time and are not shy about expressing their  
frustrations. Looking at the sheer economics first:


* A trader trades C units of currency every hour, with an average  
profit of P (for example 5% profit is P=1.05).


* There are T traders in the organization.

* The extra authentication produces a productivity drop of D. For  
example, let us suppose a trader has to authenticate once per hour,  
and it takes 10 seconds to authenticate. This gives us a D of .9972 or  
3590/3600.


So the operational cost of your authentication is (1-D)*T*C*P per  
hour. Divide €4.9G by that, and you get the number of hours for the  
raw break-even time on this.


Add to this the probability that the hassle will convince a trader to  
jump ship to another firm (J), times the number hours of trading lost  
until you find a replacement (H). We'll assume the replacement needs  
no spinup time to become as productive as the previous trader. That's  
an additional cost of J*H*T*C*P. This is the psychological factor. As  
I said, traders are prima donnas who are used to getting their own way.
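Jon's model can be sketched directly. The only inputs taken from the thread are the EUR 4.9G loss, the P = 1.05 example, and the D = 3590/3600 drag; the trader count and hourly volume below are invented for illustration:

```python
def breakeven_hours(loss, C, P, T, D, J=0.0, H=0.0):
    """Hours of trading before the control's cost equals the loss it
    is meant to prevent, per the model in the message above."""
    # Productivity drag (1-D)*T*C*P plus the attrition term J*H*T*C*P.
    hourly_cost = (1 - D) * T * C * P + J * H * T * C * P
    return loss / hourly_cost

# Illustrative (made-up) inputs: 100 traders each moving EUR 10M/hour.
hours = breakeven_hours(loss=4.9e9, C=10e6, P=1.05, T=100, D=3590/3600)
print(round(hours))   # 1680 hours to break even under these assumptions
```

With these particular numbers, the 10-seconds-per-hour control pays for itself in well under a year of trading; the point of the model is that the answer swings wildly with the assumptions.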


People have criticized post-9/11 airline security on similar grounds.  
They observe that some number of people drive rather than fly, and  
calculate out the difference in deaths-per-passenger-mile. I've seen  
numbers that work out to a handful of 9/11s per year caused by traffic  
displacement. They also observe that large numbers of people spend  
extra time in lines, which works out to a lost life number. For  
example, if you assume that passengers spend 10 extra minutes clearing  
security and a life is 70 years, then roughly 6 million passengers  
represents one lost life.
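Checking that arithmetic under the stated assumptions (a 70-year life, 10 minutes per passenger) gives a figure of the same order of magnitude as the one quoted:

```python
# Lost-life arithmetic under the assumptions just stated.
minutes_per_life = 70 * 365.25 * 24 * 60   # about 36.8 million minutes
delay_per_passenger = 10                   # minutes spent clearing security
passengers_per_life = minutes_per_life / delay_per_passenger
print(round(passengers_per_life))          # 3681720, i.e. roughly 3.7 million
```

Counting every minute of the 70 years gives a bit under four million passengers per statistical life; "roughly 6 million" is the same back-of-the-envelope with different rounding or discounting assumptions.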


There's always much to criticize in these models. I could write a  
reply to this message with criticisms, and so can you. Nonetheless,  
the models show that there's more than just the raw security to think  
about.


Jon



Re: Gutmann Soundwave Therapy

2008-02-06 Thread Eric Rescorla
At Mon, 04 Feb 2008 14:29:50 +1000,
James A. Donald wrote:
 
 James A. Donald wrote:
   I have figured out a solution, which I may post here
   if you are interested.
 
 Ian G wrote:
   I'm interested.  FTR, zooko and I worked on part of
   the problem, documented briefly here:
   http://www.webfunds.org/guide/sdp/index.html
 
 I have posted How to do VPNs right at
 http://jim.com/security/how_to_do_VPNs.html
 
 It covers somewhat different ground to that which your
 page covers, focusing primarily on the problem of
 establishing the connection.
 
   humans are not going to carry around large
   strong secrets every time either end of the
   connection restarts.  In fact they are not going
   to transport large strong secrets any time ever,
   which is the flaw in SSL and its successors such
   as IPSec and DTLS

This paragraph sure is confused.

1. IPsec most certainly is not a successor to SSL. On
   the contrary, IPsec predates SSL.

2. TLS doesn't require you to carry around strong secrets.
   I refer you to TLS-SRP [RFC 5054].

3. For that matter, even if you ignore SRP, TLS supports
   usage models which never require you to carry around
   strong secrets: you preconfigure the server's public
   key and send a password over the TLS channel. Since
   this is the interface SSH uses, the claim that humans
   won't do it is manifestly untrue.
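Point 3 can be sketched as follows. The names and the fingerprint-pinning shape are illustrative only, not any particular TLS stack's API; the point is that the client needs to hold only a hash of the server's key, never a strong shared secret:

```python
import hashlib

def fingerprint(server_public_key: bytes) -> str:
    """SHA-256 fingerprint of the server's (hypothetical) public key."""
    return hashlib.sha256(server_public_key).hexdigest()

def connect_and_login(presented_key: bytes, pinned: str, password: str) -> bool:
    """Send the password only if the presented key matches the pin."""
    if fingerprint(presented_key) != pinned:
        return False  # possible MITM: abort before revealing the password
    # A real client would now send `password` over the TLS channel it
    # just authenticated; here we only signal that login would proceed.
    return True

# One-time setup: the client preconfigures the server's key fingerprint.
server_key = b"hypothetical server public key bytes"
pin = fingerprint(server_key)

print(connect_and_login(server_key, pin, "hunter2"))         # True
print(connect_and_login(b"some other key", pin, "hunter2"))  # False
```

This is exactly the trust-on-first-use pattern SSH users follow every day, which is Ekr's point.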


-Ekr



Re: Poor password management may have led to bank meltdown

2008-02-06 Thread Arshad Noor

It is a number of things that I will elucidate, Jon; but it is
definitely not "raw security".

It is:

* a recognition that a company in business using other people's
  money has a fiduciary responsibility for managing it with prudence;
* an awareness that computerized trading has the potential to
  dramatically reduce visibility of those who have the responsibility
  to protect shareholders' and customers' assets;
* an understanding that computers and networks are far less safe than
  they were 30 years ago when they operated from glass houses;
* knowledge of the debacles at LTCM, Enron, Global Crossing, Barings,
  Adelphia, etc. and how a lack of controls destroyed so many human
  lives (literally, financially and psychologically);
* an appreciation that a failure of controls designed to protect
  financial markets can lead to losses of confidence, market-runs,
  depressions and potentially, social upheaval;
* an acknowledgment that while it is impossible to stop a determined
  rogue trader, trading systems can be easily programmed to trigger
  alerts to higher and higher levels of management as trades exceed
  preset limits, so they may exercise overriding controls on the
  trades if needed;

This is it; if business people truly got this, we wouldn't see what
we're seeing in the marketplace today.
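The escalation idea in the last bullet can be sketched in a few lines. The names and limits below are invented; the shape is simply "every threshold a trade exceeds alerts one more level of management":

```python
# Hypothetical escalation ladder (names and limits invented).
ESCALATION = [
    (1_000_000,   "desk head"),
    (10_000_000,  "risk officer"),
    (100_000_000, "CFO"),
]

def alerts_for(trade_notional):
    """Everyone whose preset limit the trade exceeds gets an alert."""
    return [who for limit, who in ESCALATION if trade_notional > limit]

print(alerts_for(50_000_000))   # ['desk head', 'risk officer']
```

Such a check adds no friction for the trader at all, which is what distinguishes it from the per-hour authentication cost in Jon's model.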

You have defined some very clever formulae showing the opportunity
cost of using too much security and would have us believe that
decision-makers at such companies actually do something like this when
making decisions on how much risk-mitigation to put in place.

If they were endowed with so much intelligence, I would argue that they
might also have calculated the probability of a rogue within the ranks,
the probability of losses resulting from rogue-trades, the probability
of a loss of confidence in the company, the resulting opportunity cost
of lost business,  the increased cost of implementing new controls
across the industry (and the opportunity cost of those investments),
the resulting opportunity cost of lost economic value as people pull
back from financial markets, the resulting opportunity cost of
legitimate companies being unable to raise capital in markets to invent
that new life-saving drug or the new carbon-free energy source or ...
you get the picture.


I would, but I won't because you and I know they do nothing like this
when making these security decisions.  It is mostly a gut feeling,
made-up ROI numbers that are mostly meaningless, what the rest of the
lemmings are doing in the industry, what the press is screaming about
this year and who just got burned and for what.

One hopes that as society evolves, with better levels of education,
better tools, technologies and standards of living, we would recognize
the need to invest ounces of prevention to avoid the pounds of cure.
Sadly, I find that the Las Vegas mentality has permeated businesses
to the point that we're taking bigger and bigger risks without really
doing the analysis - going on just gut feel - resulting in situations
like at Société Générale.

Arshad Noor
StrongAuth, Inc.



Re: Gutmann Soundwave Therapy

2008-02-06 Thread Bill Frantz
[EMAIL PROTECTED] (Peter Gutmann) on Monday, February 4, 2008 wrote:

Eric Rescorla [EMAIL PROTECTED] writes:

I don't propose to get into an extended debate about whether it is better to
use SRTP or to use generic DTLS. That debate has already happened in IETF and
SRTP is what the VoIP vendors are doing. However, the good news here is that
you can use DTLS to key SRTP (draft-ietf-avt-dtls-srtp), so there's no need
to invent a new key management scheme.

Hmm, given this X-to-key-Y pattern (your DTLS-for-SRTP example, as well as
OpenVPN using ESP with TLS keying), I wonder if it's worth unbundling the key
exchange from the transport?  At the moment there's (at least):

  TLS-keying --+-- TLS transport
               |
               +-- DTLS transport
               |
               +-- IPsec (ESP) transport
               |
               +-- SRTP transport
               |
               +-- Heck, SSH transport if you really want

Is the TLS handshake the universal impedance-matcher of secure-session
mechanisms?

If there had been a separation between the key exchange and
validation part of SSL (early TLS) and the transport part, the E
language protocol[1] almost certainly would have used the transport
part of the protocol.  The reasons at the time for not using SSL are
described in [2].  They are all associated with the connection and
cryptographic setup.

Simplified overview:

When an E program needs to contact a remote E program, it starts
with a hash of the other program's public key and a large random
number, the Swiss number.  It gets the IP and port of the remote
program from a well-known network service called the Process Location
Service.  It then contacts that IP and port, sends its public key,
receives the remote public key, performs a Diffie Hellman exchange
for forward secrecy, checks the hash of the remote public key, and
sends a signature over the exchange.  It checks the remote program's
signature over the exchange, and if all the checks pass, sends the
encrypted Swiss number to identify the specific remote resource.
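A toy sketch (mine, not the actual VatTP code) of the self-authenticating check at the heart of this exchange: the client accepts a key because its hash matches the designator it already holds, not because a CA vouches for it. Keys are faked as random bytes; the DH exchange and signatures are elided:

```python
import hashlib, hmac, os

def key_hash(pubkey: bytes) -> bytes:
    return hashlib.sha256(pubkey).digest()

class VatStub:
    """Stand-in for a remote E vat: a keypair (faked as random bytes)
    and a table mapping Swiss numbers to hosted resources."""
    def __init__(self):
        self.pubkey = os.urandom(32)   # placeholder for a real public key
        self.swiss_table = {}

    def register(self, resource):
        swiss = os.urandom(20)         # the large random Swiss number
        self.swiss_table[swiss] = resource
        # A reference to the resource is (hash of vat key, Swiss number).
        return key_hash(self.pubkey), swiss

def connect(expected_key_hash: bytes, presented_pubkey: bytes) -> bool:
    # Self-authentication: compare the presented key against the hash
    # the caller already holds; no CA or X.509 chain is involved.
    if not hmac.compare_digest(key_hash(presented_pubkey), expected_key_hash):
        raise ConnectionError("key hash mismatch: wrong vat or an attacker")
    # The real protocol continues: Diffie-Hellman for forward secrecy,
    # signatures over the exchange, then the encrypted Swiss number.
    return True

vat = VatStub()
khash, swiss = vat.register("my-object")
print(connect(khash, vat.pubkey))   # True
```

The hash-of-key designator is what a certificate-oriented handshake has no natural slot for, which is Bill's point about X.509.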

I couldn't see any way to take this self-authenticating key exchange
and jam it into an X.509 structure.  Perhaps I wasn't inventive
enough, but I ended up rolling my own transport protocol, at certain
extra cost in development and testing, and a significant risk of
security errors.

Cheers - Bill

[1] http://www.erights.org/elib/distrib/vattp/index.html

[2] http://www.erights.org/elib/distrib/vattp/SSLvsDataComm.html

---
Bill Frantz        | gets() remains as a monument | Periwinkle
(408)356-8506      | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns.             | Los Gatos, CA 95032
