Re: How effective is open source crypto? (aads addenda)

2003-03-24 Thread Anne & Lynn Wheeler
we did something similar for AADS PPP Radius
http://www.garlic.com/~lynn/index.html#aads
AADS radius example
http://www.asuretee.com/
... with FIPS186-2, x9.62, ecdsa digital signature authentication on
sourceforge
http://ecdsainterface.sourceforge.net/
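
As a rough illustration of the primitive (a sketch only ... this uses
python's "cryptography" package and the NIST P-256 curve, which are my
choices for illustration, not anything taken from the sourceforge code):

# illustrative only: X9.62 / FIPS 186-2 style ECDSA sign/verify
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

signing_key = ec.generate_private_key(ec.SECP256R1())   # client signing key
verify_key = signing_key.public_key()                   # registered with the server

message = b"radius replay challenge || client data"     # placeholder message
signature = signing_key.sign(message, ec.ECDSA(hashes.SHA256()))

try:
    verify_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature verifies")
except InvalidSignature:
    print("signature rejected")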

the radius digital signature protocol has a replay challenge.

so add a radius option to the webserver client authentication stub (the
infrastructure can then share common client authentication administration
across all of its environments). the client clicks on https client
authentication, generates a secret random key, encrypts the client
authentication request with the random key, encrypts the random key with
the server public key, and sends off a single transmission. The server
responds with a radius connect request ... which includes a replay
challenge value as part of the message (encrypted with the random key).
The client responds with a digital signature on the server radius message
(and some of its own data, encrypted with the random key).
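
A minimal sketch of the client side of that opening transmission (python;
the RSA-OAEP key wrap and the Fernet/AES session key are assumptions added
for illustration ... the post doesn't pin down those algorithms):

# client side: encrypt the client authentication request under a fresh
# random key, wrap that random key under the server public key, and send
# both in a single transmission.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def build_opening_message(server_public_key_pem, radius_auth_request):
    server_pub = serialization.load_pem_public_key(server_public_key_pem)

    session_key = Fernet.generate_key()              # fresh random secret key
    encrypted_request = Fernet(session_key).encrypt(radius_auth_request)

    wrapped_key = server_pub.encrypt(                # only the server can unwrap
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))

    # send (wrapped_key, encrypted_request) as the single transmission; keep
    # session_key locally to decrypt the server's replay challenge
    return wrapped_key, encrypted_request, session_key

The server's replay challenge and the client's signed response then travel
under the same session key (a matching server-side sketch follows the
numbered list below).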

Basically this uses the same packet sequence as a transaction w/o a replay
challenge ... since the higher level protocol contains the replay challenge.
Then the same packet sequence can be used for webserver TLS and encrypted
PPP (which works as a VPN; it possibly can also be defined as encrypted
TCP) ... along with the same client authentication infrastructure.

An infrastructure can use the same administrative (RADIUS) infrastructure
for all client authentication ... say an enterprise with both extranet
connections and a webserver ... or an ISP that also supplies webhosting.
The same administrative operation can be used to support client
authentication at the PPP level as well as at the webserver level.

The same packet exchange sequence is used both for PPP level encryption
with client authentication and for TLS webserver level encryption with
client authentication.

The higher level application can decide whether it already has sufficient 
replay/repeat resistance or request replay/repeat resistance from lower 
level protocol.

So regardless of TLS, PPP, or TCP, client authentication works as follows
(using the same packet sequence as a transaction, w/o a lower level replay
challenge):

1) client picks up server public key and encryption options (from cache or DNS)

2) client sends off the radius client authentication request, encrypted
with a random secret key, with the random secret key encrypted under the
server public key ...

3) server lower level protocol handles the decryption of the random secret
key and the decryption of the client request (which happens to be radius
client authentication ... but could be any other kind of transaction
request) and passes up the decrypted client request

4) server higher level protocol (radius client authentication) responds 
with radius replay challenge

5) client gets the replay challenge, adds some stuff, digitally signs it 
and responds

6) server higher level radius client authentication protocol verifies the
digital signature and completes the authentication (see the server-side
sketch below)
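
A matching server-side sketch of steps 3-6 (same assumed algorithms as the
client sketch above: RSA-OAEP key wrap, Fernet/AES session key, P-256 ECDSA
for the client signature):

# steps 3-6: lower level unwraps the session key and decrypts the request,
# higher level (radius) issues a replay challenge, then verifies the
# client's ECDSA signature over (challenge || client data).
import os
from cryptography.exceptions import InvalidSignature
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding

def handle_opening_message(server_private_key, wrapped_key, encrypted_request):
    # step 3: recover the random secret key and the decrypted client request
    session_key = server_private_key.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    session = Fernet(session_key)
    radius_request = session.decrypt(encrypted_request)  # passed up the stack

    # step 4: higher level responds with a replay challenge, returned to the
    # client encrypted under the same session key
    challenge = os.urandom(16)
    return radius_request, session, challenge, session.encrypt(challenge)

def verify_client_response(challenge, client_data, signature, client_verify_key):
    # steps 5-6: the client signed (challenge || its own data) with the key
    # registered in the radius administration; verify against that key
    try:
        client_verify_key.verify(signature, challenge + client_data,
                                 ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False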

Same server public key initial connect code works at TLS, PPP, and possibly 
TCP protocol levels. The same server public key initial connect code 
supports both lower-level replay challenge and no replay challenge.

Same radius client authentication works at TLS, PPP, and possibly TCP
protocol levels. The same client administrative processes work across the
whole environment.

aka ... the radius client authentication protocol is just another example
(like the purchase order example) of the higher level protocol having its
own replay/repeat handling infrastructure (whether it is something like log
checking or its own replay challenge).

--
Anne & Lynn Wheeler   http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm



Re: How effective is open source crypto? (bad form)

2003-03-24 Thread Eric Rescorla
Anne & Lynn Wheeler <[EMAIL PROTECTED]> writes:
> The difference is basic two packet exchange (within setup/teardown
> packet exchange overhead) plus an additional replay prevention two
> packet exchange (if the higher level protocol doesn't have its own
> repeat handling protocol). The decision as to whether it is two packet
> exchange or four packet exchange is not made by client ... nor the
> server ... but by the server application.
You've already missed the point. SSL/TLS is a generic security
protocol. As such, the idea is to push all the security into the
protocol layer where possible. Since, as I noted, the performance
improvement achieved by not doing so is minimal, it's better to just
have replay protection here.

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: How effective is open source crypto? (bad form)

2003-03-24 Thread Anne & Lynn Wheeler
At 09:30 AM 3/16/2003 -0800, Eric Rescorla wrote:

Correct.

It's considered bad form to design systems which have known replay
attacks when it's just as easy to design systems which don't.
If there were some overriding reason why it was impractical
to mount a defense, then it might be worth living with a replay
attack. However, since it would have only a very minimal effect
on offered load to the network and--in most cases--only a marginal
effect on latency, it's not worth doing.
-Ekr

--
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/

so, let's look at the alternatives for servers that are worried about
server replay attacks:

the client has the server public key & preferred-crypto info (dns or
cached), generates a random secret key, encrypts the request, encrypts the
random secret key, and sends a single transmission

the server gets the request ... the application has opened the connection
with or w/o the server replay-attack check. if the application (higher
level protocol) has its own repeat checking ... it has opened the
connection w/o the replay-attack check, and the server sends the request up
the stack to the application. If the application has opened the connection
with the replay-attack check, the protocol sends back some random data (aka
its own secret) ... that happens to be encrypted with the random key.

The client is expecting either the actual response or the replay attack
check. If the client gets the actual response, everything is done. If the
client gets back the replay attack check ... it combines it with
something ... and returns that to the server.

The difference is basic two packet exchange (within setup/teardown packet 
exchange overhead) plus an additional replay prevention two packet exchange 
(if the higher level protocol doesn't have its own repeat handling 
protocol). The decision as to whether it is two packet exchange or four 
packet exchange is not made by client ... nor the server ... but by the 
server application.

A simple example for e-commerce is sending a P.O. along with payment
authorization ... the transmitted P.O. form is guaranteed to have a unique
identifier. The P.O. processing system has logic for handling repeat POs
... for numerous reasons (not limited to replay attacks).
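
A minimal sketch of that kind of application-level repeat handling (the
field names and the fulfill() step are hypothetical stand-ins):

# every P.O. carries a unique identifier; a repeated identifier (whether
# from a replay attack, a network retry, or an upstream glitch) just gets
# the original outcome back instead of being processed again.
def fulfill(po_body):               # stand-in for the real back-end processing
    return "accepted"

processed = {}                      # po_id -> result of first processing

def process_purchase_order(po_id, po_body):
    if po_id in processed:
        return processed[po_id]     # duplicate: don't ship/charge twice
    result = fulfill(po_body)
    processed[po_id] = result
    return result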

Single round-trip transaction:

ClientHello/Trans   ->
                    <-  ServerResponse/Finish

Transaction w/replay challenge:

ClientHello/Trans   ->
                    <-  Server replay challenge
ClientResp          ->
                    <-  ServerResponse/Finish

Now, ClientHello/Trans can indicate whether the client is expecting a
single round-trip or additional data.

Also, the ServerResponse can indicate whether it is a piggy-backed finish
or not.
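
A sketch of that server-side decision (flag and function names are invented
for illustration ... the point is just that the server application, not the
transport, picks the two-packet or the four-packet exchange):

# the server application declares at connection-open time whether it
# already has its own repeat handling (e.g. unique P.O. ids) or wants the
# lower level to run a replay challenge first.
import os

def handle_request(decrypted_request):   # stand-in for the real application
    return b"ok"

def first_server_packet(app_has_repeat_handling, decrypted_request):
    if app_has_repeat_handling:
        # two-packet exchange: process now, piggy-back the finish
        return {"type": "ServerResponse/Finish",
                "body": handle_request(decrypted_request)}
    # four-packet exchange: hold the request, send the replay challenge first
    return {"type": "ServerChallenge", "challenge": os.urandom(16)}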

So, the vulnerability analysis is: what is the object of the replay attack,
and what needs to be protected? I would contend that the object of the
replay attack isn't directly the protocol, server, or the system ... but
the specific server application. The problem, of course, is that with a
generic webserver (making the connection) there might be a couple levels of
indirection between the webserver specifying the connection parameters and
the actual server application (leading to webservers always specifying the
replay challenge option).
--
Anne & Lynn Wheeler   http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm


Re: How effective is open source crypto? (addenda)

2003-03-24 Thread Anne & Lynn Wheeler
... small side-note ... part of the x9.59 work for all payments in all
environments ... was that the transaction system needed to be resilient to
repeats and be done in a single round-trip (as opposed to the transport).

the transaction needed to stay resilient in a single round trip even over
something like email, which might not happen in strictly real-time
(extremely long round-trip delays).

Real-world systems have been known to have glitches ... order/transaction
generation that accidentally repeats (regardless of whether or not the
transport is catching replay attacks).
--
Anne & Lynn Wheeler   http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm



Re: How effective is open source crypto?

2003-03-24 Thread Eric Rescorla
Anne & Lynn Wheeler <[EMAIL PROTECTED]> writes:
> At 08:40 AM 3/16/2003 -0800, Eric Rescorla wrote:
> 
> Sorry, there were two pieces being discussed.
>
> The part about SSL being a burden/load on servers ...
> 
> and the shortened SSL description taken from another discussion.
This wasn't clear from your message.

> The
> shortened SSL description was (in fact) from a discussion of the
> round-trips and latency ... not particularly the burden on the server. In
> the original discussion there was mention that HTTP requires TCP
> setup/teardown, which is a minimum seven packet exchange ...
TCP setup is 3 packets. The teardown doesn't have any effect whatsoever
on the performance of the system (and often isn't done anyway).
It's a very modest load on the network and one which is far
outstripped by the traffic sent by SSL and HTTP.

> So what kind of replay attack is there? Looking at purely e-commerce
> ... there is no client authentication. Also, since the client always
> chooses a new, random key ... there is no replay attack on the client
> ... since the client always sends something new (random key) every
> time. That just leaves replay attacks on the server (repeatedly
> sending the same encrypted data).
Correct.

It's considered bad form to design systems which have known replay
attacks when it's just as easy to design systems which don't.
If there were some overriding reason why it was impractical
to mount a defense, then it might be worth living with a replay
attack. However, since it would have only a very minimal effect
on offered load to the network and--in most cases--only a marginal
effect on latency, it's not worth doing.

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: How effective is open source crypto?

2003-03-24 Thread Anne & Lynn Wheeler
At 08:40 AM 3/16/2003 -0800, Eric Rescorla wrote:
You still need a round trip in order to prevent replay attacks. The
fastest that things can be while still preserving the security
properties of TLS is:
ClientHello   ->
ClientKeyExchange ->
Finished  ->
  <-  ServerHello
  <-  Finished
Data  ->
See Boneh and Shacham's "Fast-Track SSL" paper in Proc. ISOC NDSS 2002
for a description of a scheme where the client caches the server's
parameters for future use, which is essentially isomorphic
to having the keys in the DNS as far as the SSL portion goes.
In any case, the optimization you describe provides almost no
performance improvement for the server because the load on the server
derives almost entirely from the cryptography, not from transmitting
the ServerHello [0]. What it does is provide reduced latency,
but this is only of interest to the client, not the server,
and really only matters on very constrained links.
-Ekr

[0] With the exception of the ephemeral modes, but they're simply
impossible in the scheme you describe.

Sorry, there were two pieces being discussed.

The part about SSL being a burden/load on servers ...

and the shortened SSL description taken from another discussion. The
shortened SSL description was (in fact) from a discussion of the
round-trips and latency ... not particularly the burden on the server. In
the original discussion there was mention that HTTP requires TCP
setup/teardown, which is a minimum seven packet exchange ... and any HTTPS
chatter is in addition to that. VMTP, from rfc1045, is minimum five packet
exchange, and XTP is minimum three packet exchange. A cached/dns SSL is
still minimum seven packet exchange done over TCP (although XTP would
reduce that to three packet exchange).

So what kind of replay attack is there? Looking at purely e-commerce ...
there is no client authentication. Also, since the client always chooses a
new, random key ... there is no replay attack on the client ... since the
client always sends something new (random key) every time. That just leaves
replay attacks on the server (repeatedly sending the same encrypted data).

As a follow-up to doing the original e-commerce stuff ... we then went on
to look at existing vulnerabilities and solutions ... and (at least) the
payment system already has other methods in place for handling duplicate
transactions ... aka the standards-body work for all payments (credit,
debit, stored-value, etc) in all (electronic) environments (internet,
point-of-sale, self-serve, face-to-face, etc), X9.59:
http://www.garlic.com/~lynn/index.html#x959 (standard)
http://www.garlic.com/~lynn/index.html#aadsnacha (debit/atm network pilot)

Replay of simple information retrieval isn't particularly serious except
as DOS ... but serious DOS can be done whether the flooding is done with
encrypted packets or non-encrypted packets. Another replay attack is
transaction-based ... where each transaction represents something like
performing a real world transaction (send a shirt and debit an account). If
it actually involves payment ... the payment infrastructure has provisions
in place to handle repeat/replay and will reject. So primarily what is left
... are simple transaction-oriented infrastructures that don't have their
own mechanism for detecting replay/repeats and are relying on SSL.

I would also contend that this is a significantly smaller exposure than
self-signed certificates.



--
Anne & Lynn Wheeler   http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm



Re: How effective is open source crypto?

2003-03-24 Thread Eric Rescorla
Anne & Lynn Wheeler <[EMAIL PROTECTED]> writes:
> There is a description of doing an SSL transaction in single round
> trip. The browser contacts the domain name system and gets back in
> single transmission the 1) public key, 2) preferred server SSL
> parameters, 3) ip-address. The browser selects the SSL parameters,
> generates a random secret key, encrypts the HTTP request with the
> random secret key, encrypts the random secret key with the public key
> ... and sends off the whole thing in a single transmission
> ... eliminating all of the SSL protocol back&forth setup chatter.
You still need a round trip in order to prevent replay attacks. The
fastest that things can be while still preserving the security
properties of TLS is:

ClientHello   -> 
ClientKeyExchange ->
Finished  ->
  <-  ServerHello
  <-  Finished
Data  ->

See Boneh and Shacham's "Fast-Track SSL" paper in Proc. ISOC NDSS 2002
for a description of a scheme where the client caches the server's
parameters for future use, which is essentially isomorphic
to having the keys in the DNS as far as the SSL portion goes.

In any case, the optimization you describe provides almost no
performance improvement for the server because the load on the server
derives almost entirely from the cryptography, not from transmitting
the ServerHello [0]. What it does is provide reduced latency,
but this is only of interest to the client, not the server,
and really only matters on very constrained links.

-Ekr

[0] With the exception of the ephemeral modes, but they're simply
impossible in the scheme you describe.


-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: How effective is open source crypto?

2003-03-16 Thread Anne & Lynn Wheeler
having worked on some of the early e-commerce/certificate stuff ... recent ref:
http://www.garlic.com/~lynn/aadsm13.htm#25 Certificate Policies (addenda)
the assertion is that the basic ssl domain name certificate is so that the
browser can check the domain name from the url typed in against the domain
name from the presented (trusted) certificate ... and have some confidence
that the browser is really talking to the server that it thinks it is
talking to (based on some trust in the issuing certification authority). in
that context ... a self-signed certificate is somewhat superfluous ... if
you trust the site to be who they claim to be ... then you shouldn't even
have to bother to check. that eliminates having to have a certificate at
all ... just transmit a public key.

so a slight step up from MITM-attacks with self-signed certificates would
be to register your public key at the same time you register the domain.
the browser gets the server's public key from dns at the same time it gets
the ip-address (dns already supports binding generalized information to a
domain ... more than the simple ip-address). this is my long, repetitive
argument about ssl domain name certificates:
http://www.garlic.com/~lynn/subpubkey.html#sslcerts

i believe a lot of the non-commercial sites have forgone SSL certificates
... because of the cost and bother.

some number of the commercial sites that utilize SSL certificates ... only
do it as part of a financial transaction (and lots of them ... when it is
time to "check-out" ... actually transfer to a 3rd party service site that
specializes in SSL encryption and payments). The claim by many for some
time ... is that given the same exact hardware ... they can do 5-6 times
as many non-SSL (non-encrypted) HTTP transactions as they can do SSL
(encrypted) HTTPS transactions ... aka they claim roughly an 80 to 83
percent hit to the number of transactions that can be done switching from
HTTP to HTTPS (1/5th to 1/6th the throughput).

a short version of the SSL server domain name certificate is the worry
about attacks on the domain name infrastructure that can route somebody to
a different server. so the SSL certificate is checked to see if the browser
is likely talking to the server they think they are talking to. the problem
is that if somebody applies for an SSL server domain name certificate ...
the CA (certification authority) has to check with the authoritative agency
for domain names ... to validate the applicant's domain name ownership. The
authoritative agency for domain names is the domain name infrastructure
that has all the integrity concerns giving rise to the need for SSL domain
name certificates. So there is a proposal for improving the integrity of
the domain name infrastructure (in part backed by the CA industry ... since
the CA industry is dependent on the integrity of the domain name
infrastructure for the integrity of its certificates) which includes
somebody registering a public key at the same time as the domain name. So
we are in a catch-22 ...

1) improving the overall integrity of the domain name infrastructure 
mitigates a lot of the justification for having SSL domain name 
certificates (sort of a catch-22 for the CA industry).

2) registering a public key at the same time as the domain name ...
implies that the public key can be served up from the domain name
infrastructure (at the same time as the ip-address ... eliminating all
need for certificates).

There is a description of doing an SSL transaction in single round trip. 
The browser contacts the domain name system and gets back in single 
transmission the 1) public key, 2) preferred server SSL parameters, 3) 
ip-address. The browser selects the SSL parameters, generates a random 
secret key, encrypts the HTTP request with the random secret key, encrypts 
the random secret key with the public key ... and sends off the whole thing 
in a single transmission ... eliminating all of the SSL protocol
back&forth setup chatter. The browser had to contact the domain name system 
in any case to get the ip-address ... the change allows the browser to get
back the rest of the information in the same transmission.
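
A sketch of that single-round-trip flow (lookup_dns_bundle() and
send_and_receive() are hypothetical stand-ins for the DNS answer and the
transport, and the RSA-OAEP/Fernet algorithm choices are assumptions ...
the description above only says "public key" and "random secret key"):

# one DNS response supplies ip-address, server public key, and preferred
# crypto parameters; the browser then sends one encrypted transmission.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def single_round_trip(hostname, http_request, lookup_dns_bundle, send_and_receive):
    # hypothetical: one lookup returns (ip, server public key pem, params)
    ip_addr, server_pub_pem, params = lookup_dns_bundle(hostname)
    server_pub = serialization.load_pem_public_key(server_pub_pem)

    session_key = Fernet.generate_key()          # fresh random secret key
    session = Fernet(session_key)
    wrapped_key = server_pub.encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))

    # one transmission out, one response back (real use needs length framing)
    reply = send_and_receive(ip_addr, params, wrapped_key,
                             session.encrypt(http_request))
    return session.decrypt(reply)                # server reply under same key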



--
Anne & Lynn Wheeler   http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
