ACH fraud

2008-09-01 Thread Perry E. Metzger

Several people have sent in a link to a New York Times story on ACH fraud:

http://www.nytimes.com/2008/08/30/business/yourmoney/30theft.html

Perry
-- 
Perry E. Metzger  [EMAIL PROTECTED]



Re: [OpenID] rfc2817: https vs http

2008-09-01 Thread Ben Laurie
[Adding the cryptography list, since this seems of interest]

On Wed, Aug 27, 2008 at 8:58 PM, Story Henry [EMAIL PROTECTED] wrote:
 Apparently rfc2817 allows an http url to be used for https security.

 Given that Apache seems to have that implemented [1] and that the
 openid url is mostly used for server to server communication, would
 this be a way out of the http/https problem?

 I know that none of the browsers support it, but I suppose that if the
 client does not support this protocol, the server can redirect to the
 https url? This seems like it could be easier to implement than XRI.

 Disclaimer: I don't know much about rfc2817

This inspired a blog post: http://www.links.org/?p=382.

Recent events, and a post to the OpenID list, got me thinking.

 Apparently rfc2817 allows an http url to be used for https security.

 Given that Apache seems to have that implemented [1] and that the
 openid url is mostly used for server to server communication, would
 this be a way out of the http/https problem?

 I know that none of the browsers support it, but I suppose that if the
 client does not support this protocol, the server can redirect to the
 https url? This seems like it could be easier to implement than XRI.

 Disclaimer: I don't know much about rfc2817

 Henry


 [1] http://www.mail-archive.com/[EMAIL PROTECTED]/msg00251.html

The core issue is that HTTPS is used to establish end-to-end security,
meaning, in particular, authentication and secrecy. If the MitM can
disable the upgrade to HTTPS then he defeats this aim. The fact that
the server declines to serve an HTTP page is irrelevant: it is the
phisher that will be serving the HTTP page, and he will have no such
compunction.

The traditional fix is to have the client require HTTPS, which the
MitM is powerless to interfere with. Upgrades would work fine if the
HTTPS protocol said "connect on port 80, ask for an upgrade, and if
you don't get it, fail"; however, as it is, upgrades work at the behest
of the server, and therefore don't work.
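
For concreteness, a sketch of what a client-enforced upgrade might look
like (Python; "example.org", the single read, and the lack of real HTTP
parsing are all simplifications, and this is an illustration, not a
tested client):

    # Require the RFC 2817 upgrade: if the server (or a man in the middle)
    # does not grant it, refuse to talk in the clear at all.
    import socket
    import ssl

    def fetch_requiring_upgrade(host, path="/"):
        raw = socket.create_connection((host, 80))
        raw.sendall(("OPTIONS * HTTP/1.1\r\n"
                     f"Host: {host}\r\n"
                     "Upgrade: TLS/1.0\r\n"
                     "Connection: Upgrade\r\n\r\n").encode())
        reply = raw.recv(4096)            # simplification: assume one read suffices
        if not reply.startswith(b"HTTP/1.1 101"):
            raw.close()
            raise ConnectionError("upgrade refused; failing rather than falling back to HTTP")
        # Upgrade granted: run the TLS handshake over the same TCP connection.
        ctx = ssl.create_default_context()
        tls = ctx.wrap_socket(raw, server_hostname=host)
        tls.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\n"
                    "Connection: close\r\n\r\n".encode())
        return tls.recv(4096)

Probing with OPTIONS * rather than sending the real request with an
Upgrade header also keeps the request itself out of the clear.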

Of course, the client requires HTTPS because there was a link that
had a scheme of https. But why was that link followed? Because
there was an earlier page with a trusted link (we hope) that was
followed. (Note that this argument applies to both users clicking
links and OpenID servers following metadata).

If that page was served over HTTP, then we are screwed, obviously
(bearing in mind DNS cache attacks and weak PRNGs).

This leads to the inescapable conclusion that we should serve
everything over HTTPS (or other secure channels).

Why don't we? Cost. It takes far more tin to serve HTTPS than HTTP.
Even really serious modern processors can only handle a few thousand
new SSL sessions per second. New plaintext sessions can be dealt with
in their tens of thousands.

Perhaps we should focus on this problem: we need cheap end-to-end
encryption. HTTPS solves this problem partially through session
caching, but it can't easily be shared across protocols, and sessions
typically last on the order of five minutes, an insanely conservative
figure.
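
The client side of a longer-lived cache already exists in embryo in the
session-resumption machinery. A sketch in Python (3.7+), with
"www.example.org" as a stand-in host, of a client that keeps its session
for as long as it likes rather than for five minutes:

    import socket
    import ssl

    ctx = ssl.create_default_context()
    # Python's SSLSession reuse is a TLS 1.2 mechanism; TLS 1.3 tickets are
    # handled internally, so pin 1.2 purely for the sake of the demonstration.
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2

    cached_session = None   # client-chosen lifetime: could be hours or days

    def connect(host, port=443):
        global cached_session
        sock = ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host,
                               session=cached_session)
        cached_session = sock.session      # remember it for the next connection
        return sock

    conn = connect("www.example.org")      # full handshake: public-key work
    conn.close()
    conn = connect("www.example.org")      # resumed, if the server still has the state
    print("resumed:", conn.session_reused)

Nothing on the client side forces the five-minute figure; it is a
cache-tuning choice at both ends.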

What we need is something like HTTPS, shareable across protocols, with
caches that last at least hours, maybe days. And, for sites we have a
particular affinity with, an SSH-like pairing protocol (with less
public key crypto - i.e. more session sharing).

Having rehearsed this discussion many times, I know the next objection
will be DoS on the servers: a bad guy can require the server to spend
its life doing PK operations by pretending he has never connected
before. Fine, relegate PK operations to the slow queue. Regular users
will not be inconvenienced: they already have a session key.
Legitimate new users will have to wait a little longer for initial
load. Oh well.
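
A minimal sketch of that slow-queue arrangement, with the handshake work
passed in as callables rather than a real TLS stack (the queue names and
the 100-per-second cap are illustrative guesses):

    import queue
    import time

    fast_q = queue.Queue()   # resumption: symmetric-key work only
    slow_q = queue.Queue()   # full handshake: expensive public-key work

    def admit(conn, offers_resumption):
        (fast_q if offers_resumption else slow_q).put(conn)

    def fast_worker(resume):
        while True:
            resume(fast_q.get())              # regular users: no extra wait

    def slow_worker(full_handshake, max_per_second=100):
        while True:
            full_handshake(slow_q.get())      # legitimate new users wait a little longer
            time.sleep(1.0 / max_per_second)  # bound the rate of public-key operations

The sorting in admit() can be done by checking whether the ClientHello
carries a session ID or ticket, before any public-key work is committed.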


 Henry


 [1] http://www.mail-archive.com/[EMAIL PROTECTED]/msg00251.html


 http://www.ietf.org/rfc/rfc2817.txt
 Home page: http://bblfish.net/





Re: [OpenID] rfc2817: https vs http

2008-09-01 Thread Eric Rescorla
At Mon, 1 Sep 2008 21:00:55 +0100,
Ben Laurie wrote:
 The core issue is that HTTPS is used to establish end-to-end security,
 meaning, in particular, authentication and secrecy. If the MitM can
 disable the upgrade to HTTPS then he defeats this aim. The fact that
 the server declines to serve an HTTP page is irrelevant: it is the
 phisher that will be serving the HTTP page, and he will have no such
 compunction.

 The traditional fix is to have the client require HTTPS, which the
 MitM is powerless to interfere with. Upgrades would work fine if the
 HTTPS protocol said connect on port 80, ask for an upgrade, and if
 you don't get it, fail, however as it is upgrades work at the behest
 of the server. And therefore don't work.

Even without an active attacker, this is a problem if there is
sensitive information in the request, since that will generally
be transmitted prior to discovering the server can upgrade.


 Why don't we? Cost. It takes far more tin to serve HTTPS than HTTP.
 Even really serious modern processors can only handle a few thousand
 new SSL sessions per second. New plaintext sessions can be dealt with
 in their tens of thousands.
 
 Perhaps we should focus on this problem: we need cheap end-to-end
 encryption. HTTPS solves this problem partially through session
 caching, but it can't easily be shared across protocols, and sessions
 typically last on the order of five minutes, an insanely conservative
 figure.

Session caches are often dialed this low, but it's not really necessary
in most applications. First, a session cache entry isn't really that
big. It easily fits into 100 bytes on the server, so you can serve
a million concurrent users for a measly 100M. Second, you can use
CSSC/Tickets [RFC5077] to offload all the information onto the client.
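
For scale, the arithmetic behind that figure (the 100-byte entry size is
an estimate, not a measurement):

    entry_bytes = 100                  # approximate size of one session cache entry
    concurrent_users = 1_000_000
    print(entry_bytes * concurrent_users / 10**6, "MB")   # 100.0 MB -- "a measly 100M"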

-Ekr



Re: [OpenID] rfc2817: https vs http

2008-09-01 Thread Ben Laurie
On Mon, Sep 1, 2008 at 9:49 PM, Eric Rescorla [EMAIL PROTECTED] wrote:
 At Mon, 1 Sep 2008 21:00:55 +0100,
 Ben Laurie wrote:
 The core issue is that HTTPS is used to establish end-to-end security,
 meaning, in particular, authentication and secrecy. If the MitM can
 disable the upgrade to HTTPS then he defeats this aim. The fact that
 the server declines to serve an HTTP page is irrelevant: it is the
 phisher that will be serving the HTTP page, and he will have no such
 compunction.

 The traditional fix is to have the client require HTTPS, which the
 MitM is powerless to interfere with. Upgrades would work fine if the
 HTTPS protocol said connect on port 80, ask for an upgrade, and if
 you don't get it, fail, however as it is upgrades work at the behest
 of the server. And therefore don't work.

 Even without an active attacker, this is a problem if there is
 sensitive information in the request, since that will generally
 be transmitted prior to discovering the server can upgrade.

Obviously we can fix this at the protocol level.

 Why don't we? Cost. It takes far more tin to serve HTTPS than HTTP.
 Even really serious modern processors can only handle a few thousand
 new SSL sessions per second. New plaintext sessions can be dealt with
 in their tens of thousands.

 Perhaps we should focus on this problem: we need cheap end-to-end
 encryption. HTTPS solves this problem partially through session
 caching, but it can't easily be shared across protocols, and sessions
 typically last on the order of five minutes, an insanely conservative
 figure.

 Session caches are often dialed this low, but it's not really necessary
 in most applications. First, a session cache entry isn't really that
 big. It easily fits into 100 bytes on the server, so you can serve
 a million concurrent users for a measly 100M.

But if the clients drop them after five minutes, this gets you
nowhere. BTW, sessions are only that small if there are no client
certs.

 Second, you can use
 CSSC/Tickets [RFC5077] to offload all the information onto the client.

Likewise.


 -Ekr




Re: [OpenID] rfc2817: https vs http

2008-09-01 Thread Eric Rescorla
At Mon, 1 Sep 2008 21:56:52 +0100,
Ben Laurie wrote:
 
 On Mon, Sep 1, 2008 at 9:49 PM, Eric Rescorla [EMAIL PROTECTED] wrote:
  At Mon, 1 Sep 2008 21:00:55 +0100,
  Ben Laurie wrote:
  The core issue is that HTTPS is used to establish end-to-end security,
  meaning, in particular, authentication and secrecy. If the MitM can
  disable the upgrade to HTTPS then he defeats this aim. The fact that
  the server declines to serve an HTTP page is irrelevant: it is the
  phisher that will be serving the HTTP page, and he will have no such
  compunction.
 
  The traditional fix is to have the client require HTTPS, which the
  MitM is powerless to interfere with. Upgrades would work fine if the
  HTTPS protocol said connect on port 80, ask for an upgrade, and if
  you don't get it, fail, however as it is upgrades work at the behest
  of the server. And therefore don't work.
 
  Even without an active attacker, this is a problem if there is
  sensitive information in the request, since that will generally
  be transmitted prior to discovering the server can upgrade.
 
 Obviously we can fix this at the protocol level.
 
  Why don't we? Cost. It takes far more tin to serve HTTPS than HTTP.
  Even really serious modern processors can only handle a few thousand
  new SSL sessions per second. New plaintext sessions can be dealt with
  in their tens of thousands.
 
  Perhaps we should focus on this problem: we need cheap end-to-end
  encryption. HTTPS solves this problem partially through session
  caching, but it can't easily be shared across protocols, and sessions
  typically last on the order of five minutes, an insanely conservative
  figure.
 
  Session caches are often dialed this low, but it's not really necessary
  in most applications. First, a session cache entry isn't really that
  big. It easily fits into 100 bytes on the server, so you can serve
  a million concurrent users for a measly 100M.
 
 But if the clients drop them after five minutes, this gets you
 nowhere.

Agreed. I thought we were contemplating protocol changes in
any case, so I figured having clients just use a longer session
cache (5 minutes is silly for a client anyway, since the amount
of memory consumed on the client is minuscule) wasn't much
of an obstacle.


 BTW, sessions are only that small if there are no client
 certs.

True enough, though that's the common case right now.


  Second, you can use
  CSSC/Tickets [RFC5077] to offload all the information onto the client.
 
 Likewise.

Except that CSSC actually looks better when client certs are used, since
you can offload the entire cert storage to the client, so you get
more memory savings.
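
A conceptual sketch of that trade, using the third-party cryptography
package's Fernet purely as a stand-in for the ticket-protection scheme
RFC 5077 actually specifies, and with made-up field names: the server
seals the whole session state, client cert included, into a blob the
client stores, and keeps only the sealing key itself.

    import json
    from cryptography.fernet import Fernet

    ticket_key = Fernet.generate_key()   # rotated periodically in a real deployment
    sealer = Fernet(ticket_key)

    def issue_ticket(session_state: dict) -> bytes:
        return sealer.encrypt(json.dumps(session_state).encode())

    def resume_from_ticket(ticket: bytes) -> dict:
        return json.loads(sealer.decrypt(ticket))

    ticket = issue_ticket({"cipher": "AES128-SHA",
                           "master_secret_hex": "00" * 48,
                           "client_cert_pem": None})
    state = resume_from_ticket(ticket)   # server reconstructs the session statelessly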

-Ekr
