Re: SSL and Malicious Hardware/Software

2008-05-06 Thread Arcane Jill

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
On Behalf Of Steven M. Bellovin

Sent: 03 May 2008 00:51
To: Arcane Jill
Cc: cryptography@metzdowd.com
Subject: Re: SSL and Malicious Hardware/Software


>> I can't think of a great way of alerting the user,

> I would be alerted immediately, because I'm using the Petname Tool
> Firefox plugin.

> For an unproxied site, I get a small green window with my own choice
> of text in it (e.g. "Gmail" if I'm visiting https://mail.google.com).
> If a proxy were to insert itself in the middle, that window would turn
> yellow, and the message would change to "(untrusted)".

Assorted user studies suggest that most users do not notice the color
of random little windows in their browsers...




The point is that the plugin does not trust the browser's list of installed
CAs. The only thing it trusts is the fingerprint of the certificate. If the
fingerprint is one that you personally (not your browser) have approved in
the past, the plugin shows green. If not, it shows yellow.
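The check described above is certificate-fingerprint pinning. A minimal sketch in Python (the function names `approve` and `check` are illustrative, not the Petname Tool's actual API):

```python
# Sketch of certificate-fingerprint pinning, the check described above.
# The user's personal store maps a site to a pinned fingerprint and petname.
import hashlib

approved = {}

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def approve(site: str, der_cert: bytes, petname: str) -> None:
    """User pins this site's certificate under a name of their choosing."""
    approved[site] = (fingerprint(der_cert), petname)

def check(site: str, der_cert: bytes) -> str:
    """Show the petname only if the pinned fingerprint matches; a
    proxy-generated certificate has a different fingerprint, so it shows
    yellow even if the browser's CA list would have accepted it."""
    pinned = approved.get(site)
    if pinned and pinned[0] == fingerprint(der_cert):
        return "green: " + pinned[1]
    return "yellow: (untrusted)"
```

Note that the trust decision never consults a CA list at all, which is why an interposed proxy certificate turns the window yellow.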


Without this plugin, identifying proxies is hard: the proxy's CA certificate
will likely be installed in your browser, so its forged certificates
automatically pass the usual SSL checks and the site appears to you as
authenticated. If you expect that your web traffic will not be eavesdropped
en route, the silent appearance of a proxy violates that expectation.


On the other hand, a system which checks /only/ that the certificate 
fingerprint is what you expect it to be does not suffer from the same 
disadvantage. This is a technical difference. There's more to it than just the 
color of the warning sign! (...though I do concede, a Red Alert siren would 
probably get more attention :-) ).


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: SSL and Malicious Hardware/Software

2008-05-03 Thread Steven M. Bellovin
On Fri, 2 May 2008 08:33:19 +0100
Arcane Jill [EMAIL PROTECTED] wrote:

> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Ryan Phillips
> Sent: 28 April 2008 23:13
> To: Cryptography
> Subject: SSL and Malicious Hardware/Software
>
>> I can't think of a great way of alerting the user,
>
> I would be alerted immediately, because I'm using the Petname Tool
> Firefox plugin.
>
> For an unproxied site, I get a small green window with my own choice
> of text in it (e.g. "Gmail" if I'm visiting https://mail.google.com).
> If a proxy were to insert itself in the middle, that window would
> turn yellow, and the message would change to "(untrusted)".
 
Assorted user studies suggest that most users do not notice the color
of random little windows in their browsers...


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: SSL and Malicious Hardware/Software

2008-04-29 Thread Victor Duchovni
On Mon, Apr 28, 2008 at 03:12:31PM -0700, Ryan Phillips wrote:

> What are people's opinions on corporations using this tactic?  I can't
> think of a great way of alerting the user, but I would expect a pretty
> reasonable level of privacy while using an SSL connection at work.

Expectations of privacy at work vary by jurisdiction and industry. In
the US, and say in the financial services industry, any such expectations
are groundless (IANAL).

-- 

 /"\ ASCII RIBBON                   NOTICE: If received in error,
 \ / CAMPAIGN      Victor Duchovni  please destroy and notify
  X  AGAINST       IT Security,     sender. Sender does not waive
 / \ HTML MAIL     Morgan Stanley   confidentiality or privilege,
                                    and use is prohibited.



Re: SSL and Malicious Hardware/Software

2008-04-29 Thread Leichter, Jerry
On Mon, 28 Apr 2008, Ryan Phillips wrote:
| Matt's blog post [1] gets to the heart of the matter of what we can
| trust.
| 
| I may have missed the discussion, but I ran across Netronome's 'SSL
| Inspector' appliance [2] today and with the recent discussion on this
| list regarding malicious hardware, I find this appliance appalling.
It's not the first.  Blue Coat, a company that's been building various
Web optimization/filtering appliances for 12 years, does the same thing.
I'm sure there are others.

| Basically a corporation can inject an SSL Trusted CA key in the
| keystore within their corporate operating system image and have this
| device generate a new server certificate to every SSL enabled website,
| signed by the Trusted CA, and handed to the client.  The client does a
| validation check and trusts the generated certificate, since the CA is
| trusted.  A very nice man-in-the-middle and would trick most casual
| computer users.
| 
| I'm guessing these bogus certificates can be forged to look like the
| real thing, but only differ by the fingerprint and root CA that was
| used to sign it.
|
| What are people's opinions on corporations using this tactic?  I can't
| think of a great way of alerting the user, but I would expect a pretty
| reasonable level of privacy while using an SSL connection at work.
I'm very uncomfortable with the whole business.

Corporations will of course tell you it's their equipment and is there
for business purposes, and you have no expectation of privacy while
using it.  I can understand the issues they face:  Between various
regulatory laws that impinge on the white-hot topic of data leakage
and issues of workplace discrimination arising out of questionable
sites, they are under a great deal of pressure to control what goes over
their networks.  But if monitoring everything is the stance they have to
take, I would rather that they simply block encrypted connections
entirely.
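The interception Ryan describes works because the client checks only that the certificate chain ends at a CA in its local trust store. A toy model of that decision (CA and site names are illustrative, not any real appliance's API):

```python
# Toy model of the man-in-the-middle described above: once the corporate
# CA is pushed into the trust store via the OS image, the appliance's
# forged per-site certificates validate just like the real ones.
TRUSTED_CAS = {"VeriSign", "CorpProxyCA"}  # CorpProxyCA added by the employer

def validate(cert: dict) -> bool:
    """Standard check: trust the cert if its issuer is a trusted CA."""
    return cert["issuer"] in TRUSTED_CAS

real_cert = {"subject": "mail.google.com", "issuer": "VeriSign"}
forged    = {"subject": "mail.google.com", "issuer": "CorpProxyCA"}

# Both pass: without fingerprint pinning, the client cannot tell them apart.
assert validate(real_cert) and validate(forged)
```

The only observable difference to the client is the fingerprint and the issuing root, exactly as Ryan notes.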

As this stuff gets rolled out, there *will* be legal issues.  On the
one hand, the whole industry is telling you that HTTPS to a secure web
site - see that green bar in your browser? - is secure and private.
On the other, your employer is doing a man-in-the-middle attack and,
without your knowing it, reading your discussions with your doctor.
Or maybe gaining access to your credit card accounts - and who knows
who in the IT department might be able to sneak a peek.

Careful companies will target these appliances at particular sites.
They'll want to be able to prove that they aren't watching you order
your medications on line, lest they run into ADA problems, for example.

It's going to be very interesting to see how this all plays out.  We've
got two major trends crashing headlong into each other.  One is toward
tighter and tighter control over what goes on on a company's machines
and networks, some of it forced by regulation, some of it "because we
can."  The other is toward growing technological workarounds.  If I don't
like the rules on my company's network, I can buy over-the-air broadband
service and use it from my desk.  It's still too expensive for most
people today, but the price will come down rapidly.  Corporate IT will
try to close up machines to make that harder and harder to do, but at
the same time there's a growing push for IT to get out of the business
of buying, financing, and maintaining rapidly depreciating laptops.
Better to give employees a stipend and let them buy what they want -
and carry the risks.
-- Jerry


| Regards,
| Ryan
| 
| [1] http://www.crypto.com/blog/hardware_security/
| [2] http://www.netronome.com/web/guest/products/ssl_appliance



Re: SSL and Malicious Hardware/Software

2008-04-29 Thread Jack Lloyd
On Mon, Apr 28, 2008 at 10:03:38PM -0400, Victor Duchovni wrote:
> On Mon, Apr 28, 2008 at 03:12:31PM -0700, Ryan Phillips wrote:
>
>> What are people's opinions on corporations using this tactic?  I can't
>> think of a great way of alerting the user, but I would expect a pretty
>> reasonable level of privacy while using an SSL connection at work.
>
> Expectations of privacy at work vary by jurisdiction and industry. In
> the US, and say in the financial services industry, any such expectations
> are groundless (IANAL).

Most places I have worked (all in the US) explicitly required consent
to more or less arbitrary amounts of monitoring as a condition of
employment.

-Jack



Re: SSL/TLS and port 587

2008-01-23 Thread sjk

Ed Gerck wrote:

> List,
>
> I would like to address and request comments on the use of SSL/TLS and
> port 587 for email security.
>
> The often expressed idea that SSL/TLS and port 587 are somehow able to
> prevent warrantless wiretapping and so on, or protect any private
> communications, is IMO simply not supported by facts.
>
> Warrantless wiretapping and so on, and private communications
> eavesdropping are done more efficiently and covertly directly at the
> ISPs (hence the name "warrantless wiretapping"), where SSL/TLS
> protection does NOT apply. There is a security gap at every negotiated
> SSL/TLS session.
>
> It is misleading to claim that port 587 solves the security problem of
> email eavesdropping, and gives people a false sense of security. It is
> worse than using a 56-bit DES key -- the email is in plaintext where it
> is most vulnerable.


Perhaps you'd like to expand on this; I am confused by your
assertion. tcp/587 is the standard authenticated submission port,
while tcp/465 is the usual smtps port - of course either service
could be run on either port. Are you suggesting that some
postmasters/admins are claiming that their submission ports are encrypted?


--

[EMAIL PROTECTED]
fingerprint: 1024D/89420B8E 2001-09-16

No one can understand the truth until
he drinks of coffee's frothy goodness.
~~Sheik Abd-al-Kadir



Re: SSL/TLS and port 587

2008-01-23 Thread Sidney Markowitz

Ed Gerck wrote, On 23/1/08 7:38 AM:

> The often expressed idea that SSL/TLS and port 587 are somehow able to prevent
> warrantless wiretapping and so on, or protect any private communications, is
> IMO simply not supported by facts.


I would like to see some facts to support the assertion that the idea that "SSL/TLS and
port 587 are somehow able to prevent warrantless wiretapping" is often expressed.


A Google search for
 "ssl port 587 warrantless wiretapping"
got exactly one hit, which was your posting to the mailing list, where it had been archived
on security-basic.blogspot.com and snarfed up by Google within the hour.


(As an aside, see "Google Taking Blog Comments Searching Real-Time?"
http://www.groklaw.net/article.php?story=20080122132516514 for a discussion of this
remarkable update to their search engine).


 Sidney Markowitz
 http://www.sidney.com/



Re: SSL/TLS and port 587

2008-01-23 Thread Florian Weimer
* Ed Gerck:

 The often expressed idea that SSL/TLS and port 587 are somehow able
 to prevent warrantless wiretapping and so on, or protect any private
 communications, is IMO simply not supported by facts.

Huh?  Have you got a source for that?  This is the first time I've
heard of such claims.

Message submission over 587/TCP gives the receiver more leeway to
adjust and police message contents (add a message ID, check the Date
and From headers, and so on).  The abuse-management contract is also
different: once you accept a message over 587/TCP, it's your fault
(and yours alone) if the message turns out to be spam.  There's
nothing related to confidentiality that I know of.

-- 
Florian Weimer[EMAIL PROTECTED]
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99



Re: SSL/TLS and port 587

2008-01-23 Thread Steven M. Bellovin
On Tue, 22 Jan 2008 10:38:24 -0800
Ed Gerck [EMAIL PROTECTED] wrote:

> List,
>
> I would like to address and request comments on the use of SSL/TLS
> and port 587 for email security.
>
> The often expressed idea that SSL/TLS and port 587 are somehow able
> to prevent warrantless wiretapping and so on, or protect any private
> communications, is IMO simply not supported by facts.
>
> Warrantless wiretapping and so on, and private communications
> eavesdropping are done more efficiently and covertly directly at the
> ISPs (hence the name "warrantless wiretapping"), where SSL/TLS
> protection does NOT apply. There is a security gap at every
> negotiated SSL/TLS session.
>
> It is misleading to claim that port 587 solves the security problem
> of email eavesdropping, and gives people a false sense of security.
> It is worse than using a 56-bit DES key -- the email is in plaintext
> where it is most vulnerable.
 
This is old news.  But what's your threat model?

Clearly, hop-by-hop encryption, be it port 587 to your ISP's submission
server or pop3s/imaps by the recipient to his/her mail server, does
nothing to protect against someone who has hacked the server.  I wrote
about that years ago; see
http://www.cs.columbia.edu/~smb/securemail.html (which archive.org
dates to April 1999, under my old AT&T URL), and I don't claim the
insight was novel even then.  Port 587 was defined in RFC 2476, from
1998; it specifically talks about the need for encryption.  SMTP over
TLS is defined in RFC 2487 (Jan 1999 -- again, before my page), which
specifically warns that TLS protection of the channel isn't sufficient
against some threats.  (Aside: my page was prompted by someone on a
sensitive internal project who asked if he should encrypt his email.
After poking around a bit, I used xmessage to pop up a message on his
screen saying that there wasn't much point to encryption unless he
cleaned up a lot of other security issues...)  But note that the logic
applies about as well to end-to-end encryption, if your attacker can
hack the machine at either end.  By "hack" I specifically include black
bag jobs to plant a keystroke logger or the like.

So -- is encryption, whether hop-by-hop or end-to-end, useless?  No, of
course not.  Encrypting email submission or retrieval is very useful if
you use, say, wireless hotspots.  (Caveats and cautions here are left
as an exercise for the reader.)  End-to-end encryption guards against
rogue administrators of mail servers.  Neither protects against all
threats -- but both have their uses.

Amateurs talk about algorithms; pros talk about economics.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: SSL/TLS and port 587

2008-01-23 Thread Paul Hoffman

At 10:38 AM -0800 1/22/08, Ed Gerck wrote:
> The often expressed idea that SSL/TLS and port 587 are somehow able
> to prevent warrantless wiretapping and so on, or protect any private
> communications, is IMO simply not supported by facts.


Can you point to some sources of this "often expressed idea"? It
seems like a pretty flimsy straw man.


--Paul Hoffman, Director
--VPN Consortium



RE: SSL/TLS and port 587

2008-01-23 Thread Dave Korn
On 22 January 2008 18:38, Ed Gerck wrote:

> It is misleading to claim that port 587 solves the security problem of
> email eavesdropping, and gives people a false sense of security. It is
> worse than using a 56-bit DES key -- the email is in plaintext where it is
> most vulnerable.

  Well, yes: it would be misleading to claim that end-to-end security protects
you against an insecure or hostile endpoint.  But that's a truism, and it's not
right to say that there is a "security gap" that is any part of the remit of
SSL/TLS to alleviate; the insecurity - the untrusted endpoint - is the same
whether you use end-to-end security or not.

  It's probably also not inaccurate to say that SSL/TLS protects you against
warrantless wiretapping; the warrantless wiretap program is implemented by
mass surveillance of backbone traffic.  Even AT&T doesn't actually forward the
traffic to their mail servers, decrypt it, and then send it back to the tap
point - as far as we know.  When the spooks want your traffic as decrypted by
your ISP's server, that's when they *do* go get a warrant; the broad mass
warrantless wiretapping program is just that, and it's done by sniffing the
traffic in the middle.  SSL/TLS *does* protect you against that, and the only
time it won't is if you're singled out for investigation.

  This is not to say that it wouldn't be possible for all ISPs to collaborate
with the TLAs to log, sniff or forward the decrypted traffic from their
servers, but if they can't even set up central tapping at a couple of core
transit sites of one ISP without someone spilling the beans, it seems
improbable that every ISP everywhere is sending them copies of all the traffic
from every server...

cheers,
  DaveK
-- 

Can't think of a witty .sigline today



Re: SSL/TLS and port 587

2008-01-23 Thread Ed Gerck

Bodo Moeller wrote:

> You don't take into account the many users these days who use wireless
> Internet access from their laptop computers, typically essentially
> broadcasting all network data to whoever is sufficiently close and
> sufficiently nosy.


Yes. Caveats apply but SSL/TLS is useful and simple for this purpose.


> Of course using SSL/TLS for e-mail security does
> not *solve* the problem of e-mail eavesdropping (unless special care
> is taken within a closed group of users), but it certainly plays an
> important role in countering eavesdropping in some relevant scenarios.


The problem is when it is generalized from the particular case where
it helps (above) to general use, and as a solution to prevent wireless
wiretapping. For example, as in this comment from a data center/network
provider:

-
Now, personally, with all the publicly available info regarding
warrantless wiretapping and so on, why any private communications should
be in the clear I just don't know. Even my MTA offers up SSL or TLS to
other MTA's when advertising its capabilities. The RFC is there, use it
as they say.
-

Cheers,
Ed Gerck



Re: SSL/TLS and port 587

2008-01-23 Thread Steven M. Bellovin
On Tue, 22 Jan 2008 21:49:32 -0800
Ed Gerck [EMAIL PROTECTED] wrote:

> As I commented in the
> second paragraph, an attack at the ISP (where SSL/TLS is
> of no help) has been the dominant threat -- and that is
> why one of the main problems is called "warrantless
> wiretapping". Further, because US law does /not/ protect
> data at rest, anyone claiming authorized process (which
> the ISP itself may) can eavesdrop without any required
> formality.
 
Please justify this.  Email stored at the ISP is protected in the U.S.
by the Stored Communications Act, 18 USC 2701
(http://www4.law.cornell.edu/uscode/18/2701.html).  While it's not a
well-drafted piece of legislation and has been the subject of much
litigation, from the Steve Jackson Games case
(http://w2.eff.org/legal/cases/SJG/) to Warshak v. United States
(http://www.cs.columbia.edu/~smb/blog/2007-06/2007-06-19.html), I don't
see how you can say stored email isn't protected at all.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: SSL/TLS and port 587

2008-01-23 Thread Paul Hoffman

At 9:49 PM -0800 1/22/08, Ed Gerck wrote:
>> Can you point to some sources of this "often expressed idea"? It
>> seems like a pretty flimsy straw man.
>
> It is common with those who think that the threat model is
> "traversing the public Internet".


I'll take that as a no.


> For examples on claiming that SSL/TLS can protect email
> privacy,


That's not what I asked, of course.

--Paul Hoffman, Director
--VPN Consortium



Re: SSL/TLS and port 587

2008-01-23 Thread Ed Gerck

Steven M. Bellovin wrote:

> On Tue, 22 Jan 2008 21:49:32 -0800
> Ed Gerck [EMAIL PROTECTED] wrote:
>
>> As I commented in the
>> second paragraph, an attack at the ISP (where SSL/TLS is
>> of no help) has been the dominant threat -- and that is
>> why one of the main problems is called "warrantless
>> wiretapping". Further, because US law does /not/ protect
>> data at rest, anyone claiming authorized process (which
>> the ISP itself may) can eavesdrop without any required
>> formality.
>
> Please justify this.  Email stored at the ISP is protected in the U.S.
> by the Stored Communications Act, 18 USC 2701
> (http://www4.law.cornell.edu/uscode/18/2701.html).  While it's not a
> well-drafted piece of legislation and has been the subject of much
> litigation, from the Steve Jackson Games case
> (http://w2.eff.org/legal/cases/SJG/) to Warshak v. United States
> (http://www.cs.columbia.edu/~smb/blog/2007-06/2007-06-19.html), I don't
> see how you can say stored email isn't protected at all.


As you wrote in your blog, "users really need to read those boring
[ISP] licenses carefully."

ISP service terms grant the disclosure right on the basis of
something broadly called "valid legal process" or any such
term as defined /by the ISP/. Management access to the account
(including email data) is a "valid legal process" (authorized by the
service terms as a private contract) that can be used without
any required formality, for example to verify compliance with the
service terms or something else [1].

Frequently, common sense and standard use are invoked to
justify such access but, technically, no justification is
actually needed.

Further, when an ISP such as Google says "Google does not share
or reveal email content or personal information with third
parties," one usually forgets that (1) "third parties" may actually
mean everyone on the planet but you; (2) third parties also
have third parties; and (3) #2 is recursive.

Mr. Councilman's case and his lawyer's declaration that "Congress
recognized that any time you store communication, there is an
inherent loss of privacy" were not in your blog, though. Did I
miss something?

Cheers,
Ed Gerck

[1] in http://mail.google.com/mail/help/about_privacy.html :
"Of course, the law and common sense dictate some exceptions. These exceptions include
requests by users that Google's support staff access their email messages in order to
diagnose problems; when Google is required by law to do so; and when we are compelled to
disclose personal information because we reasonably believe it's necessary in order to
protect the rights, property or safety of Google, its users and the public. For full
details, please refer to the 'When we may disclose your personal information'
section of our privacy policy. These exceptions are standard across the industry and are
necessary for email providers to assist their users and to meet legal requirements."



Re: SSL/TLS and port 587

2008-01-23 Thread Steven M. Bellovin
On Wed, 23 Jan 2008 08:10:01 -0800
Ed Gerck [EMAIL PROTECTED] wrote:

> Steven M. Bellovin wrote:
>> On Tue, 22 Jan 2008 21:49:32 -0800
>> Ed Gerck [EMAIL PROTECTED] wrote:
>>> As I commented in the
>>> second paragraph, an attack at the ISP (where SSL/TLS is
>>> of no help) has been the dominant threat -- and that is
>>> why one of the main problems is called "warrantless
>>> wiretapping". Further, because US law does /not/ protect
>>> data at rest, anyone claiming authorized process (which
>>> the ISP itself may) can eavesdrop without any required
>>> formality.
>>
>> Please justify this.  Email stored at the ISP is protected in the
>> U.S. by the Stored Communications Act, 18 USC 2701
>> (http://www4.law.cornell.edu/uscode/18/2701.html).  While it's not a
>> well-drafted piece of legislation and has been the subject of much
>> litigation, from the Steve Jackson Games case
>> (http://w2.eff.org/legal/cases/SJG/) to Warshak v. United States
>> (http://www.cs.columbia.edu/~smb/blog/2007-06/2007-06-19.html), I
>> don't see how you can say stored email isn't protected at all.
>
> As you wrote in your blog, "users really need to read those boring
> [ISP] licenses carefully."
>
> ISP service terms grant the disclosure right on the basis of
> something broadly called "valid legal process" or any such
> term as defined /by the ISP/. Management access to the account
> (including email data) is a "valid legal process" (authorized by the
> service terms as a private contract) that can be used without
> any required formality, for example to verify compliance with the
> service terms or something else [1].
>
> Frequently, common sense and standard use are invoked to
> justify such access but, technically, no justification is
> actually needed.
>
> Further, when an ISP such as Google says "Google does not share
> or reveal email content or personal information with third
> parties," one usually forgets that (1) "third parties" may actually
> mean everyone on the planet but you; (2) third parties also
> have third parties; and (3) #2 is recursive.

You're confusing two concepts.  Warrants apply to government
behavior; terming something a "warrantless wiretap" carries the clear
implication of government action.  Private action may or may not
violate the wiretap act or the Stored Communications Act, but it has
nothing to do with warrants.
 
> Mr. Councilman's case and his lawyer's declaration that "Congress
> recognized that any time you store communication, there is an
> inherent loss of privacy" were not in your blog, though. Did I
> miss something?

Since the Councilman case took place several years before I started my
blog, it's hardly surprising that I didn't blog on it.  And it turns out
that Councilman -- see http://epic.org/privacy/councilman/ for a
summary -- isn't very interesting any more.  The original district
court ruling, upheld by three judges of the Court of Appeals,
significantly weakened privacy protections for email.  It was indeed an
important and controversial ruling.  However, case was reheard en banc;
the full court ruled that the earlier decisions were incorrect, which
left previous interpretations of the wiretap law intact.  As far as I
can tell, it was never appealed to the Supreme Court.  (The ultimate
outcome, which isn't very interesting to this list, is discussed in
http://pacer.mad.uscourts.gov/dc/opinions/ponsor/pdf/councilman%20mo.pdf)

You are, of course, quite correct that ISP terms of service need to be
read carefully.

 
> Cheers,
> Ed Gerck
>
> [1] in http://mail.google.com/mail/help/about_privacy.html :
> "Of course, the law and common sense dictate some exceptions. These
> exceptions include requests by users that Google's support staff
> access their email messages in order to diagnose problems; when
> Google is required by law to do so; and when we are compelled to
> disclose personal information because we reasonably believe it's
> necessary in order to protect the rights, property or safety of
> Google, its users and the public. For full details, please refer to
> the 'When we may disclose your personal information' section of our
> privacy policy. These exceptions are standard across the industry and
> are necessary for email providers to assist their users and to meet
> legal requirements."



--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: SSL/TLS and port 587

2008-01-23 Thread Ed Gerck

Steven M. Bellovin wrote:

> You're confusing two concepts.  Warrants apply to government
> behavior; terming something a "warrantless wiretap" carries the clear
> implication of government action.  Private action may or may not
> violate the wiretap act or the Stored Communications Act, but it has
> nothing to do with warrants.


First, there is no confusion here; I was simply addressing both
issues as in my original question to the list:

  The often expressed idea that SSL/TLS and port 587 are
  somehow able to prevent warrantless wiretapping and so on, or
  protect any private communications, is IMO simply not
  supported by facts.

Second, those two issues are not as orthogonal as one might
think. After all, an ISP is already collaborating in the
case of a warrantless wiretap. So, where would the tap
take place:

1. where the email is encrypted, or
2. where the email is not encrypted.

Considering the objective of the tap, and the expenses incurred
to do it, it seems quite improbable to choose #1.

Thanks for the update on Mr. Councilman's case. I mentioned it only
because it shows what does happen, and the economic motivations
for it, none of which could have been prevented by SSL/TLS
protecting email submission.

Cheers,
Ed Gerck



Re: SSL/TLS and port 587

2008-01-23 Thread Victor Duchovni
On Tue, Jan 22, 2008 at 10:38:24AM -0800, Ed Gerck wrote:

> List,
>
> I would like to address and request comments on the use of SSL/TLS and port
> 587 for email security.
>
> The often expressed idea that SSL/TLS and port 587 are somehow able to
> prevent warrantless wiretapping and so on, or protect any private
> communications, is IMO simply not supported by facts.

Nothing of the sort.  TLS on port 587 protects replayable *authentication*
mechanisms, such as PLAIN and LOGIN. It can also allow the client to
authenticate the server (X.509v3 cert) and preclude MITM attacks on
mail submission. I've not seen any reputable parties claiming that TLS
on the submission port is protection against intercepts.

I maintain the TLS code for Postfix; the documentation does not anywhere
make such claims. However, we do support TLS-sensitive SASL mechanism
selection:

http://www.postfix.org/postconf.5.html#smtpd_tls_auth_only
http://www.postfix.org/postconf.5.html#smtp_sasl_tls_security_options

which is highly suggestive of using TLS to protect plaintext passwords
in flight.
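Those two parameters are typically combined along the following lines; this main.cf fragment is a sketch with illustrative values, not a recommendation from the message:

```
# main.cf sketch: offer SMTP AUTH on the server side only after TLS is
# established, so PLAIN/LOGIN passwords never cross the wire in the clear.
smtpd_tls_security_level = may
smtpd_tls_auth_only = yes

# As an SMTP client, permit plaintext SASL mechanisms only inside TLS.
smtp_tls_security_level = may
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous, noplaintext
smtp_sasl_tls_security_options = noanonymous
```

The effect is exactly what the paragraph above describes: TLS protects the replayable credential, not the message contents beyond the first hop.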

-- 

 /"\ ASCII RIBBON                   NOTICE: If received in error,
 \ / CAMPAIGN      Victor Duchovni  please destroy and notify
  X  AGAINST       IT Security,     sender. Sender does not waive
 / \ HTML MAIL     Morgan Stanley   confidentiality or privilege,
                                    and use is prohibited.



Re: SSL certificates for SMTP

2007-05-24 Thread Peter Saint-Andre

Paul Hoffman wrote:

> At 6:34 PM +0200 5/23/07, Florian Weimer wrote:
>
>> But no one is issuing certificates which are suitable for use with
>> SMTP (in the sense that the CA provides a security benefit).
>
> No one? I thought that VeriSign and others did, at least a few years ago.


FWIW, last year we established a dedicated Intermediate Certification 
Authority for issuing digital certificates to admins of XMPP servers:


https://www.xmpp.net/

Peter

--
Peter Saint-Andre
XMPP Standards Foundation
http://www.xmpp.org/xsf/people/stpeter.shtml





Re: SSL Server needs access to raw HTTP data (Request for advice)

2007-01-16 Thread Richard Powell
On Sun, 2007-01-14 at 21:07 +0100, Erik Tews wrote:
> On Saturday, 13 Jan 2007 at 19:03 -0800, Richard Powell wrote:
>> I was hoping someone on this list could provide me with a link to a
>> tool
>> that would enable me to dump the raw HTTP data from a web request that
>> uses SSL/HTTPS.  I have full access to the server, but not to the
>> client, and I want to know exactly/precisely what the client is
>> transmitting.
>
> I think http://www.rtfm.com/ssldump/ should do the job. But this only
> works in some configurations.

I believe this only looks at the encrypted stream/protocols.  I actually
need to look at the unencrypted/decrypted data.  As I have access to the
server certs and keys, this should be possible.

Thanks
Richard


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL Server needs access to raw HTTP data (Request for advice)

2007-01-16 Thread Richard Powell
On Sat, 2007-01-13 at 19:03 -0800, Richard Powell wrote:
 I was hoping someone on this list could provide me with a link to a tool
 that would enable me to dump the raw HTTP data from a web request that
 uses SSL/HTTPS.  I have full access to the server, but not to the
 client, and I want to know exactly/precisely what the client is
 transmitting.
snip
 ... my next solution is going to
 be to hack the s_server.c file from openssl and add the necessary
 statements to dump the desired stream. 

As it turns out, getting the 1st line of the get/post was relatively
easy using s_server from openssl.  Basically, there's a BIO_gets() that
reads the 1st line of input.  All I had to do was add a BIO_dump() and
recompile.

Unfortunately, I can't figure out how to get the subsequent lines from
the client (ACCEPT, REFERER, etc...).  I assumed I could just do
BIO_gets() until zero bytes were returned, but zero bytes are always
returned after the 1st call to the function.

I suppose I'll locate an openssl list and seek help there. :)  Unless
someone happens to know the answer.

Thanks
Richard


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL Server needs access to raw HTTP data (Request for advice)

2007-01-16 Thread Richard Powell
Thanks for the responses.  I found the solution thanks to one of the
suggestions off this list.

Basically, I just set up stunnel to accept the encrypted stream, forwarded
it to a cleartext server, and then sniffed the stream.
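For anyone repeating this, a minimal stunnel configuration along those lines
might look like the following (all paths and ports are hypothetical):

```
; stunnel.conf sketch: terminate SSL on 443, forward cleartext to 8080
cert = /etc/stunnel/server-cert.pem
key  = /etc/stunnel/server-key.pem

[https]
accept  = 443
connect = 127.0.0.1:8080
```

With the plaintext leg on the loopback interface, something like
tcpdump -A -i lo port 8080 (or ngrep) then shows the full decrypted request.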

Thanks again
Richard

On Sat, 2007-01-13 at 19:03 -0800, Richard Powell wrote:
 Hello,
 
 I was hoping someone on this list could provide me with a link to a tool
 that would enable me to dump the raw HTTP data from a web request that
 uses SSL/HTTPS.  I have full access to the server, but not to the
 client, and I want to know exactly/precisely what the client is
 transmitting.
 
 I've considered a few options, including
 
  eg... using apache_request_header() from php
 Need to have php installed as module, which I don't.
 Also, not sure it would give me the complete RAW stream that I want
 and didn't want to waste my time installing a test server if it
 wasn't going to fully work.
  eg... tried using openssl s_server -accept 443 -WWW -debug -msg
 This option didn't seem to display/dump the raw HTTP stream.
 I could not locate an option that would enable seeing this
 information.
 
 I've been searching google for hours for some sort of tool to no avail.
 
 If I don't find a reasonable/quick option, my next solution is going to
 be to hack the s_server.c file from openssl and add the necessary
 statements to dump the desired stream.  I'm not too excited about this
 option, but I suppose if that's the best option I have, then so be
 it.  :)
 
 Thanks in advance for any advice.
 Richard
 
 
 -
 The Cryptography Mailing List
 Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL Server needs access to raw HTTP data (Request for advice)

2007-01-14 Thread Erik Tews
Am Samstag, den 13.01.2007, 19:03 -0800 schrieb Richard Powell:
 I was hoping someone on this list could provide me with a link to a
 tool
 that would enable me to dump the raw HTTP data from a web request that
 uses SSL/HTTPS.  I have full access to the server, but not to the
 client, and I want to know exactly/precisely what the client is
 transmitting. 

I think http://www.rtfm.com/ssldump/ should do the job. But this only
works in some configurations.


signature.asc
Description: Dies ist ein digital signierter Nachrichtenteil


Re: SSL (https, really) accelerators for Linux/Apache?

2007-01-04 Thread Anne Lynn Wheeler

for lots of topic drift about fast transactions and lightweight SSL
(somewhat related to past assertions that majority of SSL use has been
e-commerce related)... recent post in thread on secure financial
transactions
http://www.garlic.com/~lynn/2007.html#28 Securing financial transactions a high 
priority for 2007

having some discussion about this news URL from today:

Faster payments should not result in weaker authentication
http://www.securitypark.co.uk/article.asp?articleid=26294&CategoryID=1

... other posts in the same thread:
http://www.garlic.com/~lynn/2007.html#5 Securing financial transactions a high 
priority for 2007
http://www.garlic.com/~lynn/2007.html#6 Securing financial transactions a high 
priority for 2007
http://www.garlic.com/~lynn/2007.html#27 Securing financial transactions a high 
priority for 2007

so having done a lot of optimization on the original payment gateway
and some other SSL uses ... some of it mentioned in this thread
(to help minimize payment transaction elapsed time):
http://www.garlic.com/~lynn/2007.html#7 SSL info
http://www.garlic.com/~lynn/2007.html#15 SSL info
http://www.garlic.com/~lynn/2007.html#17 SSL info

now, in the above thread, I've discussed the possible catch-22 for
the SSL domain name certification industry 
http://www.garlic.com/~lynn/subpubkey.html#catch22


however, in the past, I've also discussed leveraging the catch-22
to implement a really lightweight SSL ... somewhat similar proposal
mentioned here in old email from 1981
http://www.garlic.com/~lynn/2006w.html#12 more secure communication over the 
network

and a couple past posts discussing really lightweight SSL in the 
context of the catch-22 scenario:

http://www.garlic.com/~lynn/aadsm20.htm#43 Another entry in the internet 
security hall of shame
http://www.garlic.com/~lynn/aadsm22.htm#0 GP4.3 - Growth and Fraud - Case #3 - 
Phishing
http://www.garlic.com/~lynn/2006f.html#33 X.509 and ssh

So after the initial e-commerce activity ... there were some number of
efforts in the mid-90s to improve the internet payment technologies
...  two such activities were SET and X9A10. The financial standards
X9A10 working group had been given the requirement to preserve the
integrity of the financial infrastructure for all retail payments (not
just internet) ...  resulting X9.59
http://www.garlic.com/~lynn/x959.html#x959
http://www.garlic.com/~lynn/subpubkey.html#x959

I had gotten ahold of the SET specification when it was first
available and did a crypto-op profile and calculated some crypto-op
performance for typical SET transactions. Some number of people
associated with SET claimed that my numbers were off by two orders of
magnitude (too large by a factor of one hundred) ... however
when they eventually had running code ... my profile numbers were
within a couple percent of the measured numbers. On an otherwise idle
dedicated test infrastructure, a simple SET transaction was over 30
seconds elapsed time ... nearly all of that crypto-op processing.  In
a loaded infrastructure, contention and queueing delays could stretch
that out to several minutes (or longer). Besides the enormous
processing bloat ... there was also a lot of protocol chatter and
enormous payload bloat. Misc. posts:
http://www.garlic.com/~lynn/subpubkey.html#bloat

by comparison, X9.59 had to be a lightweight payload, lightweight
processing, and fast transaction that was applicable to all
environments (not just the internet).

x9.59 went for lightweight payload transaction that could complete in
a single transaction roundtrip, with strong end-to-end security
(applicable whether the transaction was in-transit or at-rest).  It
effectively substituted end-to-end strong authentication and strong
integrity for information hiding encryption. X9.59 also eliminated 
knowledge of the account number as a fraud exploit

http://www.garlic.com/~lynn/subintegrity.html#harvest

and therefore eliminated the need for the most common use of SSL, i.e.
hiding account numbers in e-commerce transactions (the highest-performance,
most lightweight SSL is the SSL you don't have to do at all).
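The substitution of end-to-end authentication for hiding can be illustrated
schematically. This is emphatically not the X9.59 wire format: the field
names are invented, and an HMAC stands in for the digital signature so the
sketch needs only the Python standard library.

```python
import hashlib
import hmac
import json

def authorize(payment: dict, key: bytes) -> dict:
    """Authenticate a payment sent in the clear instead of encrypting it."""
    payload = json.dumps(payment, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payment": payment, "auth": tag}

def verify(message: dict, key: bytes) -> bool:
    """Recompute the tag over the cleartext payment and compare."""
    payload = json.dumps(message["payment"], sort_keys=True).encode()
    expect = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, message["auth"])

msg = authorize({"account": "1234", "amount": "10.00"}, b"shared-key")
assert verify(msg, b"shared-key")

# The account number travels in the clear, but tampering with it
# (or replaying it in a modified transaction) is detected:
msg["payment"]["account"] = "9999"
assert not verify(msg, b"shared-key")
```

The point being illustrated: knowledge of the cleartext account number alone
no longer authorizes anything, so there is nothing left for SSL to hide.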

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL (https, really) accelerators for Linux/Apache?

2007-01-02 Thread Victor Duchovni
On Tue, Jan 02, 2007 at 01:43:14PM -0500, John Ioannidis wrote:

 There is too much conflicting information out there.  Can someone
 please recommend an SSL accelerator board that they have personally
 tested and used, that works with the 2.6.* kernels and the current
 release of OpenSSL, and is actually an *accelerator* (I've used a
 board from a certain otherwise famous manufacturer that acted as a
 decelerator...).  I only need this for SSL, not for IPsec.
 

I don't have any experience with any hardware in this space, but you
should be clear about one thing:

- Are you trying to accelerate symmetric bulk crypto of the SSL
payload, or the PKI operations in a cold SSL handshake?

Depending on the application and load, and given a suitable SSL session
cache, the PKI load may be negligible. For example, traffic between two
fixed MTAs with caches on both sides only does one SSL handshake per
cache TTL and then just bulk crypto for many deliveries that reuse the
cached SSL session.

So what is your load like?
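One way to see why the cached-session PKI load can be negligible is a
back-of-the-envelope model (all constants below are illustrative guesses,
not measurements of any particular hardware):

```python
def cpu_ms_per_delivery(handshake_ms: float,
                        bulk_ms_per_mb: float,
                        msg_mb: float,
                        deliveries_per_ttl: int) -> float:
    """Amortized CPU milliseconds per delivery: one cold handshake is
    spread over every delivery that reuses the cached session."""
    return handshake_ms / deliveries_per_ttl + bulk_ms_per_mb * msg_mb

# Say a cold handshake costs ~10 ms of RSA work and bulk crypto ~5 ms/MB.
# With 100 deliveries per session-cache TTL, bulk crypto dominates:
cached = cpu_ms_per_delivery(10.0, 5.0, 0.1, 100)
# Without a cache, the handshake dominates by more than an order of magnitude:
uncached = cpu_ms_per_delivery(10.0, 5.0, 0.1, 1)

assert cached < uncached
assert abs(cached - 0.6) < 1e-9     # 10/100 + 5*0.1
assert abs(uncached - 10.5) < 1e-9  # 10/1 + 5*0.1
```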

-- 

 /\ ASCII RIBBON  NOTICE: If received in error,
 \ / CAMPAIGN Victor Duchovni  please destroy and notify
  X AGAINST   IT Security, sender. Sender does not waive
 / \ HTML MAILMorgan Stanley   confidentiality or privilege,
   and use is prohibited.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL (https, really) accelerators for Linux/Apache?

2007-01-02 Thread Scott Mustard


On Tue, 2 Jan 2007, John Ioannidis wrote:


There is too much conflicting information out there.  Can someone
please recommend an SSL accelerator board that they have personally
tested and used, that works with the 2.6.* kernels and the current
release of OpenSSL, and is actually an *accelerator* (I've used a
board from a certain otherwise famous manufacturer that acted as a
decelerator...).  I only need this for SSL, not for IPsec.


Either of these cards would do the trick for 2.6.* kernels and 
current OpenSSL.


http://www.ncipher.com/cryptographic_hardware/ssl_acceleration/

Cheers,
Scott



Thanks,

/ji

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: SSL Cert Prices & Notes

2006-08-11 Thread Trei, Peter
It is with some irony I note that this message from
Peter Saint-Andre failed a signature check - startcom
isn't among the trusted roots in my copy of Outlook.

Peter Trei
 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Peter Saint-Andre
Sent: Wednesday, August 09, 2006 1:05 AM
To: John Gilmore
Cc: cryptography@metzdowd.com
Subject: Re: SSL Cert Prices & Notes

[...]

Have you looked at StartCom?

https://cert.startcom.org/

Peter

--
Peter Saint-Andre
Jabber Software Foundation
http://www.jabber.org/people/stpeter.shtml


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL Cert Prices & Notes

2006-08-10 Thread Damien Miller
On Mon, 7 Aug 2006, John Gilmore wrote:

 Here is the latest quick update on SSL Certs. It's interesting that 
 generally prices have risen. Though ev1servers are still the best commercial 
 deal out there.
 
 The good news is that CAcert seems to be positioned for prime time debut, 
 and you can't beat *Free*. :-)

Startcom (http://cert.startcom.org/) also does free low assurance 
certificates. 

Their CA has already been accepted for the next major release of the
Mozilla products and, unlike CAcert, has undergone an independent audit.

See also http://www.hecker.org/mozilla/ca-certificate-list for Mozilla's
list of CAs and their status.

-d


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL Cert Prices & Notes

2006-08-10 Thread Peter Saint-Andre
John Gilmore wrote:
 Date: Sun, 6 Aug 2006 23:37:30 -0700 (PDT)
 From: [EMAIL PROTECTED]
 Subject: SSL Cert Notes
 
 Howdy Hackers,
 
 Here is the latest quick update on SSL Certs. It's interesting that 
 generally prices have risen. Though ev1servers are still the best commercial 
 deal out there.
 
 The good news is that CAcert seems to be positioned for prime time debut, 

Based on my experience with CAcert, that statement strikes me as a bit
optimistic. And yes, I am a CAcert assurer (currently ranked #151) and I
follow all the mailing list discussions etc. But AFAICS, prime time is a
ways off for CAcert.

 and you can't beat *Free*. :-)
 
 SSL Certificate Authorities   Verification          Subdomains Too
                               Low       High        Low        High
 Verisign                      $399      $995
 Geotrust                      $189      $349        $599       $1499
 Thawte                        $149      $199        $799       $1349
 Comodo / instantssl           $49       $62.50      $449.95
 godaddy.com                   $17.99    $74.95      $179.99    $269.99
 freessl.com                   $69       $99         $199       $349
 ev1servers                    $14.95    $49
 CAcert                        Free      Free        Free       Free

Have you looked at StartCom?

https://cert.startcom.org/

Peter

-- 
Peter Saint-Andre
Jabber Software Foundation
http://www.jabber.org/people/stpeter.shtml



smime.p7s
Description: S/MIME Cryptographic Signature


Re: SSL Cert Prices & Notes

2006-08-10 Thread Thor Lancelot Simon
On Mon, Aug 07, 2006 at 05:12:45PM -0700, John Gilmore wrote:
 
 The good news is that CAcert seems to be positioned for prime time debut, 
 and you can't beat *Free*. :-)

You certainly can, if slipshod practices end up _costing_
you money.

Has CAcert stopped writing certificates with no DN yet?

Has CAcert stopped writing essentially unverifiable (or,
if you prefer to think of it that way, forensics-hostile)
CN-only certificates on the basis of a single email exchange
yet?

Has CAcert stopped using MD5 in all their signatures yet?
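For what it's worth, the DN and signature-hash complaints are easy to check
against any issued certificate with the openssl(1) tool. The sketch below
generates a throwaway self-signed certificate just so the commands are
runnable as-is; in practice you would point them at the certificate under
scrutiny.

```shell
# Throwaway self-signed cert for demonstration only.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/key.pem \
    -out /tmp/cert.pem -subj "/CN=demo" -days 1 2>/dev/null

# Show the subject DN (an empty or CN-only DN is visible immediately).
openssl x509 -in /tmp/cert.pem -noout -subject

# Show the hash used in the signature
# (e.g. md5WithRSAEncryption vs sha256WithRSAEncryption).
openssl x509 -in /tmp/cert.pem -noout -text | grep 'Signature Algorithm'
```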

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-02 Thread Tom Weinstein

Ian G wrote:


But don't get me wrong - I am not saying that we should
carry out a world wide pogrom on SSL/PKI.  What I am
saying is that once we accept that listening right now
is not an issue - not a threat that is being actively
defended against - this allows us the wiggle room to
deploy that infrastructure against phishing.

Does that make sense?
 

No, not really. Until you can show me an Internet Draft for a solution 
to phishing that requires that we give up SSL, I don't see any reason to 
do so. As a consumer, I'd be very reluctant to give up SSL for credit 
card transactions because I use it all the time and it makes me feel safer.



What matters is now:  what attacks are happening
now.  Does phishing exist, and does it take a lot of
money?  What can we do about it?
 

If you don't know what we can do about phishing, why do you think that 
getting rid of SSL is a necessary first step? You seem to be putting the 
cart in front of the horse.


--
Give a man a fire and he's warm for a day, but set | Tom Weinstein
him on fire and he's warm for the rest of his life.| [EMAIL PROTECTED]



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-02 Thread Adam Shostack
On Tue, May 31, 2005 at 06:43:56PM -0400, Perry E. Metzger wrote:
| 
| Ian G [EMAIL PROTECTED] writes:
|  Perhaps you are unaware of it because no one has chosen to make you
|  aware of it. However, sniffing is used quite frequently in cases where
|  information is not properly protected. I've personally dealt with
|  several such situations.
| 
|  This leads to a big issue.  If there are no reliable reports,
|  what are we to believe in?  Are we to believe that the
|  problem doesn't exist because there is no scientific data,
|  or are we to believe those that say I assure you it is a
|  big problem?
| [...]
|  The only way we can overcome this issue is data.
| 
| You aren't going to get it. The companies that get victimized have a
| very strong incentive not to share incident information very
| widely. However, those of us who actually make our living in the field
| generally have a pretty strong sense of what is going wrong out there.

I believe that this is changing, and that Choicepoint is the wedge.
Organizations that are under no legal obligation to report breaches
are doing so, some quite rapidly, to avoid the PR disaster that hit
Choicepoint.

That shift may lead to a change in public perceptions from breaches
are rare to the reality, which is that breaches are common.  If that
shift takes place, then companies may be more willing to share data,
and that's a good thing.

[...] much deleted

| Statistics and the sort of economic analysis you speak of depends on
| assumptions like statistical independence and the ability to do
| calculations. If you have no basis for calculation and statistical
| independence doesn't hold because your actors are not random processes
| but intelligent actors, the method is worthless.
| 
| In most cases, by the way, the raw cost of attempting a cost benefit
| analysis will cost far more than just implementing a safeguard. A
| couple thou for encrypting a link or buying an SSL card is a lot
| cheaper than the consulting hours, and the output of the hours would
| be an utterly worthless analysis anyway.

So, that may be the case when you're dealing with an SSL accelerator,
but there are lots of other cases, say, implementing database security
rules, or ensuring that non-transactional lookups are logged, which
are harder to argue for, take more time and energy to implement, and
may well entail not implementing customer-visible features to get them
in on budget. 

Choicepoint and Lexis Nexis, seemingly, had neither.  Nor are they
representational.   We lack good data, and while there are a few
hundred folks who have the experience, chops, and savvy to help their
customers make good decisions, there are tens of thousands of
companies, many of whom choose not to pay rates for that sort of
advice, and hire an MCSE, instead.  People who slap the label "best
practice" on log truncation.

I think that we need to promulgate the idea that Choicepoint is
creating a shift, that it will be ok to talk about breaches, with the
intent of getting better data over time.

Adam




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-02 Thread Ian G
Ahh-oops!  That particular reply was scrappily written
late at night and wasn't meant to be sent!  Apologies
belatedly, I'd since actually come to the conclusion
that Steve's statement was strictly correct, in that
we won't ever *see* sniffing because SSL is in place,
whereas I interpreted this incorrectly perhaps as
SSL *stopped* sniffing.  Subtle distinctions can
sometimes matter.

So please ignore the previous email, unless a cruel
and unusual punishment is demanded...

iang


On Wednesday 01 June 2005 16:24, Ian G wrote:
 On Tuesday 31 May 2005 19:38, Steven M. Bellovin wrote:
  In message [EMAIL PROTECTED], Ian G writes:
  On Tuesday 31 May 2005 02:17, Steven M. Bellovin wrote:
   In message [EMAIL PROTECTED], James A. Donald 
writes:
   --
   PKI was designed to defeat man in the middle attacks
   based on network sniffing, or DNS hijacking, which
   turned out to be less of a threat than expected.
  
   First, you mean the Web PKI, not PKI in general.
  
   The next part of this is circular reasoning.  We don't see network
   sniffing for credit card numbers *because* we have SSL.
  
  I think you meant to write that James' reasoning is
  circular, but strangely, your reasoning is at least as
  unfounded - correlation not causality.  And I think
  the evidence is pretty much against any causality,
  although this will be something that is hard to show,
  in the absence.
 
  Given the prevalence of password sniffers as early as 1993, and given
  that credit card number sniffing is technically easier -- credit card
  numbers will tend to be in a single packet, and comprise a
  self-checking string, I stand by my statement.

 Well, I'm not arguing it is technically hard.  It's just
 un-economic.  In the same sense that it is not technically
 difficult for us to get in a car and go run someone
 over;  but we still don't do it.  And we don't ban the
 roads nor insist on our butlers walking with a red
 flag in front of the car, either.  Well, not any more.

 So I stand by my statement - correlation is not causality.

   * AFAICS, a non-trivial proportion of credit
  card traffic occurs over totally unprotected
  traffic, and that has never been sniffed as far as
  anyone has ever reported.  (By this I mean lots of
  small merchants with MOTO accounts that don't
  bother to set up proper SSL servers.)
 
  Given what a small percentage of ecommerce goes to those sites, I don't
  think it's really noticeable.

 Exactly my point.  Sniffing isn't noticeable.  Neither
 in the cases we know it could happen, nor in the
 areas.  The one place where it has been noticed is
 with passwords and what we know from that experience
 is that even the slightest security works to overcome
 that threat.  SSH is overkill, compared to the passwords
 mailouts that successfully protect online password sites.

   * We know that from our experiences
  of the wireless 802.11 crypto - even though we've
  got repeated breaks and the FBI even demonstrating
  how to break it, and the majority of people don't even
  bother to turn on the crypto, there remains practically
  zero evidence that anyone is listening.
  
FBI tells you how to do it:
https://www.financialcryptography.com/mt/archives/000476.
 
  Sure -- but setting up WEP is a nuisance.  SSL (mostly) just works.

 SSH just works - and it worked directly against the
 threat you listed above (password sniffing).  But it
 has no PKI to speak of, and this discussion is about
 whether PKI protects people, because it is PKI that is
 supposed to protect against spoofing - a.k.a. phishing.

 And it is PKI that keeps SSL from just working.
 Anyone who's ever had to set up an Apache web
 server for SSL has to have asked themselves the
 question ... why doesn't this just work ?

  As
  for your assertion that no one is listening, I'm not sure what kind of
  evidence you'd seek.  There's plenty of evidence that people abuse
  unprotected access points to gain connectivity.

 Simply, evidence that people are listening.  Sniffing
 by means of the wire.

 Evidence that people abuse unprotected access points
 to gain connectivity is nothing to do with sniffing traffic to steal
 information.  That's theft of access, which is a fairly
 minor issue, especially as it doesn't have any
 economic damages worth speaking of.  In fact,
 many cases seem to be more accidental access
 where neighbours end up using each other's access
 points because the software doesn't know where the
 property lines are.

   Since many of
   the worm-spread pieces of spyware incorporate sniffers, I'd say that
   part of the threat model is correct.
  
  But this is totally incorrect!  The spyware installs on the
  users' machines, and thus does not need to sniff the
  wire.  The assumption of SSL is (as written up in Eric's
  fine book) that the wire is insecure and the node is
  secure, and if the node is insecure then we are sunk.
 
  I meant precisely what I said and I stand by my statement.  I'm quite
  well aware of the 

Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-02 Thread Ian G
On Thursday 02 June 2005 11:33, Birger Tödtmann wrote:
 Am Mittwoch, den 01.06.2005, 15:23 +0100 schrieb Ian G:
 [...]

  For an example of the latter, look at Netcraft.  This is
  quite serious - they are putting out a tool that totally
  bypasses PKI/SSL in securing browsing.  Is it insecure?
  Yes of course, and it leaks my data like a sieve as
  one PKI guy said.

 [...]

 What I currently fail to see is the link to SSL.  Or, to its PKI model.

That's the point.  There is no link to SSL or PKI.
The only thing in common is the objective - to
protect the user when browsing.  Secure browsing
is now being offered by centralised database sans
crypto.

 Netcraft bypasses it, but I won't use Netcraft exclusively because I'm
 happy to use the crypto in SSL.  Netcraft and Trustbar are really nice
 add-ons to improve my security *with SSL*.  So where is the point?

Sure, I think it is a piece of junk, myself.  But I
am not important, I'm not an average user.
The only thing that is important is what the user
thinks and does.

When Netcraft announced their plugin had been
ported from IE to Firefox last week, they also
revealed that they had 60,000 downloads in
hours.  That tells us a few things.

Firstly, users want protection from phishing.

Secondly, Netcraft have succeeded enough
in the IE world in creating a user base for their
solution that it easily jumped across to the
Firefox userbase and scored impressive numbers
straight away.  Which tells us that it actually
delivers something useful (which may or may
not be security).  So we cannot discount that
the centralised database concept works well
enough by some measure or other.

So now we wait to see which model wins in
protecting the user from spoofing.

iang
-- 
Advances in Financial Cryptography:
   https://www.financialcryptography.com/mt/archives/000458.html

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-02 Thread Anne Lynn Wheeler

Adam Shostack wrote:

So, that may be the case when you're dealing with an SSL accelerator,
but there are lots of other cases, say, implementing database security
rules, or ensuring that non-transactional lookups are logged, which
are harder to argue for, take more time and energy to implement, and
may well entail not implementing customer-visible features to get them
in on budget. 


Choicepoint and Lexis Nexis, seemingly, had neither.  Nor are they
representational.   We lack good data, and while there are a few
hundred folks who have the experience, chops, and savvy to help their
customers make good decisions, there are tens of thousands of
companies, many of whom choose not to pay rates for that sort of
advice, and hire an MCSE, instead.  People who slap the label "best
practice" on log truncation.

I think that we need to promulgate the idea that Choicepoint is
creating a shift, that it will be ok to talk about breaches, with the
intent of getting better data over time.


we got brought in to work on some wordsmithing for both the cal. state 
and the fed. digital signature legislation (we somewhat concentrated on 
the distinction between digital signature authentication and a human 
signature, which implies read, understands, agrees, approves, authorizes, 
etc. ... none of which is present in simple authentication).


one of the industry groups that was active in the effort had done some 
extensive surveys on driving factors behind various kinds of regulatory 
and legislative actions. with regard to privacy regulatory/legislative 
actions ... the two main driving factors were 1) identity theft and 2) 
effectively institutional (gov, commercial, etc) denial of service.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-01 Thread Daniel Carosone
On Tue, May 31, 2005 at 06:43:56PM -0400, Perry E. Metzger wrote:
  So we need to see a Choicepoint for listening and sniffing and so
  forth.
 
 No, we really don't.

Perhaps we do - not so much as a source of hard statistical data, but
as a source of hard pain.

People making (uninformed or ill-considered, despite our best efforts
to inform) business and risk decisions seemingly need concrete
examples to avoid.

It's depressing how much of what we actually achieve is determined by
primitive pain response reflexes - even when you're in the beneficial
position of having past insistences validated by the pain of others.

 The day to day problem of security at real financial institutions is
 the fact that humans are very poor at managing complexity, and that
 human error is extremely pervasive. I've yet to sit in a conference
 room and think oh, if I only had more statistical data, but I've
 frequently been frustrated by gross incompetence.

Amen.

--
Dan.


pgppCusu69AQW.pgp
Description: PGP signature


Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-01 Thread Perry E. Metzger

Daniel Carosone [EMAIL PROTECTED] writes:
 On Tue, May 31, 2005 at 06:43:56PM -0400, Perry E. Metzger wrote:
  So we need to see a Choicepoint for listening and sniffing and so
  forth.
 
 No, we really don't.

 Perhaps we do - not so much as a source of hard statistical data, but
 as a source of hard pain.

That might not be such a bad thing. Object lessons have a way of
whipping people in to shape. A few more heads rolling might convince
others that security isn't optional.

In the late 1960s, several major brokerage firms went under because
they didn't have their accounting systems sufficiently automated. The
people on the business people thought of I.T. as a necessary evil
rather than as the backbone of their business, and they paid the
price.

At intervals, business gets major accounting scandals, about every 20
to 40 years when people forget about the last set. I suspect
I.T. crises are similar. It has been so long since the last one
happened in the financial industry that the institutional memory of it
is now gone, so we're ripe for another.

It is my prediction that we will, in the next five years, get the
failure of a couple of international financial institutions because of
insufficient attention to systems security, again because there are a
few executives in the business who do not understand that I.T. is not
an expense that needs managing but rather the nervous system of the
company.

 People making (uninformed or ill-considered, despite our best efforts
 to inform) business and risk decisions seemingly need concrete
 examples to avoid.

Indeed.

Perry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-01 Thread Ian G
On Wednesday 01 June 2005 10:35, Birger Tödtmann wrote:
 Am Dienstag, den 31.05.2005, 18:31 +0100 schrieb Ian G:
 [...]

  As an alternate hypothesis, credit cards are not
  sniffed and never will be sniffed simply because
  that is not economic.  If you can hack a database
  and lift 10,000++ credit card numbers, or simply
  buy the info from some insider, why would an
  attacker ever bother to try and sniff the wire to
  pick up one credit card number at a time?

 [...]

 And never will be...?  Not being economic today does not mean it
 couldn't be economic tomorrow.  Today it's far more economic to lift
 data-in-rest because it's fairly easy to get on an insider or break into
 the database itself.

Right, so we are agreed that listening to credit cards
is not an economic attack - regardless of the presence
of SSL.

Now, the point of this is somewhat subtle.  It is not
that you should turn off SSL.

The point is this:  you *could*
turn off SSL and it wouldn't make much difference
to actual security in the short term at least, and maybe
not even in the long term depending on the economic
shifts.

OK, so, are we agreed on that:  we *could* turn off
SSL, but that isn't the same thing as *should*?

If we've got that far we can go to the next step.

If we *could* turn off SSL then we have some breathing
space, some room to manoeuvre.  Some wiggle room.

Which means we could modify the model.  Which
means we could change the model, we could tune
the crypto or the PKI.  And in the short term, that
would not be a problem for security because there
isn't an economic attack anyway.  Right now, at
least.

OK so far?

This means that we could increase or decrease
its strength ... as our objectives suggest ... or we
could *re-purpose* SSL if that were desired.

So we could for example use SSL and PKI to
protect from something else.  If that were an issue.

Let's assume phishing is an issue (1.2 billion
dollars of American money is the favourite number).

If we could figure out a way to change the usage
of SSL and PKI to protect against phishing, would
that be a good idea?

It wouldn't be a bad idea, would it?  How could it
be a bad idea when the infrastructure is in place,
and is not currently being used to defeat any
attack?

So, even in a stupidly aggressive worst case
scenario, if we were to turn off SSL/PKI in the process
and turn its benefit over to phishing, and discover
that it no longer protects against listening attacks
at all - remember I'm being ridiculously hypothetical
here - then as long as it provided *some* benefit in
stopping phishing, that would still be a net good.

That is, there would be some phishing victims
who would thank you for saving them, and there
would *not* be any Visa merchants who would
necessarily damn your grandmother for losing
credit cards.  Not in the short term at least.

And if listening were to erupt in a frenzy in the
future it would likely be possible to turn off the
anti-phishing tasking and turn SSL/PKI back to
protecting against eavesdropping.  Perhaps as
a tradeoff between the credit card victim and
the phishing victim.

But that's just stupidly hypothetical.  The main
thing is that we can fiddle with SSL/PKI if we want
to and we can even afford to make some mistakes.

So the question then becomes - could it be used
to help against phishing?  I can point at some stuff
that says it can.

But every time this good stuff is suggested, the
developers, cryptographers, security experts and
what have you suck air in between their teeth and
say "you can't change SSL or PKI because of this
crypto blah-blah reason."

My point is you can change it.  Of course you
can change it - and here's why:  it's not being
economically used over here (listening), and
right over there (phishing), there is an economic
loss awaiting attention.


 However, when companies finally find some 
 countermeasures against both attack vectors, adversaries will adapt and
 recalculate the economics.  And they may very well fall back to sniffing
 for data-in-flight, just as they did (and still do sometimes now) to get
 hold of logins and passwords inside corporate networks in the 80s and
 90s.  If it's more difficult to hack into the database itself than to
 break into a small, not-so-protected system at a large network provider
 and install a sniffer there that silently collects 10,000++ credit card
 numbers over some weeks - then sniffing *is* an issue.  We have seen it,
 and we will see it again.  SSL is a very good countermeasure against
 passive eavesdropping of this kind, and a lot of data suggests that
 active attacks like MITM are seen much less frequently.


All that is absolutely true, in that we can conjecture
that if we close everything else off, then sniffing will
become economic.  That's a fair statement.

But, go and work in one of these places for a while,
or see what Perry said yesterday:

 The day to day problem of security at real financial institutions is
 the fact that humans are very poor at 

Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-01 Thread Ian G
On Tuesday 31 May 2005 23:43, Perry E. Metzger wrote:
 Ian G [EMAIL PROTECTED] writes:

Just on the narrow issue of data - I hope I've
addressed the other substantial points in the
other posts.

  The only way we can overcome this issue is data.

 You aren't going to get it. The companies that get victimized have a
 very strong incentive not to share incident information very
 widely.

On the issue of sharing data by victims, I'd strongly
recommend the paper by Schechter and Smith, FC03.
 "How Much Security is Enough to Stop a Thief?"
http://www.eecs.harvard.edu/~stuart/papers/fc03.pdf
I've also got a draft paper that argues the same thing
and speaks directly and contrarily to your statement:

Sharing data is part of the way towards better security.

(But I argue it from a different perspective to SS.)


 1) You have one anecdote. You really have no idea how
frequently this happens, etc.

The world for security in the USA changed dramatically
when Choicepoint hit.  Check out the data at:

http://pipeda.blogspot.com/2005/02/summaries-of-incidents-cataloged-on.html
http://www.strongauth.com/regulations/sb1386/sb1386Disclosures.html

Also, check out Adam's blog at

http://www.emergentchaos.com/

He has a whole category entitled Choicepoint for
background reading:

http://www.emergentchaos.com/archives/cat_choicepoint.html

Finally we have our data in the internal governance
and hacking breaches.  As someone said today, "Amen
to that."  No more arguments, just say Choicepoint.

 2) It doesn't matter how frequently it happens, because no two
companies are identical. You can't run 100 choicepoints and see
what percentage have problems.

We all know that the attacker is active and can
change tactics.  But locksmiths still recommend
that you put a lock on your door that is a) a bit
stronger than the door and b) a bit better than your
neighbours.  Just because there are interesting
quirks and edge cases in these sciences doesn't
mean we should wipe out other aspects of our
knowledge of scientific method.

 3) If you're deciding on how to set up your firm's security, you can't
    say "95% of the time no one attacks you, so we won't bother", for
    the same reason that you can't say "if I drive my car while
    slightly drunk, 95% of the time I'll arrive safe", because the 95%
of the time that nothing happens doesn't matter if the cost of the
5% is so painful (like, say, death) that you can't recover from
it.

Which is true regardless of whether you are
slightly drunk or not at all or whether a few
pills had been taken or tiredness hits.

Literally, like driving when not 100% fit, the
decision maker makes a quick decision based
on what they know.  The more they know, the
better off they are.  The more data they have,
the better informed their decision.

In particular, you don't want to be someone on whose watch a 
major breach happens. Your career is over even if it never happens
to anyone else in the industry.

Sure.  Life's a bitch.  One can only do one's
best and hope it doesn't hit.  But have a read
of SS' paper, and if you still have the appetite,
try my draft:

http://iang.org/papers/market_for_silver_bullets.html

 Statistics and the sort of economic analysis you speak of depends on
 assumptions like statistical independence and the ability to do
 calculations. If you have no basis for calculation and statistical
 independence doesn't hold because your actors are not random processes
 but intelligent actors, the method is worthless.

No, that's way beyond what I was saying.

I was simply asserting one thing:  without data, we do
not know if an issue exists.  Without even a vaguely
measured sense of seeing it in enough cases to know
it is not an anomaly, we simply can't differentiate it
from all the other conspiracy theories, FUD sales,
government agendas, regulatory hobby horses,
history lessons written by victors, or what-have-you.

Ask any manager.  Go to him or her with a new
threat.  He or she will ask "who has this happened
to?"

If the answer is "it used to happen all the time in
1994 ..." then a manager could be forgiven for
deciding the data was stale.  If the answer is
"no-one", then no matter how risky, the likely
answer is "get out!"  If the answer is "these X
companies in the last month" then you've got
some mileage.

Data is everything.

iang
-- 
Advances in Financial Cryptography:
   https://www.financialcryptography.com/mt/archives/000458.html



Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-01 Thread Ian G
Hi Birger,

Nice debate!


On Wednesday 01 June 2005 13:52, Birger Tödtmann wrote:
 On Wednesday, 1 June 2005, at 12:16 +0100, Ian G wrote:
 [...]

  The point is this:  you *could*
  turn off SSL and it wouldn't make much difference
  to actual security in the short term at least, and maybe
  not even in the long term depending on the economic
  shifts.

 Which depends a bit on the scale of your "could switch off".  If some
 researchers start switching it off / inventing / testing something new,
 then your favourite phisher would not care, that's right.

Right.  That's the point.  It is not a universal
and inescapable bad to fiddle with SSL/PKI.

 [...]

  But every time this good stuff is suggested, the
  developers, cryptographers, security experts and
  what have you suck air between their teeth in and
  say you can't change SSL or PKI because of this
  crypto blah blah reason.
 
  My point is you can change it.  Of course you
  can change it - and here's why:  it's not being
  economically used over here (listening), and
  right over there (phishing), there is an economic
  loss waiting attention.

 Maybe.  But there's a flip-side to that coin.  SSL and correlated
 technology helped to shift the common attack methods from sniffing (it
 was widely popular back then to install a sniffer whereever a hacker got
 his foot inside a network) towards advanced, in some sense social
 engineering attacks like phishing *because* it shifted the economics
 for the adversaries as it was more and more used to protect sensitive
 data-in-flight (and sniffing wasn't going to get him a lot of credit
 card data anymore).


OK, and that's where we get into poor use of
data.  Yes, sniffing of passwords existed back
then.  So we know that sniffing is quite possible
and, at reasonable scale, technically plausible.

But the motive for sniffing back then was different.
It was for attacking boxes.  An access attack.  Not
for the purpose of theft of commercial data.  It
was a postulation that those who attacked boxes
for access would also sniff for credit cards.  But
we think that was a stretch (hence the
outrageous title of this post), at least up until
recently.

Before 2004, these forces and
attackers were disconnected.  In 2004 they joined
forces.  In which case, you do now have quite a
good case that the installation of sniffers could be
used if there was nothing else worth picking up.
So at least we now have the motive cleared up,
if not the economic attack.

(Darn ... I seem to have argued your case for you ;-) )

 That this behaviour (sniffing) is a thing of the past does not mean it's
 not coming back to you if things are turned around: adversaries are
 strategically thinking people who adapt very quickly to new
 circumstances.

Indeed.  It also doesn't mean that they will come
and attack.  Maybe it is a choice between the
attack that is happening right now and the attack
that will come back.  Or maybe the choice is
not really there, maybe we can cover both if
we put our thinking caps on?

 The discussion reminds me a bit of other popular economic issues: Many
 politicians and some economists all over the world, every year, come
 back to asking "Can't we loosen the control on inflation a bit?
 Look, inflation is a thing of the past, we never got over 3% in the last
 umpteen years, let's trigger some employment by relaxing monetary
 discipline now."  The point is: it might work - but if not, your economy
 may end up in tiny little pieces.  It's quite a risk, because you cannot
 test it.  So the stance of many people is to be very conservative on
 things like that - and security folks are no exception.  Maybe fiddling
 with SSL is really a nice idea.  But if it fails at some point and we
 don't have a fallback infrastructure that's going to protect us from the
 sniffer-collector of the 90s, adversaries will be quite happy to bring
 them to new interesting uses then

Nice analogy!  Like all analogies, it should be taken
for its descriptive power, not prescription.

The point being that one should not slavishly stick
to an argument, one needs to establish principles.
One principle is that we protect where money is being
lost, over and above somewhere where someone
says it was once lost in the past.  And at least then
we'll learn the appropriate balance when we get it
wrong, which can't be much worse than now, because
we are getting it really wrong at the moment.

(On the monetary economics analogy, if you said your
principle was to eliminate inflation, I'd say fine!  There
is an easy way to do just that, just use gold as money,
which has maintained its value throughout recorded
history, not just the last century!  The targets debate
has been echoing on for decades, and there is no
real end in sight.)

  So I would suggest that listening for credit cards will
  never ever be an economic attack.  Sniffing for random
  credit cards at the doorsteps of amazon will never ever
  be an economic attack, not because it isn't possible,

Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-01 Thread Ian G
On Tuesday 31 May 2005 19:38, Steven M. Bellovin wrote:
 In message [EMAIL PROTECTED], Ian G writes:
 On Tuesday 31 May 2005 02:17, Steven M. Bellovin wrote:
  In message [EMAIL PROTECTED], James A. Donald writes:
  --
  PKI was designed to defeat man in the middle attacks
  based on network sniffing, or DNS hijacking, which
  turned out to be less of a threat than expected.
 
  First, you mean the Web PKI, not PKI in general.
 
  The next part of this is circular reasoning.  We don't see network
  sniffing for credit card numbers *because* we have SSL.
 
 I think you meant to write that James' reasoning is
 circular, but strangely, your reasoning is at least as
 unfounded - correlation not causality.  And I think
 the evidence is pretty much against any causality,
 although this will be something that is hard to show,
 in the absence.

 Given the prevalence of password sniffers as early as 1993, and given
 that credit card number sniffing is technically easier -- credit card
 numbers will tend to be in a single packet, and comprise a
 self-checking string, I stand by my statement.


Well, I'm not arguing it is technically hard.  It's just
un-economic.  In the same sense that it is not technically
difficult for us to get in a car and go run someone
over;  but we still don't do it.  And we don't ban the
roads nor insist on our butlers walking with a red
flag in front of the car, either.  Well, not any more.

So I stand by my statement - correlation is not causality.

  * AFAICS, a non-trivial proportion of credit
 card traffic occurs over totally unprotected
 traffic, and that has never been sniffed as far as
 anyone has ever reported.  (By this I mean lots of
 small merchants with MOTO accounts that don't
 bother to set up proper SSL servers.)

 Given what a small percentage of ecommerce goes to those sites, I don't
 think it's really noticeable.


Exactly my point.  Sniffing isn't noticeable - neither
in the cases where we know it could happen, nor in other
areas.  The one place where it has been noticed is
with passwords and what we know from that experience
is that even the slightest security works to overcome
that threat.  SSH is overkill, compared to the password
mailouts that successfully protect online password sites.

  * We know that from our experiences
 of the wireless 802.11 crypto - even though we've
 got repeated breaks and the FBI even demonstrating
 how to break it, and the majority of people don't even
 bother to turn on the crypto, there remains practically
 zero evidence that anyone is listening.
 
   FBI tells you how to do it:
   https://www.financialcryptography.com/mt/archives/000476.

 Sure -- but setting up WEP is a nuisance.  SSL (mostly) just works.

SSH just works - and it worked directly against the
threat you listed above (password sniffing).  But it
has no PKI to speak of, and this discussion is about
whether PKI protects people, because it is PKI that is
supposed to protect against spoofing - a.k.a. phishing.

And it is PKI that keeps SSL from "just setting up".
Anyone who's ever had to set up an Apache web
server for SSL has had to ask themselves the
question ... why doesn't this just work?
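
The complaint above can be made concrete.  A minimal sketch of what "setting up SSL" on Apache actually involves (paths and hostname are hypothetical; a CA-issued certificate additionally requires generating a CSR and buying a signature from the CA):

```apache
# First, generate a private key and a self-signed certificate
# (a one-time manual step, run on the server):
#
#   openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
#       -keyout /etc/apache2/ssl/server.key \
#       -out    /etc/apache2/ssl/server.crt
#
# Then wire the pair into an HTTPS virtual host:
<VirtualHost *:443>
    ServerName www.example.com
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/server.crt
    SSLCertificateKeyFile /etc/apache2/ssl/server.key
</VirtualHost>
```

Note that a self-signed certificate like this one still triggers a browser warning - the PKI step, getting the certificate signed by a CA the browser already trusts, is precisely the part that doesn't "just work".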

 As 
 for your assertion that no one is listening, I'm not sure what kind of
 evidence you'd seek.  There's plenty of evidence that people abuse
 unprotected access points to gain connectivity.

Simply, evidence that people are listening.  Sniffing
by means of the wire.

Evidence that people abuse access points to gain
connectivity has nothing to do with sniffing traffic to steal
information.  That's theft of access, which is a fairly
minor issue, especially as it doesn't have any
economic damages worth speaking of.  In fact,
many cases seem to be more accidental access
where neighbours end up using each other's access
points because the software doesn't know where the
property lines are.


  Since many of
  the worm-spread pieces of spyware incorporate sniffers, I'd say that
  part of the threat model is correct.
 
 But this is totally incorrect!  The spyware installs on the
 users' machines, and thus does not need to sniff the
 wire.  The assumption of SSL is (as written up in Eric's
 fine book) that the wire is insecure and the node is
 secure, and if the node is insecure then we are sunk.

 I meant precisely what I said and I stand by my statement.  I'm quite
 well aware of the difference between network sniffers and keystroke
 loggers.


OK, so maybe I am incorrectly reading this - are you
saying that spyware is being delivered that incorporates
wire sniffers?  Sniffers that listen to the ethernet traffic?

If that's the case, that is the first I've heard of it.  What
is it that these sniffers are listening for?

   Eric's book and 1.2 The Internet Threat Model
   http://iang.org/ssl/rescorla_1.html
 
 Presence of keyboard sniffing does not give us any
 evidence at all towards wire sniffing and only serves
 to further embarrass the SSL threat model.
 
  As for DNS hijacking -- that's what's 

Re: SSL stops credit card sniffing is a correlation/causality myth

2005-05-31 Thread Steven M. Bellovin
In message [EMAIL PROTECTED], Ian G writes:
On Tuesday 31 May 2005 02:17, Steven M. Bellovin wrote:
 In message [EMAIL PROTECTED], James A. Donald writes:
 --
 PKI was designed to defeat man in the middle attacks
 based on network sniffing, or DNS hijacking, which
 turned out to be less of a threat than expected.

 First, you mean the Web PKI, not PKI in general.

 The next part of this is circular reasoning.  We don't see network
 sniffing for credit card numbers *because* we have SSL.

I think you meant to write that James' reasoning is
circular, but strangely, your reasoning is at least as
unfounded - correlation not causality.  And I think
the evidence is pretty much against any causality,
although this will be something that is hard to show,
in the absence.

Given the prevalence of password sniffers as early as 1993, and given 
that credit card number sniffing is technically easier -- credit card 
numbers will tend to be in a single packet, and comprise a 
self-checking string, I stand by my statement.
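
The "self-checking string" property is the Luhn checksum built into card numbers: the final digit validates the rest, so a sniffer scanning raw payloads can pick out genuine card numbers cheaply, one packet at a time.  A minimal illustration in Python (the function names and regex are mine, not taken from any actual sniffer):

```python
import re

def luhn_valid(digits: str) -> bool:
    """Return True if the digit string passes the Luhn check."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9       # same as summing the two digits of d
        total += d
    return total % 10 == 0

def find_card_numbers(payload: str):
    """Yield 16-digit runs in a payload that pass the Luhn check."""
    for match in re.finditer(r"\b\d{16}\b", payload):
        if luhn_valid(match.group()):
            yield match.group()

# "4111111111111111" is the well-known Visa test number (Luhn-valid);
# the second 16-digit run fails the check and is filtered out.
print(list(find_card_numbers("pan=4111111111111111&x=1234567890123456")))
# → ['4111111111111111']
```

This is why a single sniffed packet suffices: the number validates itself, with no surrounding context needed.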

 * AFAICS, a non-trivial proportion of credit
card traffic occurs over totally unprotected
traffic, and that has never been sniffed as far as
anyone has ever reported.  (By this I mean lots of
small merchants with MOTO accounts that don't
bother to set up proper SSL servers.)

Given what a small percentage of ecommerce goes to those sites, I don't 
think it's really noticeable.

 * We know that from our experiences
of the wireless 802.11 crypto - even though we've
got repeated breaks and the FBI even demonstrating
how to break it, and the majority of people don't even
bother to turn on the crypto, there remains practically
zero evidence that anyone is listening.

  FBI tells you how to do it:
  https://www.financialcryptography.com/mt/archives/000476.

Sure -- but setting up WEP is a nuisance.  SSL (mostly) just works.  As 
for your assertion that no one is listening, I'm not sure what kind of 
evidence you'd seek.  There's plenty of evidence that people abuse 
unprotected access points to gain connectivity.

As an alternate hypothesis, credit cards are not
sniffed and never will be sniffed simply because
that is not economic.  If you can hack a database
and lift 10,000++ credit card numbers, or simply
buy the info from some insider, why would an
attacker ever bother to try and sniff the wire to
pick up one credit card number at a time?

Sure -- that's certainly the easy way to do it.

And if they did, why would we care?  Better to
let a stupid thief find a way to remove himself from
a life of crime than to channel him into a really
dangerous and expensive crime like phishing,
box cracking, and purchasing identity info from
insiders.

 Since many of 
 the worm-spread pieces of spyware incorporate sniffers, I'd say that
 part of the threat model is correct.

But this is totally incorrect!  The spyware installs on the
users' machines, and thus does not need to sniff the
wire.  The assumption of SSL is (as written up in Eric's
fine book) that the wire is insecure and the node is
secure, and if the node is insecure then we are sunk.

I meant precisely what I said and I stand by my statement.  I'm quite 
well aware of the difference between network sniffers and keystroke 
loggers.

  Eric's book and 1.2 The Internet Threat Model
  http://iang.org/ssl/rescorla_1.html

Presence of keyboard sniffing does not give us any
evidence at all towards wire sniffing and only serves
to further embarrass the SSL threat model.

 As for DNS hijacking -- that's what's behind pharming attacks.  In
 other words, it's a real threat, too.

Yes, that's being tried now too.  This is I suspect the
one area where the SSL model correctly predicted
a minor threat.  But from what I can tell, server-based
DNS hijacking isn't that successful for the obvious
reasons (attacking the ISP to get to the user is a
higher risk strategy than makes sense in phishing).

They're using cache contamination attacks, among other things.

...


As perhaps further evidence of the black mark against
so-called secure browsing, phishers still have not
bothered to acquire control-of-domain certs for $30
and use them to spoof websites over SSL.

Now, that's either evidence that $30 is too much to
pay, or that users just ignore the certs and padlocks
so it is no big deal anyway.  Either way, a model
that is bypassed so disparagingly without even a
direct attack on the PKI is not exactly recommending
itself.

I agree completely that virtually no one checks certificates (or even 
knows what they are).


--Steven M. Bellovin, http://www.cs.columbia.edu/~smb





Re: SSL stops credit card sniffing is a correlation/causality myth

2005-05-31 Thread Perry E. Metzger

Ian G [EMAIL PROTECTED] writes:
 On Tuesday 31 May 2005 02:17, Steven M. Bellovin wrote:
 The next part of this is circular reasoning.  We don't see network
 sniffing for credit card numbers *because* we have SSL.

 I think you meant to write that James' reasoning is
 circular, but strangely, your reasoning is at least as
 unfounded - correlation not causality.  And I think
 the evidence is pretty much against any causality,
 although this will be something that is hard to show,
 in the absence.

  * AFAICS, a non-trivial proportion of credit
 card traffic occurs over totally unprotected
 traffic, and that has never been sniffed as far as
 anyone has ever reported.

Perhaps you are unaware of it because no one has chosen to make you
aware of it. However, sniffing is used quite frequently in cases where
information is not properly protected. I've personally dealt with
several such situations.

Bluntly, it is obvious that SSL has been very successful in thwarting
certain kinds of interception attacks. I would expect that without it,
we'd see mass harvesting of credit card numbers at particularly
vulnerable parts of the network, such as in front of important
merchants. The fact that phishing and other attacks designed to force
people to disgorge authentication information has become popular is a
tribute to the fact that sniffing is not practical.

The bogus PKI infrastructure that SSL generally plugs in to is, of
course, a serious problem. Phishing attacks, pharming attacks and
other such stuff would be much harder if SSL weren't mostly used with
an unworkable fake PKI. (Indeed, I'd argue that PKI as envisioned is
unworkable.)  However, that doesn't make SSL any sort of failure -- it
has been an amazing success.

  * We know that from our experiences
 of the wireless 802.11 crypto - even though we've
 got repeated breaks and the FBI even demonstrating
 how to break it, and the majority of people don't even
 bother to turn on the crypto, there remains practically
 zero evidence that anyone is listening.

Where do you get that idea? Break-ins to firms over their unprotected
802.11 networks are not infrequent occurrences. Perhaps you're unaware
of whether anyone is listening in to your home network, but I suspect
there is very little that is interesting to listen in to on your home
network, so there is little incentive for anyone to break it.

 As for DNS hijacking -- that's what's behind pharming attacks.  In
 other words, it's a real threat, too.

 Yes, that's being tried now too.  This is I suspect the
 one area where the SSL model correctly predicted
 a minor threat.  But from what I can tell, server-based
 DNS hijacking isn't that successful for the obvious
 reasons

You are wrong there again.

Where are you getting your information from? Whoever your informant
is, they're not giving you accurate information.


-- 
Perry E. Metzger[EMAIL PROTECTED]



Re: SSL stops credit card sniffing is a correlation/causality myth

2005-05-31 Thread Anne Lynn Wheeler

Steven M. Bellovin wrote:
Given the prevalence of password sniffers as early as 1993, and given 
that credit card number sniffing is technically easier -- credit card 
numbers will tend to be in a single packet, and comprise a 
self-checking string, I stand by my statement.


the major exploits have involved data-at-rest ... not data-in-flight. 
internet credit card sniffing can be easier than password sniffing, 
but that doesn't mean that the fraud cost/benefit ratio is better than 
harvesting large transaction database files. you could possibly 
conjecture password sniffing enabling compromise/exploits of 
data-at-rest ... a quick in/out that may yield months' worth of transaction 
information, all nicely organized.


to a large extent SSL was used to show that internet/e-commerce wouldn't 
result in the theoretical sniffing making things worse (as opposed to 
addressing the major fraud vulnerability and threat).


internet/e-commerce did increase the threats and vulnerabilities to the 
transaction database files (data-at-rest) ... which is where the major 
threat has been. There has been a proliferation of internet merchants 
with electronic transaction database files ... where there may be 
various kinds of internet access to the databases. Even when the 
prevalent risk to these files has been from insiders ... the possibility 
of outsider compromise can still obfuscate tracking down who is actually 
responsible.




Re: SSL stops credit card sniffing is a correlation/causality myth

2005-05-31 Thread Ian G
On Tuesday 31 May 2005 21:03, Perry E. Metzger wrote:
 Ian G [EMAIL PROTECTED] writes:
  On Tuesday 31 May 2005 02:17, Steven M. Bellovin wrote:
  The next part of this is circular reasoning.  We don't see network
  sniffing for credit card numbers *because* we have SSL.
 
  I think you meant to write that James' reasoning is
  circular, but strangely, your reasoning is at least as
  unfounded - correlation not causality.  And I think
  the evidence is pretty much against any causality,
  although this will be something that is hard to show,
  in the absence.
 
   * AFAICS, a non-trivial proportion of credit
  card traffic occurs over totally unprotected
  traffic, and that has never been sniffed as far as
  anyone has ever reported.

 Perhaps you are unaware of it because no one has chosen to make you
 aware of it. However, sniffing is used quite frequently in cases where
 information is not properly protected. I've personally dealt with
 several such situations.


This leads to a big issue.  If there are no reliable reports,
what are we to believe in?  Are we to believe that the
problem doesn't exist because there is no scientific data,
or are we to believe those that say "I assure you it is a
big problem"?

It can't be the latter;  not because I don't believe you in
particular, but because the industry as a whole has not
the credibility to make such a statement.  Everyone who
makes such a statement is likely to be selling some
service designed to benefit from that statement, which
makes it very difficult to simply believe on the face of it.

The only way we can overcome this issue is data.  If
you have seen such situations, document them and
report them - on forums like these.  Anonymise them
suitably if you have to.

Another way of looking at this is to look at Choicepoint.
For years, we all suspected that the real problem was
the insider / node problem.  The company was where
the leaks occurred, traditionally.

But nobody had any data.  Until Choicepoint.  Now we
have data.  We know how big a problem the node is.
We now know that the problem inside the company is
massive.

So we need to see a Choicepoint for listening and
sniffing and so forth.  And we need that before we can
consider the listening threat to be economically validated.


 Bluntly, it is obvious that SSL has been very successful in thwarting
 certain kinds of interception attacks. I would expect that without it,
 we'd see mass harvesting of credit card numbers at particularly
 vulnerable parts of the network, such as in front of important
 merchants. The fact that phishing and other attacks designed to force
 people to disgorge authentication information has become popular is a
 tribute to the fact that sniffing is not practical.

And I'd expect to see massive email scanning by
now of, say, lawyers' email at ISPs.  But, no, very
little has occurred.

 The bogus PKI infrastructure that SSL generally plugs in to is, of
 course, a serious problem. Phishing attacks, pharming attacks and
 other such stuff would be much harder if SSL weren't mostly used with
 an unworkable fake PKI. (Indeed, I'd argue that PKI as envisioned is
 unworkable.)  However, that doesn't make SSL any sort of failure -- it
 has been an amazing success.

In this we agree.  Indeed, my thrust all along in
attacking PKI has been to get people to realise
that the PKI doesn't do nearly as much as people
think, and therefore it is OK to consider improving
it.  Especially, where it is weak and where attackers
are attacking.

Unfortunately, PKI and SSL are considered to be
sacrosanct and perfect by the community.  As these
two things working together are what protects people
from phishing (site spoofing), fixing them requires
people to recognise that the PKI isn't doing the job.

The cryptography community especially should get
out there and tell developers and browser implementors
that the reason phishing is taking place is that the
browser security model is being bypassed, and that
some tweaks are needed.

   * We know that from our experiences of the wireless 802.11
  crypto - even though we've had repeated breaks, the FBI even
  demonstrating how to break it, and the majority of people not
  bothering to turn on the crypto, there remains practically
  zero evidence that anyone is listening.

 Where do you get that idea? Break-ins to firms over their unprotected
 802.11 networks are not infrequent occurrences. Perhaps you're unaware
 of whether anyone is listening in to your home network, but I suspect
 there is very little that is interesting to listen in to on your home
 network, so there is little incentive for anyone to break it.

Can you distinguish between break-ins and sniffing
and listening attacks?  Break-ins, sure, I've seen a
few cases of that.  In each case the hackers tried to
break into an unprotected site that was accessible
over an unprotected 802.11.

My point though is that this attack is not listening.
It's an access attack.  So one must be careful ...

Re: SSL stops credit card sniffing is a correlation/causality myth

2005-05-31 Thread Perry E. Metzger

Ian G [EMAIL PROTECTED] writes:
 Perhaps you are unaware of it because no one has chosen to make you
 aware of it. However, sniffing is used quite frequently in cases where
 information is not properly protected. I've personally dealt with
 several such situations.

 This leads to a big issue.  If there are no reliable reports,
 what are we to believe in?  Are we to believe that the
 problem doesn't exist because there is no scientific data,
 or are we to believe those that say I assure you it is a
 big problem?
[...]
 The only way we can overcome this issue is data.

You aren't going to get it. The companies that get victimized have a
very strong incentive not to share incident information very
widely. However, those of us who actually make our living in the field
generally have a pretty strong sense of what is going wrong out there.

 It can't be the latter;  not because I don't believe you in
 particular, but because the industry as a whole has not
 the credibility to make such a statement.  Everyone who
 makes such a statement is likely to be selling some
 service designed to benefit from that statement, which
 makes it very difficult to simply believe on the face of it.

Those who work as consultants to large organizations, or as internal
security personnel at them, tend to be fairly independent of particular
vendors. I don't have any financial reason to recommend particular
firms over others, and customers generally are in a position to judge
for themselves whether what gets recommended is a good idea or not.

 If you have seen such situations, document them and report them - on
 forums like these.  Anonymise them suitably if you have to.

Many of us actually take our contract obligations not to talk about
our customers quite seriously, and in any case, anonymous anecdotal
reports about unnamed organizations aren't really data in the
traditional sense. You worry about vendors spreading FUD -- well, why
do you assume you can trust anonymous comments not to be FUD from
vendors?

You don't really need to hear much from me or others on this sort of
thing, though. Pretty much common sense and reasoning will tell you
things like the bad guys attack the weak points etc. Experience says
if you leave a vulnerability, it will be exploited eventually, so you
try not to leave any.

All the data in the world isn't going to help you anyway. We're not
talking about what percentage of patients with melanoma respond
positively to what drug. Melanomas aren't intelligent and don't change
strategy based on what other melanomas are doing. Attack strategies
change. Attackers actively alter their behavior to match conditions.

The way real security professionals have to work is analysis and
conservatism. We assume we're dumb, we assume we'll make mistakes, we
try to put in as many checks as possible to prevent single points of
failure from causing trouble. We assume machines will be broken in to
and try to minimize the impact of that. We assume some employees will
turn bad at some point and try to have things work anyway in spite of
that.

 Another way of looking at this is to look at Choicepoint.
 For years, we all suspected that the real problem was
 the insider / node problem.  The company was where
 the leaks occurred, traditionally.

 But nobody had any data.  Until Choicepoint.  Now we
 have data.

No you don't.

1) You have one anecdote. You really have no idea how
   frequently this happens, etc. 
2) It doesn't matter how frequently it happens, because no two
   companies are identical. You can't run 100 choicepoints and see
   what percentage have problems.
3) If you're deciding on how to set up your firm's security, you can't
   say 95% of the time no one attacks you so we won't bother, for
   the same reason that you can't say if I drive my car while
   slightly drunk 95% of the time I'll arrive safe, because the 95%
   of the time that nothing happens doesn't matter if the cost of the
   5% is so painful (like, say, death) that you can't recover from
   it. In particular, you don't want to be someone on whose watch a
   major breach happens. Your career is over even if it never happens
   to anyone else in the industry.
4) Most of what you have to worry about is obvious anyway. There's
   nothing really new here. We've understood that people were the main
   problem in security systems since before computer security. Ever
   wonder why accounting controls are set up the way they are? How
   long were people separating the various roles in an accounting
   system to prevent internal collusion? That goes back long before
   computers.

 So we need to see a Choicepoint for listening and sniffing and so
 forth.

No, we really don't.

 And we need that before we can consider the listening threat to be
 economically validated.

Spoken like someone who hasn't actually worked inside the field.

Statistics and the sort of economic analysis you speak of depends on
assumptions like statistical independence and the 

and constrained subordinate CA costs? (Re: SSL Cert prices ($10 to $1500, you choose!))

2005-03-25 Thread Adam Back
The URL John forwarded gives survey of prices for regular certs and
subdomain wildcard certs/super certs (ie *.mydomain.com all considered
valid with respect to a single cert).

Does anyone have info on the cost of a subordinate CA cert with a name
space constraint (limited to issuing certs on domains which are
sub-domains of a domain of your choice... ie only valid to issue certs
on sub-domains of foo.com).

Maybe the answer is a lot of money... CA operators probably view
users of this kind of tech as corporations with big infrastructure
to secure.  It sounds like http://www.thawte.com/spki/ offers this
kind of service.  However it sounds like it's web based, so they have
really just bundled a more streamlined way to create lots of certs.

(Thawte's spki here means starter PKI, not simple PKI.)

Adam

On Fri, Mar 04, 2005 at 03:53:51PM -0800, John Gilmore wrote:
 For the privilege of being able to communicate securely using SSL and a
 popular web browser, you can pay anything from $10 to $1500.  Clif
 Cox researched cert prices from various vendors:
 
   http://neo.opn.org/~clif/SSL_CA_Notes.html

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL/TLS passive sniffing

2005-01-06 Thread Werner Koch
On Wed, 5 Jan 2005 08:49:36 +0800, Enzo Michelangeli said:

 That's basically what /dev/urandom does, no?  (Except that it has the
 undesirable side-effect of depleting the entropy estimate maintained
 inside the kernel.)

 This entropy depletion issue keeps coming up every now and then, but I
 still don't understand how it is supposed to happen. If the PRNG uses a

It is a practical issue: heavy reads from /dev/urandom deplete the
kernel's entropy estimate, which can leave other processes blocked
indefinitely on /dev/random.

The Linux implementation of /dev/urandom is identical to /dev/random,
but instead of blocking (as /dev/random does on a low entropy
estimate) it continues to give output by falling back to a PRNG mode
of operation.

For services with a high demand for random numbers it is probably
better to employ their own PRNG and reseed it from /dev/random from
time to time.
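
The reseeding approach Werner describes can be sketched as follows. This is an illustrative toy, not a vetted DRBG: `ReseededPRNG`, the SHA-256 counter construction, and the 60-second interval are invented for the example, and `os.urandom` stands in for a slow entropy source such as /dev/random.

```python
import hashlib
import os
import time

class ReseededPRNG:
    """Toy hash-based PRNG that periodically reseeds from a slow
    entropy source, so high-volume requests don't hit the kernel
    pool on every call.  Illustration only -- not a vetted DRBG."""

    def __init__(self, entropy_source, reseed_interval=60.0):
        self.entropy_source = entropy_source  # callable: n -> n random bytes
        self.reseed_interval = reseed_interval
        self.counter = 0
        self._reseed()

    def _reseed(self):
        # On Linux one might read from /dev/random here instead.
        self.state = hashlib.sha256(self.entropy_source(32)).digest()
        self.last_reseed = time.monotonic()

    def read(self, n):
        if time.monotonic() - self.last_reseed > self.reseed_interval:
            self._reseed()
        out = b""
        while len(out) < n:
            self.counter += 1
            out += hashlib.sha256(
                self.state + self.counter.to_bytes(8, "big")).digest()
        return out[:n]

# os.urandom stands in for the slow source to keep the sketch portable.
prng = ReseededPRNG(os.urandom, reseed_interval=60.0)
data = prng.read(1024)
```

A real deployment would use a reviewed construction (such as a standard DRBG) rather than this ad-hoc hash chain.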


Salam-Shalom,

   Werner






Re: SSL/TLS passive sniffing

2005-01-04 Thread John Denker
I wrote:
If the problem is a shortage of random bits, get more random bits!
Florian Weimer responded:
We are talking about a stream of several kilobits per second on a busy
server (with suitable mailing lists, of course).  This is impossible
to obtain without special hardware.
Not very special, as I explained:
Almost every computer sold on the mass market these days has a sound
system built in. That can be used to generate industrial-strength
randomness at rates more than sufficient for the applications we're
talking about.  
How many bits per second can you produce using an off-the-shelf sound
card?  Your paper gives a number in excess of 14 kbps, if I read it
correctly, which is surprisingly high.
1) You read it correctly.
  http://www.av8n.com/turbid/paper/turbid.htm#tab-soundcards
2) The exact number depends on details of your soundcard.  14kbits/sec
was obtained from a plain-vanilla commercial-off-the-shelf desktop
system with AC'97 audio.  You can of course do worse if you try (e.g.
Creative Labs products) but it is easy to do quite a bit better.
I obtained in excess of 70kbits/sec using an IBM laptop manufactured
in 1998.
3) Why should this be surprising?
It's an interesting approach, but for a mail server which mainly sends
to servers with self-signed certificates, it's overkill.  
Let's see
 -- Cost = zero.
 -- Quality = more than enough.
 -- Throughput = more than enough.
I see no reason why I should apologize for that.
Debian also
supports a few architectures for which sound cards are hard to obtain.
And we would separate desktop and server implementations because the
sound card is used on desktops.  I'd rather sacrifice forward secrecy
than to add such complexity.
As the proverb says, no matter what you're trying to do, you can always
do it wrong.  If you go looking for potholes, you can always find a
pothole to fall into if you want.
But if you're serious about solving the problem, just go solve the
problem.  It is eminently solvable;  no sacrifices required.


Re: SSL/TLS passive sniffing

2005-01-04 Thread Greg Rose
At 22:51 2004-12-22 +0100, Florian Weimer wrote:
* John Denker:
 Florian Weimer wrote:

 Would you recommend to switch to /dev/urandom (which doesn't block if
 the entropy estimate for the in-kernel pool reaches 0), and stick to
 generating new DH parameters for each connection,

 No, I wouldn't.
Not even for the public parameters?
Am I understanding correctly? Does SSL/TLS really generate a new P and G 
for each connection? If so, can someone explain the rationale behind this? 
It seems insane to me. And not doing so would certainly ease the problem on 
the entropy pool, not to mention CPU load for primality testing.

I must be misunderstanding. Surely. Please?
Greg.

Greg Rose                   INTERNET: [EMAIL PROTECTED]
Qualcomm Incorporated       VOICE: +1-858-651-5733   FAX: +1-858-651-5766
5775 Morehouse Drive        http://people.qualcomm.com/ggr/
San Diego, CA 92121         232B EC8F 44C6 C853 D68F E107 E6BF CD2F 1081 A37C


Re: SSL/TLS passive sniffing

2004-12-22 Thread Florian Weimer
* Victor Duchovni:

 The third mode is quite common for STARTTLS with SMTP if I am not
 mistaken. A one day sample of inbound TLS email has the following cipher
 frequencies:

 8221(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
 6529(using TLSv1 with cipher EDH-RSA-DES-CBC3-SHA (168/168 bits))

The Debian folks have recently stumbled upon a problem in this area:
Generating the ephemeral DH parameters is expensive, in terms of CPU
cycles, but especially in PRNG entropy.  The PRNG part means that it's
not possible to use /dev/random on Linux, at least on servers.  The
CPU cycles spent on bignum operations aren't a real problem.

Would you recommend to switch to /dev/urandom (which doesn't block if
the entropy estimate for the in-kernel pool reaches 0), and stick to
generating new DH parameters for each connection, or is it better to
generate them once per day and use it for several connections?

(There's a second set of parameters related to the RSA_EXPORT mode in
TLS, but I suppose it isn't used much, and supporting it is not a top
priority.)



Re: SSL/TLS passive sniffing

2004-12-22 Thread John Denker
Florian Weimer wrote:
Would you recommend to switch to /dev/urandom (which doesn't block if
the entropy estimate for the in-kernel pool reaches 0), and stick to
generating new DH parameters for each connection, 
No, I wouldn't.
 or ...
generate them once per day and use it for several connections?
I wouldn't do that, either.

If the problem is a shortage of random bits, get more random bits!
Almost every computer sold on the mass market these days has a sound
system built in. That can be used to generate industrial-strength
randomness at rates more than sufficient for the applications we're
talking about.  (And if you can afford to buy a non-mass-market
machine, you can afford to plug a sound-card into it.)
For a discussion of the principles of how to get arbitrarily close
to 100% entropy density, plus working code, see:
  http://www.av8n.com/turbid/


Re: SSL/TLS passive sniffing

2004-12-22 Thread Victor Duchovni
On Sun, Dec 19, 2004 at 05:24:59PM +0100, Florian Weimer wrote:

 * Victor Duchovni:
 
  The third mode is quite common for STARTTLS with SMTP if I am not
  mistaken. A one day sample of inbound TLS email has the following cipher
  frequencies:
 
  8221(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
  6529(using TLSv1 with cipher EDH-RSA-DES-CBC3-SHA (168/168 bits))
 
 The Debian folks have recently stumbled upon a problem in this area:
 Generating the ephemeral DH parameters is expensive, in terms of CPU
 cycles, but especially in PRNG entropy.  The PRNG part means that it's
 not possible to use /dev/random on Linux, at least on servers.  The
 CPU cycles spent on bignum operations aren't a real problem.
 
 Would you recommend to switch to /dev/urandom (which doesn't block if
 the entropy estimate for the in-kernel pool reaches 0), and stick to
 generating new DH parameters for each connection, or is it better to
 generate them once per day and use it for several connections?
 

Actually, reasoning along these lines is why Lutz Jaenicke implemented
PRNGD; it is strongly recommended (at least by me) that mail servers
use PRNGD or similar.  PRNGD delivers pseudo-random numbers, mixing in
real entropy periodically.

EGD, /dev/random and /dev/urandom don't produce bits fast enough. Also
Postfix internally seeds the built-in OpenSSL PRNG via the tlsmgr process
and this hands out seeds for smtp servers and clients, so the demand for
real entropy is again reduced.

Clearly a PRNG is a compromise (if the algorithm is found to be weak we
could have problems), but real entropy is just too expensive.

I use PRNGD.

-- 

 /\ ASCII RIBBON                             NOTICE: If received in error,
 \ / CAMPAIGN       Victor Duchovni          please destroy and notify
  X  AGAINST        IT Security,             sender. Sender does not waive
 / \ HTML MAIL      Morgan Stanley           confidentiality or privilege,
                                             and use is prohibited.



Re: SSL/TLS passive sniffing

2004-12-22 Thread Florian Weimer
* Victor Duchovni:

 The Debian folks have recently stumbled upon a problem in this area:
 Generating the ephemeral DH parameters is expensive, in terms of CPU
 cycles, but especially in PRNG entropy.  The PRNG part means that it's
 not possible to use /dev/random on Linux, at least on servers.  The
 CPU cycles spent on bignum operations aren't a real problem.
 
 Would you recommend to switch to /dev/urandom (which doesn't block if
 the entropy estimate for the in-kernel pool reaches 0), and stick to
 generating new DH parameters for each connection, or is it better to
 generate them once per day and use it for several connections?
 

 Actually reasoning along these lines is why Lutz Jaenicke implemented
 PRNGD, it is strongly recommended (at least by me) that mail servers
 use PRNGD or similar.  PRNGD delivers pseudo-random numbers mixing in
 real entropy periodically.

 EGD, /dev/random and /dev/urandom don't produce bits fast enough.

Is this the only criticism of /dev/urandom (on Linux, at least)?  Even
on ancient hardware (P54C at 200 MHz), I can suck about 150 kbps out
of /dev/urandom, which is more than enough for our purposes.  (It's
not a web server, after all.)
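
Florian's figure is easy to reproduce. A rough sketch of such a measurement (the function name and sizes are invented for illustration; `os.urandom` draws from the kernel CSPRNG, i.e. /dev/urandom on Linux):

```python
import os
import time

def urandom_throughput(total=1 << 20, chunk=4096):
    """Crudely measure how fast the OS CSPRNG hands out bytes."""
    start = time.perf_counter()
    produced = 0
    while produced < total:
        os.urandom(chunk)  # reads /dev/urandom on Linux
        produced += chunk
    return produced / (time.perf_counter() - start)  # bytes per second

print(f"~{urandom_throughput() / 1024:.0f} KiB/s")
```

On any modern machine this yields far more than the few kilobits per second a busy mail server needs, which is Florian's point.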

I'm slightly troubled by claims such as this one:

  http://lists.debian.org/debian-devel/2004/12/msg01950.html

I know that Linux' /dev/random implementation has some problems (I
believe that the entropy estimates for mouse movements are a bit
unrealistic, somewhere around 2.4 kbps), but the claim that generating
session keys from /dev/urandom is a complete no-no is rather
surprising.



Re: SSL/TLS passive sniffing

2004-12-05 Thread Dirk-Willem van Gulik


On Wed, 1 Dec 2004, Anne  Lynn Wheeler wrote:

 the other attack is on the certification authorities business process

Note that in a fair number of certificate-issuing processes common in
industry the CA (sysadmin) generates both the private key -and- the
certificate, signs it, and then exports both to the user's PC (usually
as part of a VPN or Single Sign-On setup). I've seen situations more
than once where the 'CA' keeps a copy of both on file, generally to
ensure that after the termination of an employee or the loss of a
laptop things 'can be set right' again.

Suffice to say that this makes eavesdropping even easier.

Dw



RE: SSL/TLS passive sniffing

2004-12-05 Thread Anton Stiglic
This sounds very confused.  Certs are public.  How would knowing a copy
of the server cert help me to decrypt SSL traffic that I have intercepted?

I find a lot of people mistakenly use the term certificate to mean
something like a PKCS#12 file containing a public key certificate and
private key.  Maybe it comes from crypto software salespeople who
oversimplify or don't really understand the technology.  I don't know,
but it's a rant I have.

Now if I had a copy of the server's private key, that would help, but such
private keys are supposed to be closely held.

Or are you perhaps talking about some kind of active man-in-the-middle
attack, perhaps exploiting DNS spoofing?  It doesn't sound like it, since
you mentioned passive sniffing.

I guess the threat would be something like an adversary getting access to a
web server, getting a hold of the private key (which in most cases is just
stored in a file; a lot of servers need to be bootable without intervention
as well, so there is a password somewhere in the clear that allows one to
unlock the private key), and then using it from a distance, say on a router
near the server where the adversary can sniff the connections.  A malicious
ISP admin could pull off something like that, law authority that wants to
read your messages, etc.

Is that a threat worth mentioning?  Well, it might be.  In any case,
forward-secrecy is what can protect us here.  Half-certified (or fully
certified) ephemeral Diffie-Hellman provides us with that property.

Of course, if someone could get the private signature key, he could then do
a man-in-the-middle attack and decrypt all messages as well.  It wouldn't
really be that much harder to pull off.

--Anton




Re: SSL/TLS passive sniffing

2004-12-05 Thread Anne Lynn Wheeler
Anton Stiglic wrote:
I found a lot of people mistakenly use the term certificate to mean
something like a pkcs12 file containing public key certificate and private
key.  Maybe it comes from crypto software sales people that oversimplify or
don't really understand the technology.  I don't know, but it's a rant I
have.  
 

I just went off on a possibly similar rant in comp.security.ssh, where
a question was posed about password or certificate:
http://www.garlic.com/~lynn/2004p.html#60
http://www.garlic.com/~lynn/2004q.html#0



RE: SSL/TLS passive sniffing

2004-12-01 Thread Ben Nagy
OK, Ian and I are, rightly or wrongly, on the same page here. Obviously my
choice of the word certificate has caused confusion.

[David Wagner]
 This sounds very confused.  Certs are public.  How would knowing a
 copy of the server cert help me to decrypt SSL traffic that I have
 intercepted?

Yes, sorry, what I _meant_ was the whole certificate file, PFX style, also
containing private keys. I assure you, I'm not confused, just perhaps guilty
of verbal shortcuts. I should, perhaps, have not characterised myself as
'bumbling enthusiast', to avoid the confusion with 'idiot'. :/

[...]
 Ian Grigg writes:
 I note that distinction well!  Certificate based systems
 are totally vulnerable to a passive sniffing attack if the
 attacker can get the key.  Whereas Diffie Hellman is not,
 on the face of it.  Very curious...
 
 No, that is not accurate.  Diffie-Hellman is also insecure if the
 private key is revealed to the adversary.  The private key for
 Diffie-Hellman is the private exponent.

No, I'm not talking about escrowing DH exponents. I'm talking about modes
like in IPSec-IKE where there is a signed DH exchange using ephemeral DH
exponents - this continues to resist passive sniffing if the _signing_ keys
have somehow been compromised, unless I have somehow fallen on my head and
missed something.

 Perhaps the distinction you had in mind is forward secrecy.

Yes and no. Forward secrecy is certainly at the root of my question, with
regards to the RSA modes not providing it and certain of the DH modes doing
so. :)

Thanks!

ben
  




RE: SSL/TLS passive sniffing

2004-12-01 Thread ben
 -Original Message-
 From: Eric Rescorla [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, December 01, 2004 7:01 AM
 To: [EMAIL PROTECTED]
 Cc: Ben Nagy; [EMAIL PROTECTED]
 Subject: Re: SSL/TLS passive sniffing
 
 Ian Grigg [EMAIL PROTECTED] writes:
[...]
  However could one do a Diffie Hellman key exchange and do this
  under the protection of the public key? [...]
 
 Uh, you've just described the ephemeral DH mode that IPsec
 always uses and SSL provides.
 
 Try googling for station to station protocol
 
 -Ekr

Right. And my original question was, why can't we do that one-sided with
SSL, even without a certificate at the client end? In what ways would that
be inferior to the current RSA suites where the client encrypts the PMS
under the server's public key.

Eric's answer seems to make the most sense - I guess generating the DH
exponent and signing it once per connection server-side would be a larger
performance hit than I first thought, and no clients care.

Thanks for all the answers, on and off list. ;)

Cheers,

ben





Re: SSL/TLS passive sniffing

2004-12-01 Thread Eric Rescorla
[EMAIL PROTECTED] writes:

 -Original Message-
 From: Eric Rescorla [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, December 01, 2004 7:01 AM
 To: [EMAIL PROTECTED]
 Cc: Ben Nagy; [EMAIL PROTECTED]
 Subject: Re: SSL/TLS passive sniffing
 
 Ian Grigg [EMAIL PROTECTED] writes:
 [...]
  However could one do a Diffie Hellman key exchange and do this
  under the protection of the public key? [...]
 
 Uh, you've just described the ephemeral DH mode that IPsec
 always uses and SSL provides.
 
 Try googling for station to station protocol
 
 -Ekr

 Right. And my original question was, why can't we do that one-sided with
 SSL, even without a certificate at the client end? In what ways would that
 be inferior to the current RSA suites where the client encrypts the PMS
 under the server's public key.

Just to be completely clear, this is exactly what the
TLS_DHE_RSA_* ciphersuites currently do, so it's purely a matter
of configuration and deployment.

-Ekr



Re: SSL/TLS passive sniffing

2004-11-30 Thread Ian Grigg
Ben raises an interesting thought:

 There was some question about whether this is possible for connections that
 use client-certs, since it looks to me from the spec that those connections
 should be using one of the Diffie Hellman cipher suites, which is obviously
 not vulnerable to a passive sniffing 'attack'. Active 'attacks' will
 obviously still work. Bear in mind that we're talking about deliberate
 undermining of the SSL connection by organisations, usually against their
 website users (without talking about the goodness, badness or legality of
 that), so how do they get the private keys isn't relevant.

We have the dichotomy that DH protects against all passive
attacks, and a signed cert protects against most active attacks,
and most passive attacks, but not passive attacks where the
key is leaked, and not active attacks where the key is
forged (as a cert).

But we do not use both DH and certificates at the same time,
we generally pick one or the other.

Could we however do both?

In the act of a public key protected key exchange, Alice
generally creates a random key and encrypts that to Bob's
public key.  That random then gets used for further traffic.

However could one do a Diffie Hellman key exchange and do this
under the protection of the public key?  In which case we are
now protected from Bob aggressively leaking the public key.
(Or, to put it more precisely, Bob would now have to record
and leak all his traffic as well, which is a substantially
more expensive thing to engage in.)

(This still leaves us with the active attack of a forged
key, but that is dealt with by public key (fingerprint)
caching.)
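
A minimal sketch of the combined construction described above: a fresh DH exponent per session, with the server's ephemeral public value authenticated under a long-term key. Toy assumptions throughout: the group parameters are not a vetted DH group, and an HMAC under a shared long-term key stands in for a real public-key signature.

```python
import hashlib
import hmac
import secrets

p, g = 2**255 - 19, 2  # toy parameters -- not a vetted DH group

def auth_tag(key: bytes, msg: bytes) -> bytes:
    # HMAC stands in for a public-key signature in this sketch.
    return hmac.new(key, msg, hashlib.sha256).digest()

long_term_key = secrets.token_bytes(32)  # Bob's long-term auth key

# Per connection: Bob picks a fresh exponent and authenticates g^b.
b = secrets.randbelow(p - 2) + 1
B = pow(g, b, p)
tag = auth_tag(long_term_key, B.to_bytes(32, "big"))

# Alice verifies the tag, then contributes her own fresh exponent.
assert hmac.compare_digest(tag, auth_tag(long_term_key, B.to_bytes(32, "big")))
a = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)

k_alice, k_bob = pow(B, a, p), pow(A, b, p)
assert k_alice == k_bob

del a, b  # session over: erase the ephemeral exponents
# Someone who steals long_term_key NOW can mount an active MITM on
# future sessions, but cannot recover the key from the sniffed A and B.
```

This is the forward-secrecy property: leaking the long-term authentication key afterwards does not expose recorded traffic, because the per-session exponents are gone.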

Does that make sense?  The reason I ask is that I've just
written a new key exchange protocol element, and I thought
I was being clever by having both Bob and Alice provide
half the key each, so as to protect against either party
being non-robust with secret key generation.  (As a programmer
I'm more worried about the RNG clagging than the key leaking,
but let's leave that aside for now...)

Now I'm wondering whether the key exchange should do a DH
within the standard public key protected key exchange?
Hmmm, this sounds like I am trying to do PFS  (perfect
forward secrecy).  Any thoughts?

iang




Re: SSL/TLS passive sniffing

2004-11-30 Thread Ian Grigg
 Ian Grigg writes:
I note that distinction well!  Certificate based systems
are totally vulnerable to a passive sniffing attack if the
attacker can get the key.  Whereas Diffie Hellman is not,
on the face of it.  Very curious...

 No, that is not accurate.  Diffie-Hellman is also insecure if the private
 key is revealed to the adversary.  The private key for Diffie-Hellman
 is the private exponent.  If you learn the private exponent that one
 endpoint used for a given connection, and if you have intercepted that
 connection, you can derive the session key and decrypt the intercepted
 traffic.

I wasn't familiar that one could think in those terms.  Reading
here:  http://www.rsasecurity.com/rsalabs/node.asp?id=2248 it
says:

In recent years, the original Diffie-Hellman protocol
has been understood to be an example of a much more
general cryptographic technique, the common element
being the derivation of a shared secret value (that
is, key) from one party's public key and another
party's private key. The parties' key pairs may be
generated anew at each run of the protocol, as in
the original Diffie-Hellman protocol.

It seems the compromise of *either* exponent would lead to
solution.

 Perhaps the distinction you had in mind is forward secrecy.  If you use
 a different private key for every connection, then compromise of one
 connection's private key won't affect other connections.  This is
 true whether you use RSA or Diffie-Hellman.  The main difference is
 that in Diffie-Hellman, key generation is cheap and easy (just an
 exponentiation), while in RSA key generation is more expensive.

Yes.  So if a crypto system used the technique of using
Diffie-Hellman key exchange (with unique exponents for each
session), there would be no lazy passive attack, where I
am defining the lazy attack as a once-off compromise of a
private key.  That is, the attacker would still have to
learn the individual exponent for that session, which
(assuming the attacker has to ask for it of one party)
would be equivalent in difficulty to learning the secret
key that resulted and was used for the secret key cipher.
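
The symmetry noted here is easy to see in code: a passive sniffer who records the public values g^a and g^b can derive the session key from either private exponent. A toy demonstration (parameters invented for illustration, far too weak for real use):

```python
import secrets

p, g = 2**255 - 19, 2  # toy parameters -- not a vetted DH group

a = secrets.randbelow(p - 2) + 1  # Alice's ephemeral private exponent
b = secrets.randbelow(p - 2) + 1  # Bob's
A, B = pow(g, a, p), pow(g, b, p)  # public values, visible to a sniffer

k = pow(B, a, p)
assert k == pow(A, b, p)  # both sides agree on the session key

# A sniffer who later learns EITHER exponent recovers the key:
assert pow(B, a, p) == k  # knows a, sniffed B
assert pow(A, b, p) == k  # knows b, sniffed A
```

With fresh exponents per session, though, there is no single long-lived secret whose one-off compromise exposes all recorded traffic, which is the "lazy attack" distinction being drawn.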

iang



Re: SSL accel cards

2004-05-26 Thread Jun-ichiro itojun Hagino
 Does anyone know of an SSL acceleration card that actually works under
 Linux/*BSD? I've been looking at vendor web pages (AEP, Rainbow, etc), and
 while they all claim to support Linux, Googling around all I find are people
 saying Where can I get drivers? The ones vendor shipped only work on RedHat
 5.2 with a 2.0.36 kernel. (or some similar 4-6 year old system), and certainly
 they don't (gasp) make updated versions available for download. Because someone
 might... what, steal the driver? Anyway...

with openbsd, http://www.openbsd.org/crypto.html#hardware

itojun



Re: SSL accel cards

2004-05-26 Thread Anton Stiglic

 Does anyone know of an SSL acceleration card that actually works under
 Linux/*BSD?

I successfully used a Broadcom PCI card on Linux (I don't remember
which distribution and kernel version; this was close to 2 years ago).
If I remember correctly it was the BCM5820 processor I used
http://www.broadcom.com/collateral/pb/5820-PB04-R.pdf
(the product sheet mentions support for Linux, Win98, Win2000,
FreeBSD, VxWorks, Solaris).

I was able to use it on Linux and on Windows (where I offloaded
the modexp operation from the MSCAPI crypto provider).

The Linux drivers were available from Broadcom upon request; there was
also a crypto library that called the card via the drivers, but at the time
I looked at it the code wasn't very stable (e.g. I had to debug the RSA
key generation and send patches since it did not work at all, later versions
had the key generation part working properly).
The library might be stable by now.

I also made the Broadcom chip work with OpenCryptoki on Linux;
I submitted the code for supporting Broadcom in OpenCryptoki.

http://www-124.ibm.com/developerworks/oss/cvs/opencryptoki/

 []
 and certainly
 they don't (gasp) make updated versions available for download. Because
someone
 might... what, steal the driver? Anyway...
 []

No, but someone might find out how poorly written they are? I don't know the
reason...

--Anton



RE: SSL accel cards

2004-05-25 Thread Grant Goodale
We've had great luck with the nFast and nForce lines of SSL
accelerators from nCipher under Red Hat:

http://www.ncipher.com

Depending on which model you choose, you can get anywhere from
150 to 1600 key ops/sec.  

HTH,

G
--
Grant Goodale
Security Architect
Reactivity, Inc.
[EMAIL PROTECTED]
http://www.reactivity.com

 -Original Message-
 From: Jack Lloyd [mailto:[EMAIL PROTECTED] 
 Sent: Sunday, May 23, 2004 10:34 AM
 To: [EMAIL PROTECTED]
 Subject: SSL accel cards
 
 
 Does anyone know of an SSL acceleration card that actually 
 works under Linux/*BSD? I've been looking at vendor web pages 
 (AEP, Rainbow, etc), and while they all claim to support 
  Linux, Googling around all I find are people saying "Where 
  can I get drivers? The ones the vendor shipped only work on RedHat
  5.2 with a 2.0.36 kernel" (or some similar 4-6 year old 
 system), and certainly they don't (gasp) make updated 
 versions available for download. Because someone might... 
 what, steal the driver? Anyway...
 
 What I'm specifically looking for is a PCI card that can do 
 fast modexp, and that I can program against on a Linux/*BSD 
 box. Onboard DES/AES/SHA-1/whatever would be fun to play with 
 but not extremely important.
 
 -Jack
 
 



Re: SSL, client certs, and MITM (was WYTM?)

2003-11-12 Thread Ian Grigg
Tom Weinstein wrote:

 The economic view might be a reasonable view for an end-user to take,
 but it's not a good one for a protocol designer. The protocol designer
 doesn't have an economic model for how end-users will end up using the
 protocol, and it's dangerous to assume one. This is especially true for
 a protocol like TLS that is intended to be used as a general solution
 for a wide range of applications.


I agree with this.  Especially, I think we are
all coming to the view that TLS/SSL is in fact
a general purpose channel security protocol,
and should not be viewed as being designed to
protect credit cards or e-commerce especially.

Given this, it is unreasonable to talk about
threat models at all, when discussing just the
protocol.  I'm coming to the view that protocols
don't have threat models, they only have
characteristics.  They meet requirements, and
they get deployed according to the demands of
higher layers.

Applications have threat models, and herein lies
the mistake that was made with the ITM.
Each application has to develop its own threat
model, and from there, its security model.

Once so developed, a set of requirements can
be passed on to the protocol.  Does SSL/TLS
meet the requirements passed on from on high?
That of course depends on the application and
what requirements are set.

So, yes, it is not really fair for a protocol
designer to have to undertake an economic
analysis, as much as they don't get involved
in threat models and security models.  It's
up to the application team to do that.

Where we get into trouble a lot in the crypto
world is that crypto has an exaggerated
importance, an almost magical property of
appearing to make everything safe.  Designers
expect a lot from cryptographers for these
reasons.  Too much, really.  Managers demand
some special sprinkling of crypto fairy dust
because it seems to make the brochure look
good.

This will always be a problem.  Which is why
it's important for the crypto guy to ask the
question - what's *your* threat model?  Stick
to his scientific guns, as it were.


 In some ways, I think this is something that all standards face. For any
 particular application, the standard might be less cost effective than a
 custom solution. But it's much cheaper to design something once that
 works for everyone off the shelf than it would be to custom design a new
 one each and every time.


Right.  It is however the case that secure
browsing is facing a bit of a crisis in
security.  So, there may have to be some
changes, one way or another.

iang



Re: SSL, client certs, and MITM (was WYTM?)

2003-11-12 Thread Peter Gutmann
Perry E. Metzger [EMAIL PROTECTED] writes:

TLS is just a pretty straightforward well analyzed protocol for protecting a
channel -- full stop. It can be used in a wide variety of ways, for a wide
variety of apps. It happens to allow you to use X.509 certs, but if you
really hate X.509, define an extension to use SPKI or SSH style certs. TLS
will accommodate such a thing easily. Indeed, I would encourage you to do
such a thing.

Actually there's no need to even extend TLS, there's a standard and very
simple technique which is probably best-known from its use in SSH but has been
in use in various other places as well:

1. The first time your server fires up, generate a self-signed cert.

2. When the user connects, have them verify the cert out-of-band via its
   fingerprint.  Even a lower-security simple phrase or something derived from
   the fingerprint is better than nothing.

3. For subsequent connections, warn if the cert fingerprint has changed.

That's currently being used by a number of TLS-using apps, and works at least
as well as any other mechanism.  At a pinch, you can even omit (2) and just
warn if a key that doesn't match the one first encountered is used, that'll
catch everything but an extremely consistent MITM.  Using something like SSH
keys isn't going to give you any magical security that X.509 certs don't;
you'll just get something equivalent to the above mechanism.
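The three steps above are essentially SSH's trust-on-first-use fingerprint pinning, and they are easy to sketch. The following Python is illustrative only, not code from any of the apps mentioned: the store is an in-memory dict standing in for a known-hosts file, and `get_server_cert` deliberately skips CA validation because the pin, not the CA list, supplies the trust.

```python
import hashlib
import socket
import ssl

def fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def check_pin(store, host, der_bytes):
    """Trust-on-first-use check.  `store` maps host -> fingerprint
    (an in-memory stand-in for an SSH-style known-hosts file).
    Returns 'new' on first contact (step 2: verify out-of-band),
    'ok' on a match, and 'CHANGED' on a mismatch (step 3: warn)."""
    fp = fingerprint(der_bytes)
    if host not in store:
        store[host] = fp
        return "new"
    return "ok" if store[host] == fp else "CHANGED"

def get_server_cert(host, port=443):
    """Fetch the server's leaf cert without CA validation -- here the
    trust comes from the pin, not from a CA list."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)
```

As noted, even omitting the out-of-band step and only warning on a `CHANGED` result catches everything but a perfectly consistent MITM.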

Peter.



Re: SSL, client certs, and MITM (was WYTM?)

2003-11-12 Thread Anton Stiglic

- Original Message - 
From: Tom Otvos [EMAIL PROTECTED]

 As far as I can glean, the general consensus in WYTM is that MITM attacks
are very low (read:
 inconsequential) probability.

I'm not certain this was the consensus.

We should look at the scenarios in which this is possible, and the tools
that
are available to accomplish the attack.  I would say that the attack is more
easily done inside a local network (outside the network you have to get
control
of the ISP or some node, and this is more for the elite).
But statistics show that most exploits are accomplished by employees
within a company (either because they are not aware of basic security
principles, or because the malicious person was an employee), so I find this
scenario (attack from inside the network) to be plausible.

Take for example a large corporation of 100 or more employees; there have
got to be a couple of people that do on-line purchasing from work, on-line
banking, etc...  I would say that it is possible that an employee (just
curious, or really malicious) would want to intercept these communications.

So how difficult is it to launch an MITM attack on https?  Very simple, it
seems.  My hacker friends pointed out to me two software tools, ettercap and
Cain:
http://ettercap.sourceforge.net/
http://www.oxid.it/cain.html

Cain is the newest I think, and remarkably simple to use.  It has a very
nice
GUI and it doesn't take much hacking ability to use it.  I've been using it
recently for educational purposes and find it very easy to use, and I don't
consider myself a hacker.

Cain allows you to do MITM (in HTTPS, DNS and SSHv1) on a local
network.  It can generate certificates in real time with the same common
name as the original.  The only thing is that the certificate will probably
not
be signed by a trusted CA, but most users are not security aware and
will just continue despite the warning.

So given this information, I think MITM threats are real.  Are these attacks
being done in practice?  I don't know, but I don't think they would easily
be reported if they were, so you can guess what my conclusion is...

--Anton





RE: SSL, client certs, and MITM (was WYTM?)

2003-11-12 Thread Anne Lynn Wheeler
Internet groups start anti-hacker initiative
http://www.computerweekly.com/articles/article.asp?liArticleID=125823&liArticleTypeID=1&liCategoryID=2&liChannelID=22&liFlavourID=1&sSearch=&nPage=1

one of the threats discussed in the above is the domain name ip-address 
take-over mentioned previously
http://www.garlic.com/~lynn/aadsm15.htm#28

which was one of the primary justifications supposedly for SSL deployment 
(am i really talking to the server that I think i'm talking to).
--
Anne & Lynn Wheeler    http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
 



Re: SSL, client certs, and MITM (was WYTM?)

2003-11-12 Thread David Honig
At 07:11 PM 10/22/03 -0400, Perry E. Metzger wrote:

Indeed. Imagine if we waited until airplanes exploded regularly to
design them so they would not explode, or if we had designed our first
suspension bridges by putting up some randomly selected amount of
cabling and seeing if the bridge collapsed. That's not how good
engineering works.

No.  But how quickly we forget how many planes *did* break up,
how many bridges *did* fall apart, because engineering sometimes
goes into new territory.

Even now.  You start using new composite materials in planes, and wonder why
they fall out of the sky when their tails snap off.  
Eventually (though not yet) Airbus et al
will get a clue how they fail differently from familiar metals.  
Even learning about now-mundane metal fatigue in planes involved
breakups and death.

(Safety) engineering *is* (unfortunately, but perhaps by practical necessity)
somewhat reactive.  It tries very hard not to be, but it is.

dh







Re: SSL, client certs, and MITM (was WYTM?)

2003-11-12 Thread Anton Stiglic
 I'm not sure how you come to that conclusion.  Simply
 use TLS with self-signed certs.  Save the cost of the
 cert, and save the cost of the re-evaluation.
 
 If we could do that on a widespread basis, then it
 would be worth going to the next step, which is caching
 the self-signed certs, and we'd get our MITM protection
 back!  Albeit with a bootstrap weakness, but at real
 zero cost.

I know of some environments where this is done.  For example
to protect the connection to a corporate mail server, so that 
employees can read their mail from outside of work.  The caching 
problem is easily solved in this case by having the administrator 
distribute the self-signed cert to all employees and having them 
import it and trust it.  This costs no more than 1 man day per year.

This is near-zero cost, however, and gives some weight to Perry's
argument.

 Any merchant who wants more, well, there *will* be
 ten offers in his mailbox to upgrade the self-signed
 cert to a better one.  Vendors of certs may not be
 the smartest cookies in the jar, but they aren't so
 dumb that they'll miss the financial benefit of self-
 signed certs once it's been explained to them.

I have a hard time believing that a merchant (who plans
to make $ by providing the possibility to purchase on-line)
cannot spend something like 1000$ [1] a year for an SSL 
certificate, and that the administrator is not capable of 
properly installing it within 1-2 man days.  If he can't install
it, just get a consultant to do it, you can probably get one
that does it within a day and charges no more than 1000$.

So that would make the total around 2000$ a year; let's 
generously round it up to 10K$ per annum.
I think your 10-100 million $ per annum estimate is a bit 
exaggerated...


[1] this is the price I saw at Verisign
http://www.verisign.com/products/site/commerce/index.html
I'm sure you can get it for cheaper. This was already 
discussed on this list I think...

--Anton



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-23 Thread David Wagner
Thor Lancelot Simon  wrote:
Can you please posit an *exact* situation in which a man-in-the-middle
could steal the client's credit card number even in the presence of a
valid server certificate?

Sure.  If I can assume you're talking about SSL/https as it is
typically used in ecommerce today, that's easy.  Subvert DNS to
redirect the user to a site under control of the attacker.
Then it doesn't matter whether the legitimate site has a valid server
cert or not.  Is this the kind of scenario you were looking for?

http://lists.insecure.org/lists/bugtraq/1999/Nov/0202.html

Can you please explain *exactly* how using a
client-side certificate rather than some other form of client authentication
would prevent this?

Gonna make me work harder on this one, eh?  Well, ok, I'll give it a try.
Here's one possible way that you might be able to use client certs to
help (assuming client certs were usable and well-supported by browsers).
Beware: I'm making this one up as I go, so it's entirely possible there
are security flaws with my proposal; I'd welcome feedback.

When I establish a credit card with Visa, I generate a new client
certificate for this purpose and register it with www.visa.com.  When I
want to buy a fancy hat from www.amazon.com, Amazon re-directs me to
  https://ssl.visa.com/buy.cgi?payto=amazon&amount=$29.99&item=hat
My web browser opens a SSL channel to Visa's web server, authenticating my
presence using my client cert.  Visa presents me a description of the item
Amazon claims I want to buy, and asks me to confirm the request over that
authenticated channel.  If I confirm it, Visa forwards payment to Amazon
and debits my account.  Visa can tell whose account to debit by looking
at the mapping between my client certs and account numbers.  If Amazon
wants to coordinate, it can establish a separate secure channel with Visa.
(Key management for vendors is probably easier than for customers.)
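A rough server-side sketch of this idea, using Python's standard `ssl` module: the payment server requires a client certificate during the TLS handshake and looks up the account registered against that cert's fingerprint. Everything here is hypothetical illustration (the account table, the file names); the verify store would hold the registered client certs, or a CA over them.

```python
import hashlib
import ssl

def account_for_cert(der_cert, accounts):
    """Map a presented client certificate (DER bytes) to the account
    registered against its fingerprint; None if unknown."""
    fp = hashlib.sha256(der_cert).hexdigest()
    return accounts.get(fp)

def make_server_context(certfile, keyfile, client_ca_file):
    """TLS server context that *requires* a client certificate during
    the handshake, so the confirmation channel is bound to one cert.
    `client_ca_file` holds the registered client certs (or a CA that
    signed them)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile, keyfile)
    ctx.load_verify_locations(client_ca_file)
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

The crucial property carries over from the prose: a masquerading server never sees a reusable secret, only a handshake bound to the client's own key.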

I can't see any MITM attacks against this protocol.  The crucial point is
that Visa will only initiate payment if it receives confirmation from me,
over a channel where Visa has authenticated that I'm on the other end,
to do so.  A masquerading server doesn't learn any secrets that it can
use to authorize bogus transactions.

Does this work?



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Ian Grigg
Tom Otvos wrote:

 As far as I can glean, the general consensus in WYTM is that MITM attacks are very 
 low (read:
 inconsequential) probability.  Is this *really* true?


The frequency of MITM attacks is very low, in the sense
that there are few or no reported occurrences.  This
makes it a challenge to respond to in any measured way.


 I came across this paper last year, at the
 SANS reading room:
 
 http://rr.sans.org/threats/man_in_the_middle.php
 
 I found it both fascinating and disturbing, and I have since confirmed much of what 
 it was
 describing.  This leads me to think that an MITM attack is not merely of academic 
 interest but one
 that can occur in practice.


Nobody doubts that it can occur, and that it *can*
occur in practice.  It is whether it *does* occur
that is where the problem lies.

The question is one of costs and benefits - how much
should we spend to defend against this attack?  How
much do we save if we do defend?

[ Mind you, the issues that are raised by the paper
are to do with MITM attacks, when SSL/TLS is employed
in an anti-MITM role.  (I only skimmed it briefly I
could be wrong.)  We in the SSL/TLS/secure browsing
debate have always assumed that SSL/TLS when fully
employed covers that attack - although it's not the
first time I've seen evidence that the assumption
is unwarranted. ]


 Having said that then, I would like to suggest that one of the really big flaws in 
 the way SSL is
 used for HTTP is that the server rarely, if ever, requires client certs.  We all 
 seem to agree that
 convincing server certs can be crafted with ease so that a significant portion of 
 the Web population
 can be fooled into communicating with a MITM, especially when one takes into account 
 Bruce
 Schneier's observations of legitimate uses of server certs (as quoted by Bryce 
 O'Whielacronx).  But
 as long as servers do *no* authentication on client certs (to the point of not even 
 asking for
 them), then the essential handshaking built into SSL is wasted.
 
 I can think of numerous online examples where requiring client certs would be a good 
 thing: online
 banking and stock trading are two examples that immediately leap to mind.  So the 
 question is, why
 are client certs not more prevalent?  Is is simply an ease of use thing?


I think the failure of client certs has the same
root cause as the failure of SSL/TLS to branch
beyond its mandated role of protecting e-
commerce.  Literally, the requirement that
the cert be supplied (signed) by a third party
killed it dead.  If there had been a button on
every browser that said "generate self-signed
client cert now" then the whole world would be
using them.

Mind you, the whole client cert thing was a bit
of an afterthought, wasn't it?  The orientation
that it was at server discretion also didn't help.


 Since the Internet threat
 model upon which SSL is based makes the assumption that the channel is *not* 
 secure, why is MITM
 not taken more seriously?


People often say that there are no successful MITM
attacks because of the presence of SSL/TLS !

The existence of the bugs in Microsoft browsers
puts the lie to this - literally, nobody has bothered
with MITM attacks, simply because they are way way
down on the average crook's list of sensible things
to do.

Hence, that rant was in part intended to separate
out 1994's view of threat models to today's view
of threat models.  MITM is simply not anywhere in
sight - but a whole heap of other stuff is!

So, why bother with something that isn't a threat?
Why can't we spend more time on something that *is*
a threat, one that occurs daily, even hourly, some
times?


 Why, if SSL is designed to solve a problem that can be solved, namely
 securing the channel (and people are content with just that), are not more people 
 jumping up and
 down yelling that it is being used incorrectly?


Because it's not necessary.  Nobody loses anything
much over the wire, that we know of.  There are
isolated cases of MITMs in other areas, and in
hacker conferences for example.  But, if 10 bit
crypto and ADH was used all the time, it would
still be the least of all risks.


iang



RE: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Tom Otvos

 So what purpose would client certificates address? Almost all of the use
 of SSL domain name certs is to hide a credit card number when a consumer
 is buying something. There is no requirement for the merchant to
 identify and/or authenticate the client ... the payment infrastructure
 authenticates the financial transaction and the server is concerned
 primarily with getting paid (which comes from the financial institution)
 not who the client is.


The CC number is clearly not hidden if there is a MITM.  I think the "I got my money 
so who cares
where it came from" argument is not entirely a fair representation.  Someone ends up 
paying for
abuses, even if it is us in CC fees, otherwise why bother encrypting at all?  But that 
is beside
the point.

 So, there are some infrastructures that have web servers that want to
 authenticate clients (for instance online banking). They currently
 establish the SSL session and then authenticate the user with
 userid/password against an online database.


These are, I think, more important examples and again, if there is a MITM, then doing 
additional
authentication post-channel setup is irrelevant. These can be easily replayed after 
the attack has
completed.  The authentication *should* be deeply tied to channel setup, should it 
not?  Or stated
another way, having chained authentication where the first link in the chain is 
demonstrably weak
doesn't seem to achieve an awful lot.


 There was an instance of a bank issuing client certificates for use in
 online banking. At one time they claimed to have the largest issued PKI
 client certificates (aka real PKI as opposed to manufactured
 certificates).

 However, they discovered

 1) the certificates had to be reduced back to relying-party-only
 certificates with nothing but an account number (because of numerous
 privacy and liability concerns)

 2) the certificates quickly became stale

 3) they had to look up the account and went ahead and did a separate
 password authentication  in part because the certificates were
 stale.

 They somewhat concluded that the majority of client certificate
 authentication aren't being done because they want the certificates ...
 it is because the available COTS software implements it that way (if you
 want to use public key) ... but not because certificates are in anyway
 useful to them (in fact, it turns out that the certificates are
 redundant and superfluous ... and because of the staleness issue
 resulted in them also requiring passwords).


Fascinating!  Can you please tell me what bank that was?

-- tomo



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread John S. Denker
On 10/22/2003 04:33 PM, Ian Grigg wrote:

 The frequency of MITM attacks is very low, in the sense that there
 are few or no reported occurrences.
We have a disagreement about the facts on this point.
See below for details.
 This makes it a challenge to
 respond to in any measured way.
We have a disagreement about the philosophy of how to
measure things.  One should not design a bridge according
to a simple measurement of the amount of cross-river
traffic in the absence of a bridge.  One should not approve
a launch based on the observed fact that previous instances
of O-ring failures were non-fatal.
Designers in general, and cryptographers in particular,
ought to be proactive.
But this philosophy discussion is a digression, because
we have immediate practical issues to deal with.
 Nobody doubts that it can occur, and that it *can* occur in practice.
 It is whether it *does* occur that is where the problem lies.
According to the definitions I find useful, MITM is
basically a double impersonation.  For example,
Mallory impersonates PayPal so as to get me to
divulge my credit-card details, and then impersonates
me so as to induce my bank to give him my money.
This threat is entirely within my threat model.  There
is nothing hypothetical about this threat.  I get 211,000
hits from
  http://www.google.com/search?q=ID-theft
SSL is distinctly less than 100% effective at defending
against this threat.  It is one finger in a dike with
multiple leaks.  Client certs arguably provide one
additional finger ... but still multiple leaks remain.
==

The expert reader may have noticed that there are
other elements to the threat scenario I outlined.
For instance, I interact with Mallory for one seemingly
trivial transaction, and then he turns around and
engages in numerous and/or large-scale transactions.
But this just means we have more than one problem.
A good system would be robust against all forms
of impersonation (including MITM) *and* would be
robust against replays *and* would ensure that
trivial things and large-scale things could not
easily be confused.  Et cetera.


RE: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Anne Lynn Wheeler
At 05:08 PM 10/22/2003 -0400, Tom Otvos wrote:

The CC number is clearly not hidden if there is a MITM.  I think the "I 
got my money so who cares
where it came from" argument is not entirely a fair 
representation.  Someone ends up paying for
abuses, even if it is us in CC fees, otherwise why bother encrypting at 
all?  But that is besides
the point.
the statement was that an SSL domain name certificate provides:

1) am I really talking to who I think I'm talking to
2) an encrypted channel
obviously #1 addresses MITM (am I really talking to who I think I'm talking 
to).

The issue for CC is that it really is a shared secret and is extremely 
vulnerable ... as I've commented before:

1) CC needs to be in the clear in a dozen or so business processes
2) much simpler to harvest a whole merchant file with possibly millions of 
CC numbers in about the same effort as to eavesdrop one off the net (even if 
there was no SSL) ... return on investment: for approx. the same amount of 
effort, get one CC number or get millions
3) all the instances in the press are in fact involved with harvesting 
large files of numbers ... not one or two at a time off the wire
4) burying the earth in miles of crypto still wouldn't eliminate the 
current shared-secret CC problem

slightly related  security proportional to risk:
http://www.garlic.com/~lynn/2001h.html#61
so the requirement given the X9 financial standards working group X9A10
http://www.x9.org/
was to preserve the integrity of the financial infrastructure for all 
electronic retail payment (regardless of kind, origin, method, etc). The 
result was X9.59 standard
http://www.garlic.com/~lynn/index.html#x959

which effectively defines a digitally signed, authenticated transaction 
 no certificate required ... and the CC number used in X9.59 
authenticated transactions shouldn't be used in non-authenticated 
transactions. Since the transaction is now digitally signed 
and the CC# can't be used in non-authenticated transactions ... you can 
listen in on X9.59 transactions and harvest all the CC# that you want to 
and it doesn't help with doing fraudulent transactions. In effect, X9.59 
changes the business rules so that CC# no longer need to be treated as 
shared secrets.
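The effect of that business-rule change can be illustrated with a toy signed-transaction sketch. To be clear, this is not X9.59 itself (the standard specifies public-key digital signatures); an HMAC under a key shared by the account holder and the bank merely stands in for the signature so the example needs only the Python standard library.

```python
import hashlib
import hmac
import secrets

def sign_txn(key, account, amount, payee):
    """Authorise one specific transaction.  The signature covers the
    full transaction details plus a nonce (which the bank would track
    to reject reuse), so a harvested account number by itself
    authorises nothing.  An HMAC stands in for X9.59's public-key
    digital signature purely to keep the sketch dependency-free."""
    nonce = secrets.token_hex(8)
    msg = f"{account}|{amount}|{payee}|{nonce}".encode()
    sig = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return {"account": account, "amount": amount,
            "payee": payee, "nonce": nonce, "sig": sig}

def verify_txn(key, txn):
    """Bank-side check that the details were not altered in transit."""
    msg = f"{txn['account']}|{txn['amount']}|{txn['payee']}|{txn['nonce']}".encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, txn["sig"])
```

An eavesdropper who captures the whole transaction, account number included, cannot alter the payee or amount and cannot originate a fresh authorisation without the signing key, which is exactly the "no longer a shared secret" property described above.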

misc. past stuff about ssl domain name certificates
http://www.garlic.com/~lynn/subpubkey.html#sslcert
misc. past stuff about relying-party-only certificates
http://www.garlic.com/~lynn/subpubkey.html#rpo
misc. past stuff about using certificateless digital signatures in radius 
authentication
http://www.garlic.com/~lynn/subpubkey.html#radius

misc. past stuff about using certificateless digital signatures in kerberos 
authentication
http://www.garlic.com/~lynn/subpubkey.html#kerberos

misc. fraud  exploits (including some number of cc related press 
announcements)
http://www.garlic.com/~lynn/subtopic.html#fraud

some discussion of early SSL deployment for what is now referred to as 
electronic commerce
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

--
Anne & Lynn Wheeler    http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
 



RE: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Tom Otvos

 Nobody doubts that it can occur, and that it *can*
 occur in practice.  It is whether it *does* occur
 that is where the problem lies.


Or, whether it gets reported if it does occur.

 The question is one of costs and benefits - how much
 should we spend to defend against this attack?  How
 much do we save if we do defend?


Absolutely true.  If the only effect of a MITM is loss of privacy, then that is 
certainly a
lower-priority item to fix than some quick cash scheme.  So the threat model needs 
to clearly
define who the bad guys are, and what their motivations are.  But then again, if I am 
the victim of
a MITM attack, even if the bad guy did not financially gain directly from the attack 
(as in, getting
my money or something for free), I would consider loss of privacy a significant 
thing. What if an
attacker were paid by someone (indirect financial gain) to ruin me by buying a bunch 
of stock on
margin?  Maybe not the best example, but you get the idea.  It is not an attack that 
affects
millions of people, but to the person involved, it is pretty serious.  Shouldn't the 
server in
this case help mitigate this type of attack?


 So, why bother with something that isn't a threat?
 Why can't we spend more time on something that *is*
 a threat, one that occurs daily, even hourly, some
 times?


I take your point, but would suggest "isn't a threat" be replaced by "doesn't threaten the
majority."  And are we at a point where it needs to be a binary thing -- fix this OR 
that but NOT
both?

-- tomo



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread David Wagner
Tom Otvos wrote:
As far as I can glean, the general consensus in WYTM is that MITM
attacks are very low (read:
inconsequential) probability.  Is this *really* true?

I'm not aware of any such consensus.
I suspect you'd get plenty of debate on this point.
But in any case, widespread exploitation of a vulnerability
shouldn't be a prerequisite to deploying countermeasures.

If we see a plausible future threat and the stakes are high enough,
it is often prudent to deploy defenses in advance against the possibility
that attackers will exploit it.  If we wait until the attacks are widespread, it may be
too late to stop them.  It often takes years (or possibly a decade or more:
witness IPSec) to design and widely deploy effective countermeasures.

It's hard to predict with confidence which of the many vulnerabilities
will be popular among attackers five years from now, and I've been very wrong,
in both directions, many times.  In recognition of our own fallibility at
predicting the future, the conclusion I draw is that it is a good idea
to be conservative.



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Perry E. Metzger

Ian Grigg [EMAIL PROTECTED] writes:
 Nobody doubts that it can occur, and that it *can*
 occur in practice.  It is whether it *does* occur
 that is where the problem lies.
 
 The question is one of costs and benefits - how much
 should we spend to defend against this attack?  How
 much do we save if we do defend?

I have to say I find this argument very odd.

You argue that TLS defends against man in the middle attacks, but that
we do not observe man in the middle attacks, so why do we need the
defense?

Well, we don't observe the attacks much because they are hard to
undertake. Make them easy and I am sure they would happen
frequently. Protocols subject to such attacks are frequently subjected
to them, and there are whole suites of tools you can download to help
you in intercepting traffic to facilitate them.

You argue that we have to make a cost/benefit analysis, but we're
talking about computer algorithms where the cost is minuscule if it
is measurable at all. Why should we use a second-best practice when a
best practice is in reality no more expensive?

It is one thing to argue that a bridge does not need another million
dollars worth of steel, but who can rationally argue that we should
use different, less secure algorithms when there is no obvious
benefit, either in computation, in development costs or in license
fees (since TLS is after all free of any such fees), and the
alternatives are less secure? In such a light, a cost/benefit analysis
leads inexorably to "use TLS" -- second best saves nothing and might
cost a lot in lower security.

Some of your arguments seem to come down to there wasn't enough
thought given to the threat model. That might have been true when the
SSL/TLS process began, but a bunch of fairly smart people worked on
it, and we've ended up with a pretty solid protocol that is at worst
more secure than you might absolutely need but which covers the threat
model in most of the cases in which it might be used. You've yet to
argue that the protection is insufficient for the threat model -- only
that it might be more than one needs -- so what is the harm?

Honestly the only really good argument against TLS I can think of is
that if one wants to use something like SSH keying instead of X.509
keying the detailed protocol doesn't support it very well, but the
protocol can be trivially adapted to do what one wants and the
underlying security model is almost exactly what one wants in a
majority of cases. Such an adaptation might be a fine idea, but it can
be done without giving up any of the fine analysis that went into TLS.

Actually, there is one other argument against TLS -- it does not
protect underlying TCP signaling the way that IPSec does. However,
given where it sits in the stack, you can't fault it for that.

 I think the failure of client certs has the same
 root cause as the failure of SSL/TLS to branch
 beyond its mandated role of protecting e-
 commerce.  Literally, the requirement that
 the cert be supplied (signed) by a third party
 killed it dead.  If there had been a button on
 every browser that said generate self-signed
 client cert now then the whole world would be
 using them.

This is not a failure of TLS. This is a failure of the browsers and
web servers. There is no reason browsers couldn't do exactly that,
tomorrow, and that sites couldn't operate on an SSH-style "accept only what
you saw the first time" model. TLS is fully capable of supporting that.
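
The SSH-style model Perry describes can be sketched in a few lines. The
following is only an illustration under stated assumptions (standard library
only, an in-memory pin store), shown from the side of a client checking a
server certificate; the same trust-on-first-use decision logic would apply to
a server pinning client certs:

```python
# Minimal sketch of an SSH-style "accept only what you saw the first
# time" check. The pin store is an in-memory dict; a real client
# would persist it across sessions.
import hashlib
import ssl

def evaluate_pin(pins, key, fingerprint):
    """Trust-on-first-use: remember the first fingerprint seen for a
    peer, and flag any later change."""
    if key not in pins:
        pins[key] = fingerprint        # first contact: pin it
        return "pinned"
    return "ok" if pins[key] == fingerprint else "MISMATCH"

def check_server(pins, host, port=443):
    """Fetch the live server certificate and run the TOFU check
    (performs a network round trip)."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return evaluate_pin(pins, f"{host}:{port}",
                        hashlib.sha256(der).hexdigest())
```

Note that the bootstrap weakness is visible right in the code: the very first
contact is trusted blindly, which is the trade-off this thread keeps returning
to.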

If you want to argue against X.509, that might be a fine and quite
reasonable argument. I would happily argue against lots of X.509
myself. However, X.509 is not TLS, and TLS's properties are not those
of X.509.

-- 
Perry E. Metzger[EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Thor Lancelot Simon
On Wed, Oct 22, 2003 at 05:08:32PM -0400, Tom Otvos wrote:
 
  So what purpose would client certificates address? Almost all of the use
  of SSL domain name certs is to hide a credit card number when a consumer
  is buying something. There is no requirement for the merchant to
  identify and/or authenticate the client ... the payment infrastructure
  authenticates the financial transaction and the server is concerned
  primarily with getting paid (which comes from the financial institution)
  not who the client is.
 
 
 The CC number is clearly not hidden if there is a MITM.

Can you please posit an *exact* situation in which a man-in-the-middle
could steal the client's credit card number even in the presence of a
valid server certificate?  Can you please explain *exactly* how using a
client-side certificate rather than some other form of client authentication
would prevent this?

Thor

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Perry E. Metzger

[EMAIL PROTECTED] (David Wagner) writes:
 Tom Otvos wrote:
 As far as I can glean, the general consensus in WYTM is that MITM
 attacks are very low (read:
 inconsequential) probability.  Is this *really* true?
 
 I'm not aware of any such consensus.

I will state that MITM attacks are hardly a myth. They're used by
serious attackers when the underlying protocols permit it, and I've
witnessed them in the field with my own two eyes. Hell, they're even
well enough standardized that I've seen them in use on conference
networks. Some such attacks have been infamous.

MITM attacks are not currently the primary means of stealing credit
card numbers, because TLS makes MITM attacks harder and it is thus
usually easier just to break in to the poorly defended web server
and steal the card numbers directly. However, that
is not a reason to remove anti-MITM defenses from TLS -- it is in fact
a reason to think of them as a success.

 I suspect you'd get plenty of debate on this point.
 But in any case, widespread exploitation of a vulnerability
 shouldn't be a prerequisite to deploying countermeasures.

Indeed. Imagine if we waited until airplanes exploded regularly to
design them so they would not explode, or if we had designed our first
suspension bridges by putting up some randomly selected amount of
cabling and seeing if the bridge collapsed. That's not how good
engineering works.

 If we see a plausible future threat and the stakes are high enough,
 it is often prudent to deploy defenses in advance against the
 possibility that attackers will exploit it.

This is especially true when the marginal cost of the defenses is near
zero. The design cost of the countermeasures was high, but once
designed they can be replicated with no greater expense than that of
any other protocol.

 It's hard to predict with confidence which of the many
 vulnerabilities will be popular among attackers five years from now,
 and I've been very wrong, in both directions, many times.  In
 recognition of our own fallibility at predicting the future, the
 conclusion I draw is that it is a good idea to be conservative.

Ditto.

-- 
Perry E. Metzger[EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Ian Grigg
Tom Weinstein wrote:
 
 Ian Grigg wrote:
 
  Nobody doubts that it can occur, and that it *can* occur in practice.
  It is whether it *does* occur that is where the problem lies.
 
 This sort of statement bothers me.
 
 In threat analysis, you have to base your assessment on capabilities,
 not intentions. If an attack is possible, then you must guard against
 it. It doesn't matter if you think potential attackers don't intend to
 attack you that way, because you really don't know if that's true or not
 and they can always change their minds without telling you.

In threat analysis, you base your assessment on
economics of what is reasonable to protect.  It
is perfectly valid to decline to protect against
a possible threat, if the cost thereof is too high,
as compared against the benefits.

This is the reason that we cannot simply accept
the possible as a basis for engineering of any
form, let alone cryptography.  And this is the
reason why, if we can't measure it, then we are
probably justified in assuming it's not a threat
we need to worry about.

(Of course, anecdotal evidence helps in that
respect, hence there is a lot of discussion
about MITMs in other forums.)

iang

Here's Eric Rescorla's words on this:

http://www.iang.org/ssl/rescorla_1.html

The first thing that we need to do is define our *threat model*.
A threat model describes resources we expect the attacker to
have available and what attacks the attacker can be expected
to mount.  Nearly every security system is vulnerable to some
threat or another.  To see this, imagine that you keep your
papers in a completely unbreakable safe.  That's all well and
good, but if someone has planted a video camera in your office
they can see your confidential information whenever you take it
out to use it, so the safe hasn't bought you that much.

Therefore, when we define a threat model, we're concerned
not only with defining what attacks we are going to worry
about but also those we're not going to worry about.
Failure to take this important step typically leads to
complete deadlock as designers try to figure out how to
counter every possible threat.  What's important is to
figure out which threats are realistic and which ones we
can hope to counter with the tools available.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Perry E. Metzger

Ian Grigg [EMAIL PROTECTED] writes:
 In threat analysis, you base your assessment on
 economics of what is reasonable to protect.  It
 is perfectly valid to decline to protect against
 a possible threat, if the cost thereof is too high,
 as compared against the benefits.

The cost of MITM protection is, in practice, zero. Indeed, if you
wanted to produce an alternative to TLS without MITM protection, you
would have to spend lots of time and money crafting and evaluating a
new protocol that is still reasonably secure without that
protection. One might therefore call the cost of using TLS, which may
be used for free, to be substantially lower than that of an
alternative.

How low does the risk have to get before you would be willing to pay
NOT to protect against it? Because that is, in practice, what
you would have to do. You would actually have to burn money to get
lower protection. The cost burden is on doing less, not on doing
more.

There is, of course, also the cost of what happens when someone MITM's
you.

You keep claiming we have to do a cost benefit analysis, but what is
the actual measurable financial benefit of paying more for less
protection?

Perry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Anne Lynn Wheeler
At 05:42 PM 10/22/2003 -0400, Tom Otvos wrote:

Absolutely true.  If the only effect of a MITM is loss of privacy, then 
that is certainly a
lower-priority item to fix than some quick cash scheme.  So the threat 
model needs to clearly
define who the bad guys are, and what their motivations are.  But then 
again, if I am the victim of
a MITM attack, even if the bad guy did not financially gain directly from 
the attack (as in, getting
my money or something for free), I would consider loss of privacy a 
significant thing. What if an
attacker were paid by someone (indirect financial gain) to ruin me by 
buying a bunch of stock on
margin?  Maybe not the best example, but you get the idea.  It is not an 
attack that affects
millions of people, but to the person involved, it is pretty 
serious.  Shouldn't the server in
this case help mitigate this type of attack?


ok, the original SSL domain name certificate for what became electronic 
commerce was

1) am I really talking to the server that I think I'm talking to
2) encrypted session.
so the attack in #1 was plausibly some impersonation ... either MITM or
straight impersonation. The issue was that there was a perceived
vulnerability in the domain name infrastructure: somebody could
contaminate the domain name lookup and get the client redirected to the
impersonator's ip-address.

The SSL domain name certificates carry the original domain name ... the
client validates the domain name certificate with one of the public keys in
the browser CA table ... and then validates that the server that it is
communicating with can sign/encrypt something with the private key that
corresponds to the public key carried in the certificate ... and then the
client compares the domain name in the certificate with the URL that the
browser used.  In theory, if all of that works ... then it is highly
unlikely that the client is talking to the wrong ip-address (since it
should be the ip-address of the server named in the certificate).
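
The validation steps described above map directly onto a modern TLS client.
In the sketch below (an illustration, not Wheeler's text), the standard
library's default context performs the CA-table and private-key checks, while
the domain name comparison is re-implemented in deliberately simplified form
(a single full-label wildcard only):

```python
# Sketch of the client-side checks described above: chain validation
# against the browser-style CA table, proof that the server holds the
# certified private key, and the certificate-name-vs-URL comparison.
import socket
import ssl

def hostname_matches(cert_name, url_host):
    """Compare the certificate's domain name against the URL's host.
    Simplified: supports only a full-label '*' wildcard."""
    cert_labels = cert_name.lower().split(".")
    host_labels = url_host.lower().split(".")
    if len(cert_labels) != len(host_labels):
        return False
    return all(c == "*" or c == h
               for c, h in zip(cert_labels, host_labels))

def connect_checked(host, port=443):
    """The default context does the chain and key-proof validation and
    the (real, RFC-compliant) hostname check; raises on any failure."""
    ctx = ssl.create_default_context()   # loads the trusted CA set
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```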

So what are the subsequent problems:

1) the original idea was that the whole shopping experience was protected
by the SSL domain name certificate ... preventing MITM and impersonation
attacks. However, it was found that SSL overhead was way too expensive and
so the servers dropped back to just using it to encrypt the checkout
portion of the shopping experience. This means that the client ... does all
their shopping ... with the real server or the imposter ... and then clicks
on a button to check out that drops the client into SSL for the credit card
number. The problem is that if it is an imposter ... the button likely
carries a URL for which the imposter has a valid certificate.

or

2) the original concern was possible ip-address hijacking in the domain
name infrastructure ... so the correct domain name maps to the wrong ip
address ... and the client goes to an imposter (whether or not the
imposter needs to do an actual MITM). The problem is that when
somebody approaches a CA for a certificate ... the CA has to contact the
domain name system as to the true owner of the domain name. It turns out
that integrity issues in the domain name infrastructure not only can result
in ip-address take-over ... but also domain name take-over. The imposter
exploits integrity flaws in the domain name infrastructure and does a
domain name take-over ... approaches a CA for a SSL domain name
certificate ... and the CA issues it ... because the domain name
infrastructure claims it is the true owner.

So somewhat from the CA industry ... there is a proposal that people 
register a public key in the domain name database when they obtain a domain 
name. After that ... all communication is digitally signed and validated 
with the database entry public key (notice this is certificate-less). This 
has the attribute of improving the integrity of the domain name 
infrastructure ... so the CA industry can trust the domain name
infrastructure integrity, so the rest of the world can trust the SSL domain
name certificates?

This has the opportunity for simplifying the SSL domain name certificate
requesting process. The entity requesting the SSL domain name certificate
... digitally signs the request (certificate-less of course). The CA
validates the SSL domain name certificate request by retrieving the valid
owner's public key from the domain name infrastructure database to
authenticate the request. This is a lot more efficient and has fewer
vulnerabilities than the current infrastructure.
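
The request flow Wheeler proposes can be made concrete with toy code. In the
sketch below an HMAC with a registration-time secret stands in for the
public-key signature (the standard library has no signing primitive); the
flow, not the primitive, is the point, and all names and keys here are made
up for illustration:

```python
# Toy sketch of the certificate-less flow: register a verification key
# with the domain name database at registration time, then let the CA
# authenticate a later cert request against that registered key.
# An HMAC stands in for a real public-key digital signature.
import hashlib
import hmac

domain_db = {}   # domain -> key registered in the "domain name database"

def register_domain(domain, key):
    """Done once, when the domain is obtained."""
    domain_db[domain] = key

def sign_request(domain, key):
    """The domain owner signs the SSL cert request with its key."""
    msg = f"ssl-cert-request:{domain}".encode()
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return msg, tag

def ca_validates(domain, msg, tag):
    """The CA retrieves the registered key from the domain database and
    checks the signature -- no separate identity paperwork needed."""
    key = domain_db.get(domain)
    if key is None:
        return False
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

An imposter who never registered the key, or who signs with a different one,
fails the check, which is exactly the take-over case Wheeler describes.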

The current infrastructure has some identification of the domain name owner 
recorded in the domain name infrastructure database. When an entity 
requests a SSL domain name certificate ... they provide additional 
identification to the CA. The CA now has to retrieve the information from 
the domain name infrastructure database and map it to some real world 
identification. They then have to take the requester's information and also 
map it to 

Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Tom Weinstein
Ian Grigg wrote:

Tom Weinstein wrote:
 

In threat analysis, you have to base your assessment on capabilities,
not intentions. If an attack is possible, then you must guard against
it. It doesn't matter if you think potential attackers don't intend to
attack you that way, because you really don't know if that's true or not
and they can always change their minds without telling you.
   

In threat analysis, you base your assessment on
economics of what is reasonable to protect.  It
is perfectly valid to decline to protect against
a possible threat, if the cost thereof is too high,
as compared against the benefits.
This is the reason that we cannot simply accept
the possible as a basis for engineering of any
form, let alone cryptography.  And this is the
reason why, if we can't measure it, then we are
probably justified in assuming it's not a threat
we need to worry about.
The economic view might be a reasonable view for an end-user to take, 
but it's not a good one for a protocol designer. The protocol designer 
doesn't have an economic model for how end-users will end up using the 
protocol, and it's dangerous to assume one. This is especially true for 
a protocol like TLS that is intended to be used as a general solution 
for a wide range of applications.

In some ways, I think this is something that all standards face. For any 
particular application, the standard might be less cost effective than a 
custom solution. But it's much cheaper to design something once that 
works for everyone off the shelf than it would be to custom design a new 
one each and every time.

--
Give a man a fire and he's warm for a day, but set   | Tom Weinstein
him on fire and he's warm for the rest of his life.  | [EMAIL PROTECTED] 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Ian Grigg
Perry E. Metzger wrote:
 
 Ian Grigg [EMAIL PROTECTED] writes:
  In threat analysis, you base your assessment on
  economics of what is reasonable to protect.  It
  is perfectly valid to decline to protect against
  a possible threat, if the cost thereof is too high,
  as compared against the benefits.
 
 The cost of MITM protection is, in practice, zero.


Not true!  The cost is from 10 million dollars to
100 million dollars per annum.  Those certs cost
money, Perry!  All that sysadmin time costs money,
too!  And all that managerial time trying to figure
out why the servers don't just work.  All those
consultants that come in and look after all those
secure servers and secure key storage and all that.

In fact, it costs so much money that nobody bothers
to do it *unless* they are forced to do it by people
telling them that they are being irresponsibly
vulnerable to the MITM!  Whatever that means.

Literally, nobody - less than 1% of everyone - runs an SSL
server, and even then only a quarter of those do it
properly.  Which should be indisputable evidence
that there is huge resistance to spending money
on MITM protection.


 Indeed, if you
 wanted to produce an alternative to TLS without MITM protection, you
 would have to spend lots of time and money crafting and evaluating a
 new protocol that is still reasonably secure without that
 protection. One might therefore call the cost of using TLS, which may
 be used for free, to be substantially lower than that of an
 alternative.


I'm not sure how you come to that conclusion.  Simply
use TLS with self-signed certs.  Save the cost of the
cert, and save the cost of the re-evaluation.

If we could do that on a widespread basis, then it
would be worth going to the next step, which is caching
the self-signed certs, and we'd get our MITM protection
back!  Albeit with a bootstrap weakness, but at real
zero cost.

Any merchant who wants more, well, there *will* be
ten offers in his mailbox to upgrade the self-signed
cert to a better one.  Vendors of certs may not be
the smartest cookies in the jar, but they aren't so
dumb that they'll miss the financial benefit of self-
signed certs once it's been explained to them.

(If you mean, use TLS without certs - yes, I agree,
that's a no-win.)


 How low does the risk have to get before you will be willing not just
 to pay NOT to protect against it? Because that is, in practice, what
 you would have to do. You would actually have to burn money to get
 lower protection. The cost burden is on doing less, not on doing
 more.


This is a well known metric.  Half is a good rule of
thumb.  People will happily spend X to protect themselves
from X/2.  Not all the people all the time, but it's
enough to make a business model out of.  So if you
were able to show that certs protected us from 5-50
million dollars of damage every year, then you'd be
there.

(Mind you, where you would be is, proposing that certs
would be good to make available.  Not compulsory for
applications.)


 There is, of course, also the cost of what happens when someone MITM's
 you.


So I should spend the money.  Sure.  My choice.


 You keep claiming we have to do a cost benefit analysis, but what is
 the actual measurable financial benefit of paying more for less
 protection?


Can you take that to the specific case?

iang

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Perry E. Metzger

Ian Grigg [EMAIL PROTECTED] writes:
 Perry E. Metzger wrote:
  The cost of MITM protection is, in practice, zero.
 
 Not true!  The cost is from 10 million dollars to
 100 million dollars per annum.  Those certs cost
 money, Perry!

They cost nothing at all. I use certs every day that I've created in
my own CA to provide MITM protection, and I paid no one for them. It
isn't even hard to do.

Repeat after me:
TLS is not only for protecting HTTP, and should not be mistaken for "https:".
TLS is not X.509, and should not be mistaken for X.509.
TLS is also not "buy a cert from Verisign", and should not be
mistaken for "buy a cert from Verisign".

TLS is just a pretty straightforward well analyzed protocol for
protecting a channel -- full stop. It can be used in a wide variety of
ways, for a wide variety of apps. It happens to allow you to use X.509
certs, but if you really hate X.509, define an extension to use SPKI
or SSH style certs. TLS will accommodate such a thing easily. Indeed, I
would encourage you to do such a thing.
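
Running TLS against your own CA, as Perry describes doing, is a one-liner in
most stacks. A sketch in Python, assuming a hypothetical local file
`my-ca.pem` holding your own CA's certificate:

```python
# Sketch of trusting only a private CA instead of the default bundle.
# "my-ca.pem" is a hypothetical file containing your own CA cert.
import ssl

def private_ca_context(ca_file="my-ca.pem"):
    # PROTOCOL_TLS_CLIENT enables certificate verification and
    # hostname checking by default.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(cafile=ca_file)   # trust only this CA
    return ctx
```

Any connection wrapped with such a context gets full MITM protection without
a cent going to a commercial CA, which is Perry's point.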

Perry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: SSL

2003-07-10 Thread Whyte, William
[ Jill ]
  Instead, I have a
  different question: Where can I learn about SSL?

[ Ian ]

 PS: next step is Ferguson & Schneier's recent book
 which has been described as "how to re-invent SSL".

This reminds me: the best tutorial on the security 
aspects of SSL 3.0 that I know of is the Counterpane
analysis paper, available from:
http://www.counterpane.com/ssl.html

Read it to get a good idea of why certain decisions were
made, and why they help. It doesn't tell you how to use
OpenSSL, but it's great to let you know what's going on
under the bonnet.

(I kind of feel like the new Ferguson & Schneier book would
have been better if it had simply been this paper expanded
to book length...)

Cheers,

William

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL

2003-07-10 Thread Eric Rescorla
Ian Grigg [EMAIL PROTECTED] writes:

 [EMAIL PROTECTED] wrote:
 
  Instead, I have a
  different question: Where can I learn about SSL?
 
 Most people seem to think the RFC is unreadable,
 so ...
 
  As in, could someone recommend a good book, or online tutorial, or
  something, somewhere, that explains it all from pretty much first
  principles, and leaves you knowing enough at the end to be able to make
  sensible use of OpenSSL and similar? I don't want a For Dummies type book
  - as I said, I'm reasonably competent - but I would really like access to a
  helpful tutorial. I want to learn. So what's the best thing to go for?
 
 I am reading Eric Rescorla's book at the moment,
 and if you are serious about SSL, it is worth the
 price to get the coverage.  It's well written,
 and relatively easy to read for a technical book.

 It costs a steep $50.  It's not a For Dummies.
 You have to be comfortable with all sorts of things
 already.
Thanks for the kind words.

Actually, the price should be $40 US. That's the price at Amazon.

 It's giving me the intellectual capital to attack
 the engineering failures therein and surrounding
 the deployment of same.  Maybe Eric will offer me
 $100 for my annotated copy just to shut me the
 f**k up ;-)   I've so far discovered 
No payoffs, but I'd love to know what you've discovered :)

-Ekr

-- 
Eric Rescorla [EMAIL PROTECTED]
http://www.rtfm.com/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL

2003-07-10 Thread Radia Perlman - Boston Center for Networking
Re: Eric Rescorla's SSL and TLS book:

  Actually, the price should be $40 US. That's the price at Amazon.

Actually on bookpool.com it's $31. And if you can buy something else
at the same time, they have free shipping on anything over $40.

And let me 3rd or 4th the comment that it's a great book!

Radia


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL

2003-07-10 Thread Eric Murray
On Thu, Jul 10, 2003 at 12:04:33PM +0100, [EMAIL PROTECTED] wrote:
 Instead, I have a
 different question: Where can I learn about SSL?
  
 As in, could someone recommend a good book, or online tutorial, or
 something, somewhere, that explains it all from pretty much first
 principles, and leaves you knowing enough at the end to be able to make
 sensible use of OpenSSL and similar? 

I'd recommend Eric Rescorla's _SSL And TLS_ book for
learning about the protocol itself.  It's a very
good explanation of the protocol.

A concise explanation of the basic protocol
is in the original SSLv3 protocol spec from Netscape.
It's short but must be read carefully.

There's also a book on OpenSSL itself that, from the parts I
have looked at, seems pretty good:
_Network Security with OpenSSL_ (Viega, Messier & Chandra).

As we've covered in this thread, OpenSSL has a whole lot of stuff
that isn't needed for doing SSL.  It's the last place you want to start
trying to understand SSL.  Instead, first get a basic understanding of
the SSL protocol from Eric's book.  Then look at OpenSSL.  Unfortunately
the simpler SSL implementations seem not to be freely available.
If you do Java, try Eric's 'PureTLS' Java implementation.

To start in Openssl, look at how the sample client and server apps
work.  Then step through them with a debugger.  The way that Openssl
is constructed with many macros and tables of pointers to functions makes it
difficult to simply read until you come to recognize the names.  Also, to
be honest, the code is written in a style that makes it more difficult to
understand than it should be.  Nothing against Tim and Eric or the current
OpenSSL crew, but anyone who uses that many single-character variable
names needs to be whacked on the butt with a rolled-up copy of K&R C
and be told NO in a very firm voice.

Openssl is still changing and what little documentation
they have is often stale.

The openssl-users mailing list is quite active and is pretty
good about answering questions.

Eric


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL

2003-07-10 Thread Ng Pheng Siong
On Thu, Jul 10, 2003 at 12:04:33PM +0100, [EMAIL PROTECTED] wrote:
 guess). However, the complexity of the OpenSSL library has me stumped.
 (Plus, it's Unix-centric. I'd like to turn it into a Visual Studio port so I
 could compile without needing cygwin, gcc, etc., but that's another story).

It isn't really. I have built OpenSSL using MSVC, BC and mingw.

I have a file here called openssl-0_9_7_Patch_VisualStudio6.zip culled from
the OpenSSL mailing list. I haven't tried it; if you want, I can send it to
you off-list.

 I'm not going to complain. That's been done to death here. Instead, I have a
 different question: Where can I learn about SSL?

I always suggest learning by doing. The OpenSSL C API is quite big, but
there exist wrappers in Perl, Python, Tcl, Ruby, Lisp and possibly
whatever high-level language you can think of. (I have one; see .sig.)
These make programming OpenSSL more accessible.

While your test programs are running, use ekr's excellent ssldump to see
the stuff happening on the wire.

There is also a book called SSL and TLS Essentials by Stephen Thomas that
just describes the protocol. Refer to the book while you're running your
programs and marveling at ssldump's output.

Have fun.

-- 
Ng Pheng Siong [EMAIL PROTECTED] 

http://firewall.rulemaker.net  -+- Manage Your Firewall Rulebase Changes
http://www.post1.com/home/ngps -+- Open Source Python Crypto  SSL

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]