Re: "SSL stops credit card sniffing" is a correlation/causality myth

2005-06-01 Thread Daniel Carosone
On Tue, May 31, 2005 at 06:43:56PM -0400, Perry E. Metzger wrote:
  So we need to see a Choicepoint for listening and sniffing and so
  forth.
 
 No, we really don't.

Perhaps we do - not so much as a source of hard statistical data, but
as a source of hard pain.

People making (uninformed or ill-considered, despite our best efforts
to inform) business and risk decisions seemingly need concrete
examples of what to avoid.

It's depressing how much of what we actually achieve is determined by
primitive pain response reflexes - even when you're in the beneficial
position of having past insistences validated by the pain of others.

 The day to day problem of security at real financial institutions is
 the fact that humans are very poor at managing complexity, and that
 human error is extremely pervasive. I've yet to sit in a conference
 room and think "oh, if I only had more statistical data", but I've
 frequently been frustrated by gross incompetence.

Amen.

--
Dan.




Re: "SSL stops credit card sniffing" is a correlation/causality myth

2005-06-01 Thread Perry E. Metzger

Daniel Carosone [EMAIL PROTECTED] writes:
 On Tue, May 31, 2005 at 06:43:56PM -0400, Perry E. Metzger wrote:
  So we need to see a Choicepoint for listening and sniffing and so
  forth.
 
 No, we really don't.

 Perhaps we do - not so much as a source of hard statistical data, but
 as a source of hard pain.

That might not be such a bad thing. Object lessons have a way of
whipping people into shape. A few more heads rolling might convince
others that security isn't optional.

In the late 1960s, several major brokerage firms went under because
they didn't have their accounting systems sufficiently automated. The
people on the business people thought of I.T. as a necessary evil
rather than as the backbone of their business, and they paid the
price.

At intervals, business gets major accounting scandals, about every 20
to 40 years when people forget about the last set. I suspect
I.T. crises are similar. It has been so long since the last one
happened in the financial industry that the institutional memory of it
is now gone, so we're ripe for another.

It is my prediction that we will, in the next five years, get the
failure of a couple of international financial institutions because of
insufficient attention to systems security, again because there are a
few executives in the business who do not understand that I.T. is not
an expense that needs managing but rather the nervous system of the
company.

 People making (uninformed or ill-considered, despite our best efforts
 to inform) business and risk decisions seemingly need concrete
 examples to avoid.

Indeed.

Perry



Re: Trojan horse attack involving many major Israeli companies, executives

2005-06-01 Thread Amir Herzberg

J.A. Terranson wrote:


So, how long before someone, possibly even me, points out that all
Checkpoint software is built in Israel?


Nicely put, but I think not quite fair. From friends in financial and 
other companies in the States and elsewhere, I hear that Trojans are 
very common there as well. In fact, based on my biased judgement and 
limited exposure, my impression is that security practice is much better 
in Israeli companies - both providers and users of IT - than in 
comparable companies in most countries. For example, in my `hall of 
shame` (link below) you'll find many US and multinational companies 
which don't protect their login pages properly with SSL (PayPal, Chase, 
MS, ...). I've found very few Israeli companies there, and of the few I've 
found, two actually acted quickly to fix the problem - which is rare! 
Most ignored my warning, and a few sent me coupons :-) [seriously]
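
To make the criterion concrete: a login page that is itself delivered 
over plain HTTP gives the user no way to know who will receive the 
password, even if the form eventually posts to HTTPS. A rough sketch of 
that check in Python (illustrative only, not the methodology actually 
used for the list):

import urllib.request
from html.parser import HTMLParser

class PasswordFieldFinder(HTMLParser):
    # Notes whether the page contains an <input type="password"> field.
    def __init__(self):
        super().__init__()
        self.has_password_field = False
    def handle_starttag(self, tag, attrs):
        if tag == "input" and (dict(attrs).get("type") or "").lower() == "password":
            self.has_password_field = True

def check_login_page(url):
    # Fetch the page, follow redirects, and flag a password form that
    # was delivered over plain HTTP.
    with urllib.request.urlopen(url) as response:
        final_url = response.geturl()
        body = response.read().decode("utf-8", errors="replace")
    finder = PasswordFieldFinder()
    finder.feed(body)
    if finder.has_password_field and not final_url.startswith("https://"):
        return "UNPROTECTED: password form delivered over plain HTTP"
    return "ok (or no password form on this page)"

# Hypothetical example:
# print(check_login_page("http://www.example.com/login"))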


Could it be that such problems are more often covered up in other 
countries? Or maybe that the stronger awareness in Israel also implies 
more attackers? I think both conclusions are likely. I also think that 
this exposure will further increase awareness among Israeli IT managers 
and developers, and hence improve the security of their systems.

--
Best regards,

Amir Herzberg

Associate Professor
Department of Computer Science
Bar Ilan University
http://AmirHerzberg.com

New: see my Hall Of Shame of Unprotected Login pages: 
http://AmirHerzberg.com/shame.html




Re: "SSL stops credit card sniffing" is a correlation/causality myth

2005-06-01 Thread Ian G
On Wednesday 01 June 2005 10:35, Birger Tödtmann wrote:
 On Tuesday, 2005-05-31 at 18:31 +0100, Ian G wrote:
 [...]

  As an alternate hypothesis, credit cards are not
  sniffed and never will be sniffed simply because
  that is not economic.  If you can hack a database
  and lift 10,000++ credit card numbers, or simply
  buy the info from some insider, why would an
  attacker ever bother to try and sniff the wire to
  pick up one credit card number at a time?

 [...]

 And never will be...?  Not being economic today does not mean it
 couldn't be economic tomorrow.  Today it's far more economic to lift
 data-at-rest because it's fairly easy to get to an insider or break into
 the database itself.

Right, so we are agreed that listening for credit cards
is not an economic attack - regardless of the presence
of SSL.

Now, the point of this is somewhat subtle.  It is not
that you should turn off SSL.

The point is this:  you *could*
turn off SSL and it wouldn't make much difference
to actual security in the short term at least, and maybe
not even in the long term depending on the economic
shifts.

OK, so, are we agreed on that:  we *could* turn off
SSL, but that isn't the same thing as *should* ?

If we've got that far we can go to the next step.

If we *could* turn off SSL then we have some breathing
space, some room to manoeuvre.  Some wiggle room.

Which means we could modify the model.  Which
means we could change the model, we could tune
the crypto or the PKI.  And in the short term, that
would not be a problem for security because there
isn't an economic attack anyway.  Right now, at
least.

OK so far?

This means that we could improve or decrease
its strength ... as our objectives suggest ... or we
could *re-purpose* SSL if that were desired.

So we could for example use SSL and PKI to
protect from something else.  If that were an issue.

Let's assume phishing is an issue (1.2 billion
dollars of American money is the favourite number).

If we could figure out a way to change the usage
of SSL and PKI to protect against phishing, would
that be a good idea?

It wouldn't be a bad idea, would it?  How could it
be a bad idea when the infrastructure is in place,
and is not currently being used to defeat any
attack?

So, even in a stupidly aggressive worst case
scenario, if we were to turn off SSL/PKI in the process
and turn its benefit over to phishing, and discover
that it no longer protects against listening attacks
at all - remember I'm being ridiculously hypothetical
here - then as long as it did *some* good in
stopping phishing, that would still be a net good.

That is, there would be some phishing victims
who would thank you for saving them, and there
would *not* be any Visa merchants who would
necessarily damn your grandmother for losing
credit cards.  Not in the short term at least.

And if listening were to erupt in a frenzy in the
future it would likely be possible to turn off the
anti-phishing tasking and turn SSL/PKI back to
protecting against eavesdropping.  Perhaps as
a tradeoff between the credit card victim and
the phishing victim.

But that's just stupidly hypothetical.  The main
thing is that we can fiddle with SSL/PKI if we want
to and we can even afford to make some mistakes.

So the question then becomes - could it be used
to address phishing?  I can point at some stuff that
says it can be.

But every time this good stuff is suggested, the
developers, cryptographers, security experts and
what have you suck air in between their teeth and
say "you can't change SSL or PKI because of this
crypto blah blah reason".

My point is you can change it.  Of course you
can change it - and here's why:  it's not being
economically used over here (listening), and
right over there (phishing), there is an economic
loss awaiting attention.


 However, when companies finally find some 
 countermeasures against both attack vectors, adversaries will adapt and
 recalculate the economics.  And they may very well fall back to sniffing
 for data-in-flight, just as they did (and still do sometimes now) to get
 hold of logins and passwords inside corporate networks in the 80s and
 90s.  If it's more difficult to hack into the database itself than to
 break into a small, not-so-protected system at a large network provider
 and install a sniffer there that silently collects 10,000++ credit card
 numbers over some weeks - then sniffing *is* an issue.  We have seen it,
 and we will see it again.  SSL is a very good countermeasure against
 passive eavesdropping of this kind, and a lot of data suggests that
 active attacks like MITM are seen much less frequently.


All that is absolutely true, in that we can conjecture
that if we close everything else off, then sniffing will
become economic.  That's a fair statement.

But, go and work in one of these places for a while,
or see what Perry said yesterday:

 The day to day problem of security at real financial institutions is
 the fact that humans are very poor at managing complexity, and that
 human error is extremely pervasive.

Digital signatures have a big problem with meaning

2005-06-01 Thread Ian G
On Tuesday 31 May 2005 23:43, Anne & Lynn Wheeler wrote:
 in most business scenarios ... the relying party has previous knowledge
 and contact with the entity that they are dealing with (making the
 introduction of PKI digital certificates redundant and superfluous).

Yes, this is directly what we found with the signed
contracts for digital instruments (aka ecash).  We did
all the normal digital signature infrastructure (using PGP
WoT and even x.509 PKI for a while) but the digsig
never actually made or delivered any meaningful biz
results.  In contrast, it was all the other steps that
we considered from the biz environment that made
the difference:  a readable contract, a guarantee
that it wouldn't change, a solid linkage to every
transaction, and so forth and so on.
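
To make concrete what I mean by that linkage: the contract text is 
hashed, and that hash is quoted in every transaction, so the contract 
cannot quietly change after the fact and every payment points 
unambiguously at the terms it was issued under.  A minimal sketch 
(illustrative names and record format, not the actual Ricardo code):

import hashlib

def contract_id(contract_text):
    # The identifier of the contract is the hash of its exact text, so
    # any later edit to the document is detectable.
    return hashlib.sha256(contract_text.encode("utf-8")).hexdigest()

def make_transaction(contract_text, payer, payee, amount):
    # Every transaction carries the contract hash: the "solid linkage".
    return {"contract": contract_id(contract_text),
            "payer": payer, "payee": payee, "amount": amount}

def linkage_ok(tx, contract_text):
    # True iff the transaction refers to this exact contract text.
    return tx["contract"] == contract_id(contract_text)

contract = "The issuer promises to redeem each unit for one gram of gold."
tx = make_transaction(contract, "alice", "bob", 10)
assert linkage_ok(tx, contract)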

In the end, the digital signature was just crypto
candy.  We preserve it still because we want to
experiment with WoT between issuers and governance
roles, and because we need a signing process of
some form.  In any small scenario (1000 users)
that sort of linkage is better done outside the tech
and for large scenarios it is simply unproven whether
it can deliver.

http://iang.org/papers/ricardian_contract.html

iang

PS: must look up the exec summary of AADS one day!
-- 
Advances in Financial Cryptography:
   https://www.financialcryptography.com/mt/archives/000458.html



RE: Citibank discloses private information to improve security

2005-06-01 Thread Heyman, Michael
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED]] On Behalf Of Peter Gutmann
 Sent: Tuesday, May 31, 2005 1:29 PM
 
 In this situation, I believe that the users, through hard won 
 experience with computers, _correctly_ assumed this was a 
 false positive.

 Probably not.
 [SNIP text on user's thoughts on warning dialogs]

The false positive I was referring to is the "something is telling me
something unimportant" positive. I didn't mean to imply that the users
likely went through a thought process centered around the possible
causes of the certificate failure, specifically the likelihood of an
active man-in-the-middle vs. software bug vs. setup error vs. etc.

So, when the box popped up, in the "unimportant vs. important" choice
that the users went through, they correctly chose "unimportant". These
warning dialogs pop up regularly and usually they are crying wolf.

I've probably seen hundreds of signature validation warnings from
various websites for certificates and ActiveX and possibly other
signed content. I can't recall needing to heed even one of the warnings.
We are trying to detect man-in-the-middle or outright spoofing with
signatures and our false positive rate is through the roof. The false
positive rate must be zero or nearly zero to work as a useful detector
in real world situations.
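
The arithmetic behind that claim is simple base-rate math: if real 
man-in-the-middle attacks are rare, then even a small false positive 
rate means nearly every warning the user ever sees is a false alarm, 
and ignoring the dialog becomes the rational habit. A sketch with 
made-up, illustrative numbers:

def p_attack_given_warning(attack_rate, detection_rate, false_positive_rate):
    # Bayes' rule: how often a warning actually indicates an attack.
    p_warning = (detection_rate * attack_rate
                 + false_positive_rate * (1.0 - attack_rate))
    return (detection_rate * attack_rate) / p_warning

# Say one connection in a million is a real MITM, the warning catches
# every one of them, and one connection in a thousand trips a spurious
# warning:
p = p_attack_given_warning(1e-6, 1.0, 1e-3)
print("P(real attack | warning) = %.4f%%" % (100 * p))   # about 0.1%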

Defense in depth can help against spoofing - this includes valid
certificates, personalization (even if it is the less-than-optimal
Citibank-like solution), PetName, etc. Man-in-the-middle is harder given
that we have such a high false positive rate on our best weapon.
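
For reference, the petname approach mentioned above amounts to the user 
binding a name of their own choosing to a site's certificate, so a 
spoofed or substituted certificate simply shows up with no petname at 
all. A minimal sketch (illustrative fingerprints and names only):

petnames = {}   # certificate fingerprint -> name the user chose

def assign_petname(fingerprint, name):
    # Done once, when the user decides this really is their bank.
    petnames[fingerprint] = name

def display_name(fingerprint):
    # What the browser chrome would show for the current connection.
    return petnames.get(fingerprint, "(no petname - unrecognised site)")

assign_petname("AB:CD:EF:12", "my bank")
print(display_name("AB:CD:EF:12"))   # "my bank"
print(display_name("99:88:77:66"))   # unrecognised: be suspicious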

-Michael



Re: Digital signatures have a big problem with meaning

2005-06-01 Thread dan

Ian G writes:
 | 
 | In the end, the digital signature was just crypto
 | candy...
 |


On the one hand a digital signature should matter more
the bigger the transaction that it protects.  On the
other hand, the bigger the transaction the lower the
probability that it is between strangers who have no
other leverage for recourse.

And, of course, proving anything by way of dueling 
experts doesn't provide much predictability in a jury
system, e.g., OJ Simpson.

--dan




Re: "SSL stops credit card sniffing" is a correlation/causality myth

2005-06-01 Thread Ian G
On Tuesday 31 May 2005 23:43, Perry E. Metzger wrote:
 Ian G [EMAIL PROTECTED] writes:

Just on the narrow issue of data - I hope I've
addressed the other substantial points in the
other posts.

  The only way we can overcome this issue is data.

 You aren't going to get it. The companies that get victimized have a
 very strong incentive not to share incident information very
 widely.

On the issue of sharing data by victims, I'd strongly
recommend the paper by Schechter and Smith, FC03.
 "How Much Security is Enough to Stop a Thief?"
http://www.eecs.harvard.edu/~stuart/papers/fc03.pdf
I've also got a draft paper that argues the same thing
and speaks directly and contrarily to your statement:

Sharing data is part of the way towards better security.

(But I argue it from a different perspective to SS.)


 1) You have one anecdote. You really have no idea how
frequently this happens, etc.

The world for security in the USA changed dramatically
when Choicepoint hit.  Check out the data at:

http://pipeda.blogspot.com/2005/02/summaries-of-incidents-cataloged-on.html
http://www.strongauth.com/regulations/sb1386/sb1386Disclosures.html

Also, check out Adam's blog at

http://www.emergentchaos.com/

He has a whole category entitled Choicepoint for
background reading:

http://www.emergentchaos.com/archives/cat_choicepoint.html

Finally we have our data in the internal governance
and hacking breaches.  As someone said today, Amen
to that.  No more arguments, just say Choicepoint.

 2) It doesn't matter how frequently it happens, because no two
companies are identical. You can't run 100 choicepoints and see
what percentage have problems.

We all know that the attacker is active and can
change tactics.  But locksmiths still recommend
that you put a lock on your door that is a) a bit
stronger than the door and b) a bit better than your
neighbour's.  Just because there are interesting
quirks and edge cases in these sciences doesn't
mean we should wipe out other aspects of our
knowledge of scientific method.

 3) If you're deciding on how to set up your firm's security, you can't
say 95% of the time no one attacks you so we won't bother, for
the same reason that you can't say if I drive my car while
slightly drunk 95% of the time I'll arrive safe, because the 95%
of the time that nothing happens doesn't matter if the cost of the
5% is so painful (like, say, death) that you can't recover from
it.

Which is true regardless of whether you are
slightly drunk or not at all or whether a few
pills had been taken or tiredness hits.

Literally, like driving when not 100% fit, the
decision maker makes a quick decision based
on what they know.  The more they know, the
better off they are.  The more data they have,
the better informed their decision.

 In particular, you don't want to be someone on whose watch a 
 major breach happens. Your career is over even if it never happens
 to anyone else in the industry.

Sure.  Life's a bitch.  One can only do one's
best and hope it doesn't hit.  But have a read
of SS' paper, and if you still have the appetite,
try my draft:

http://iang.org/papers/market_for_silver_bullets.html

 Statistics and the sort of economic analysis you speak of depends on
 assumptions like statistical independence and the ability to do
 calculations. If you have no basis for calculation and statistical
 independence doesn't hold because your actors are not random processes
 but intelligent actors, the method is worthless.

No, that's way beyond what I was saying.

I was simply asserting one thing:  without data, we do
not know if an issue exists.  Without even a vaguely
measured sense of seeing it in enough cases to know
it is not an anomaly, we simply can't differentiate it
from all the other conspiracy theories, FUD sales,
government agendas, regulatory hobby horses,
history lessons written by victors, or what-have-you.

Ask any manager.  Go to him or her with a new
threat.  He or she will ask "who has this happened
to?"

If the answer is "it used to happen all the time in
1994 ..." then a manager could be forgiven for
deciding the data was stale.  If the answer is
"no-one", then no matter how risky, the likely
answer is "get out!"  If the answer is "these X
companies in the last month" then you've got
some mileage.

Data is everything.

iang
-- 
Advances in Financial Cryptography:
   https://www.financialcryptography.com/mt/archives/000458.html



Re: "SSL stops credit card sniffing" is a correlation/causality myth

2005-06-01 Thread Ian G
Hi Birger,

Nice debate!


On Wednesday 01 June 2005 13:52, Birger Tödtmann wrote:
 On Wednesday, 2005-06-01 at 12:16 +0100, Ian G wrote:
 [...]

  The point is this:  you *could*
  turn off SSL and it wouldn't make much difference
  to actual security in the short term at least, and maybe
  not even in the long term depending on the economic
  shifts.

 Which depends a bit on the scale of your "could switch off".  If some
 researchers start switching it off / inventing / testing something new,
 then your favourite phisher would not care, that's right.

Right.  That's the point.  It is not a universal
and inescapable bad to fiddle with SSL/PKI.

 [...]

  But every time this good stuff is suggested, the
  developers, cryptographers, security experts and
  what have you suck air in between their teeth and
  say "you can't change SSL or PKI because of this
  crypto blah blah reason".
 
  My point is you can change it.  Of course you
  can change it - and here's why:  it's not being
  economically used over here (listening), and
  right over there (phishing), there is an economic
  loss awaiting attention.

 Maybe.  But there's a flip-side to that coin.  SSL and correlated
 technology helped to shift the common attack methods from sniffing (it
 was widely popular back then to install a sniffer wherever a hacker got
 his foot inside a network) towards advanced, in some sense social
 engineering attacks like phishing *because* it shifted the economics
 for the adversaries as it was more and more used to protect sensitive
 data-in-flight (and sniffing wasn't going to get him a lot of credit
 card data anymore).


OK, and that's where we get into poor use of
data.  Yes, sniffing of passwords existed back
then.  So we know that sniffing is quite possible
and, on a reasonable scale, technically plausible.

But the motive of sniffing back then was different.
It was for attacking boxes.  Access attack.  Not
for the purpose of theft of commercial data.  It
was a postulation that those that attacked boxes
for access would also sniff for credit cards.  But,
we think that to have been a stretch (hence the
outrageous title of this post) at least up until
recently.

Before 2004, these forces and
attackers were disconnected.  In 2004 they joined
forces.  In which case, you do now have quite a
good case that the installation of sniffers could be
used if there was nothing else worth picking up.
So at least we now have the motive cleared up,
if not the economic attack.

(Darn ... I seem to have argued your case for you ;-) )

 That this behaviour (sniffing) is a thing of the past does not mean it's
 not coming back to you if things are turned around: adversaries are
 strategically thinking people that adapt very fast to new circum-
 stances.

Indeed.  It also doesn't mean that they will come
and attack.  Maybe it is a choice between the
attack that is happening right now and the attack
that will come back.  Or maybe the choice is
not really there, maybe we can cover both if
we put our thinking caps on?

 The discussion reminds me a bit of other popular economic issues: Many
 politicians and some economists all over the world, every year, are
 coming back to asking "Can't we loosen the control on inflation a bit?
 Look, inflation is a thing of the past, we never got over 3% the last
 umpteen years, let's trigger some employment by relaxing monetary
 discipline now."  The point is: it might work - but if not, your economy
 may end up in tiny little pieces.  It's quite a risk, because you cannot
 test it.  So the stance of many people is to be very conservative on
 things like that - and security folks are no exception.  Maybe fiddling
 with SSL is really a nice idea.  But if it fails at some point and we
 don't have a fallback infrastructure that's going to protect us from the
 sniffer-collector of the 90s, adversaries will be quite happy to bring
 them to new interesting uses then

Nice analogy!  Like all analogies it should be taken
for descriptive power, not prescription.

The point being that one should not slavishly stick
to an argument, one needs to establish principles.
One principle is that we protect where money is being
lost, over and above somewhere where someone
says it was once lost in the past.  And at least then
we'll learn the appropriate balance when we get it
wrong, which can't be much worse than now, coz
we are getting it really wrong at the moment.

(On the monetary economics analogy, if you said your
principle was to eliminate inflation, I'd say fine!  There
is an easy way to do just that, just use gold as money,
which has maintained its value throughout recorded
history, not just the last century!  The targets debate
has been echoing on for decades, and there is no
real end in sight.)

  So I would suggest that listening for credit cards will
  never ever be an economic attack.  Sniffing for random
  credit cards at the doorsteps of amazon will never ever
  be an economic attack, not because it isn't possible,

Re: "SSL stops credit card sniffing" is a correlation/causality myth

2005-06-01 Thread Ian G
On Tuesday 31 May 2005 19:38, Steven M. Bellovin wrote:
 In message [EMAIL PROTECTED], Ian G writes:
 On Tuesday 31 May 2005 02:17, Steven M. Bellovin wrote:
  In message [EMAIL PROTECTED], James A. Donald writes:
  --
  PKI was designed to defeat man in the middle attacks
  based on network sniffing, or DNS hijacking, which
  turned out to be less of a threat than expected.
 
  First, you mean the Web PKI, not PKI in general.
 
  The next part of this is circular reasoning.  We don't see network
  sniffing for credit card numbers *because* we have SSL.
 
 I think you meant to write that James' reasoning is
 circular, but strangely, your reasoning is at least as
 unfounded - correlation not causality.  And I think
 the evidence is pretty much against any causality,
 although this will be something that is hard to show,
 in the absence.

 Given the prevalence of password sniffers as early as 1993, and given
 that credit card number sniffing is technically easier -- credit card
 numbers will tend to be in a single packet, and comprise a
 self-checking string -- I stand by my statement.
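
(For reference, the self-checking property referred to here is 
presumably the Luhn check digit that card numbers carry, which does 
make candidate numbers cheap to validate in captured traffic. A minimal 
sketch:)

def luhn_valid(number):
    # True iff the digit string passes the Luhn checksum that credit
    # card numbers carry; card numbers are at least 13 digits long.
    digits = [int(c) for c in number if c.isdigit()]
    if len(digits) < 13:
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))   # True: well-known test number
print(luhn_valid("4111111111111112"))   # False: checksum fails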


Well, I'm not arguing it is technically hard.  It's just
uneconomic.  In the same sense that it is not technically
difficult for us to get in a car and go run someone
over;  but we still don't do it.  And we don't ban the
roads nor insist on our butlers walking with a red
flag in front of the car, either.  Well, not any more.

So I stand by my statement - correlation is not causality.

  * AFAICS, a non-trivial proportion of credit
 card traffic occurs over totally unprotected
 traffic, and that has never been sniffed as far as
 anyone has ever reported.  (By this I mean lots of
 small merchants with MOTO accounts that don't
 bother to set up proper SSL servers.)

 Given what a small percentage of ecommerce goes to those sites, I don't
 think it's really noticeable.


Exactly my point.  Sniffing isn't noticeable.  Neither
in the cases we know it could happen, nor in the
areas.  The one place where it has been noticed is
with passwords and what we know from that experience
is that even the slightest security works to overcome
that threat.  SSH is overkill, compared to the password
mailouts that successfully protect online password sites.

  * We know that from our experiences
 of the wireless 802.11 crypto - even though we've
 got repeated breaks and the FBI even demonstrating
 how to break it, and the majority of people don't even
 bother to turn on the crypto, there remains practically
 zero evidence that anyone is listening.
 
   FBI tells you how to do it:
   https://www.financialcryptography.com/mt/archives/000476.

 Sure -- but setting up WEP is a nuisance.  SSL (mostly) just works.

SSH just works - and it worked directly against the
threat you listed above (password sniffing).  But it
has no PKI to speak of, and this discussion is about
whether PKI protects people, because it is PKI that is
supposed to protect against spoofing - a.k.a. phishing.

And it is PKI that makes SSL not "just work".
Anyone who's ever had to set up an Apache web
server for SSL has to have asked themselves the
question ... why doesn't this just work?

 As 
 for your assertion that no one is listening, I'm not sure what kind of
 evidence you'd seek.  There's plenty of evidence that people abuse
 unprotected access points to gain connectivity.

Simply, evidence that people are listening.  Sniffing
by means of the wire.

Evidence that people abuse unprotected access points
to gain connectivity has nothing to do with sniffing traffic to steal
information.  That's theft of access, which is a fairly
minor issue, especially as it doesn't have any
economic damages worth speaking of.  In fact,
many cases seem to be more accidental access
where neighbours end up using each other's access
points because the software doesn't know where the
property lines are.


  Since many of
  the worm-spread pieces of spyware incorporate sniffers, I'd say that
  part of the threat model is correct.
 
 But this is totally incorrect!  The spyware installs on the
 users' machines, and thus does not need to sniff the
 wire.  The assumption of SSL is (as written up in Eric's
 fine book) that the wire is insecure and the node is
 secure, and if the node is insecure then we are sunk.

 I meant precisely what I said and I stand by my statement.  I'm quite
 well aware of the difference between network sniffers and keystroke
 loggers.


OK, so maybe I am incorrectly reading this - are you
saying that spyware is being delivered that incorporates
wire sniffers?  Sniffers that listen to the Ethernet traffic?

If that's the case, that is the first I've heard of it.  What
is it that these sniffers are listening for?

   Eric's book and 1.2 The Internet Threat Model
   http://iang.org/ssl/rescorla_1.html
 
 Presence of keyboard sniffing does not give us any
 evidence at all towards wire sniffing and only serves
 to further embarrass the SSL threat model.
 
  As for DNS hijacking -- that's what's