Re: Trojan horse attack involving many major Israeli companies, executives

2005-06-01 Thread Anne Lynn Wheeler

Amir Herzberg wrote:
Nicely put, but I think not quite fair. From friends in financial and 
other companies in the states and otherwise, I hear that Trojans are 
very common there as well. In fact, based on my biased judgement and 
limited exposure, my impression is that security practice is much better 
in Israeli companies - both providers and users of IT - than in 
comparable companies in most countries. For example, in my `hall of 
shame` (link below) you'll find many US and multinational companies 
which don't protect their login pages properly with SSL (PayPal, Chase, 
MS, ...). I've found very few Israeli companies, and of the few I've 
found, two actually acted quickly to fix the problem - which is rare! 
Most ignored my warning, and few sent me coupons :-) [seriously]


Could it be that such problems are more often covered-up in other 
countries? Or maybe that the stronger awareness in Israel also implies 
more attackers? I think both conclusions are likely. I also think that 
this exposure will further increase awareness among Israeli IT managers 
and developers, and hence improve the security of their systems.


there is the story of the (state side) financial institution that was 
outsourcing some of its y2k remediation and failed to perform due 
diligence on the (state side) lowest bidder ... until it was too late 
and they were faced with having to deploy the software anyway.


one of the ways SSL gets subverted ... originally it was supposed to be used 
for the whole shopping experience, from the URL the enduser entered, thru 
shopping, checkout and payment. webservers found that with SSL they took 
an 80-90% performance hit on their thruput ... so they saved the use of 
SSL until checkout and payment. the SSL countermeasure to MITM-attack is 
that the URL the user entered is checked against the URL in the 
webserver certificate. However, the URLs the users were entering weren't 
SSL/HTTPS ... they were just standard stuff ... and so there wasn't any 
countermeasure to MITM-attack.


If the user had gotten to a spoofed MITM site ... they could have done 
all their shopping and then clicked the checkout button ... which might 
provide HTTPS/SSL. however, if it was a spoofed site, it is highly 
probable that the HTTPS URL provided by the (spoofed site) checkout 
button was going to match the URL in any transmitted digital 
certificate. So for all intents and purposes ... most sites make very 
little use of https/ssl as a countermeasure for MITM-attacks ... simply 
encryption as a countermeasure for skimming/harvesting (eavesdropping).
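The check described above can be sketched as a toy (the function name and the URLs are made up for illustration); the point is that the hostname comparison only has something to work with when the URL the user actually typed is HTTPS:

```python
from urllib.parse import urlparse

def mitm_check(entered_url: str, cert_hostname: str) -> bool:
    """Return True if the SSL hostname check can actually protect this request.

    The countermeasure only fires when the user-entered URL is https AND
    its hostname matches the hostname in the server's certificate.
    """
    parts = urlparse(entered_url)
    if parts.scheme != "https":
        # user typed a plain-http URL; there is no certificate to compare
        # against, so a MITM can serve anything -- including a later https
        # checkout link that matches the MITM's own perfectly valid certificate
        return False
    return parts.hostname == cert_hostname

# the common shopping flow: plain-http entry point, so no MITM protection
assert not mitm_check("http://shop.example.com/", "shop.example.com")
# https entry with a matching certificate hostname: the check protects the user
assert mitm_check("https://shop.example.com/", "shop.example.com")
# spoofed site whose checkout URL matches the *spoofed site's* own valid cert:
# the check passes, because the user never typed the real merchant's URL
assert mitm_check("https://shop-example.evil.net/", "shop-example.evil.net")
```

The last assertion is the scenario in the text: the crook supplies both the URL and a certificate that matches it, so the comparison succeeds and SSL degrades to mere eavesdropping protection.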


in general, if the naive user is clicking on something that obfuscates 
the real URL (in some cases they don't even have to obfuscate the real 
URL) ... then the crooks can still utilize https/ssl ... making sure 
that they have a valid digital certificate that matches the URL that 
they are providing.


the low-hanging fruit of fraud ROI ... says that the crooks are going to 
go after the easiest target, with the lowest risk, and the biggest 
bang-for-the-buck. that has mostly been the data-at-rest transaction 
files. then it is other attacks on either of the end-points. attacking 
generalized internet channels for harvesting/skimming appears to be one 
of the lowest paybacks for the effort. in other domains, there have been 
harvesting/skimming attacks ... but again mostly on end-points ... and 
these are dedicated/concentrated environments where the only traffic ... 
is traffic of interest (any extraneous/uninteresting stuff has already 
been filtered out).




Re: Dell to Add Security Chip to PCs

2005-02-04 Thread Anne Lynn Wheeler
Peter Gutmann wrote:
Neither.  Currently they've typically been smart-card cores glued to the 
MB and accessed via I2C/SMB.
and chips that typically have had eal4+ or eal5+ evaluations. hot topic 
in 2000, 2001 ... at the intel developer's forums and rsa conferences



Re: Dell to Add Security Chip to PCs

2005-02-04 Thread Anne Lynn Wheeler
Erwann ABALEA wrote:
  I've read your objections. Maybe I wasn't clear. What's wrong in
installing a cryptographic device by default on PC motherboards?
I work for a PKI 'vendor', and for me, software private keys are
nonsense. How will you convince Mr Smith (or Mme Michu) to buy an
expensive CC EAL4+ evaluated token, install the drivers, and solve the
inevitable conflicts that will occur, simply to store his private key? You
first have to be good to convince him to justify the extra expense.
If a standard secure hardware cryptographic device is installed by default
on PCs, it's OK! You could obviously say that Mr Smith won't be able to
move his certificates from machine A to machine B, but more than 98% of
the time, Mr Smith doesn't need to do that.
Installing a TCPA chip is not a bad idea. It is as 'trustable' as any
other cryptographic device, internal or external. What is bad is accepting
to buy a software that you won't be able to use if you decide to claim
your ownership... Palladium is bad, TCPA is not bad. Don't confuse the
two.
the cost of EAL evaluation typically has already been amortized across 
a large number of chips in the smartcard market. the manufacturing costs 
of such a chip are pretty much proportional to the chip size ... and the 
thing that drives chip size tends to be the amount of eeprom memory.

in the tcpa track at the intel developer's forum a couple years ago ... i 
gave a talk and claimed that i had designed and significantly cost-reduced 
such a chip by throwing out all features that weren't absolutely necessary 
for security. I also mentioned that two years after i had finished such 
a design ... tcpa was starting to converge to something similar. 
the head of tcpa in the audience quipped that i didn't have a committee 
of 200 helping me with the design.



Re: Banks Test ID Device for Online Security

2005-01-06 Thread Anne Lynn Wheeler
Bill Stewart wrote:
Yup.  It's the little keychain frob that gives you a string of numbers,
updated every 30 seconds or so, which stays roughly in sync with a server,
so you can use them as one-time passwords
instead of storing a password that's good for a long term.
So if the phisher cons you into handing over your information,
they've got to rip you off in nearly-real-time with a MITM game
instead of getting a password they can reuse, sell, etc.
That's still a serious risk for a bank,
since the scammer can use it to log in to the web site
and then do a bunch of transactions quickly;
it's less vulnerable if the bank insists on a new SecurID hit for
every dangerous transaction, but that's too annoying for most customers.
in general, it is something you have authentication as opposed to the 
common shared-secret something you know authentication.

while a window of vulnerability does exist (supposedly something that 
proves you are in possession of something you have), it is orders of 
magnitude smaller than with shared-secret something you know 
authentication.

there are two scenarios for shared-secret something you know 
authentication

1) a single shared-secret used across all security domains ... a 
compromise of the shared-secret has a very wide window of vulnerability 
plus a potentially very large scope of vulnerability

2) a unique shared-secret for each security domain ... which helps limit 
the scope of a shared-secret compromise. this potentially worked with 
one or two security domains ... but with the proliferation of the 
electronic world ... it is possible to have scores of security domains, 
resulting in scores of unique shared-secrets. scores of unique 
shared-secrets typically exceed human memory capacity, with the 
result that all shared-secrets are recorded someplace; which in turn 
becomes a new exploit/vulnerability point.

various financial shared-secret exploits are attractive because with 
modest effort it may be possible to harvest tens of thousands of 
shared-secrets.

One-at-a-time, real-time social engineering may take comparable 
effort ... but only yields a single piece of authentication material 
with a very narrow time-window, and the fraud ROI might be several orders 
of magnitude less. It may appear to still be a large risk to individuals 
... but for a financial institution, it may be a relatively small risk to 
cover the situation ... compared to a criminal being able to compromise 
50,000 accounts with comparable effort.

In one presentation, the comment was made that the only thing that 
they really needed to do is make it more attractive for the criminals to 
attack somebody else.

It would be preferable to have something you have authentication 
that results in a unique value ... every time the device is used. Then no 
amount of social engineering could result in getting the victim to give 
up information that results in compromise. However, even with a relatively 
narrow window of vulnerability ... it still could reduce risk/fraud to 
financial institutions by several orders of magnitude (compared to 
existing prevalent shared-secret something you know authentication 
paradigms).

old standby posting about security proportional to risk
http://www.garlic.com/~lynn/2001h.html#61


Re: Academics locked out by tight visa controls

2004-09-20 Thread Anne Lynn Wheeler
At 08:03 AM 9/20/2004, John Kelsey wrote:
I guess I've been surprised this issue hasn't seen a lot more 
discussion.  It takes nothing more than to look at the names of the people 
doing PhDs and postdocs in any technical field to figure out that a lot of 
them are at least of Chinese, Indian, Arab, Iranian, Russian, etc., 
ancestry.  And only a little more time to find out that a lot of them are 
not citizens, and have a lot of hassles with respect to living and working 
here.  What do you suppose happens to the US lead in high-tech, when we 
*stop* drawing in some large fraction of the smartest, hardest-working 
thousandth of a percent of mankind?

in '94 there was a report (possibly sjmn?) that said at least half of all 
cal. univ. tech. PhDs were awarded to the foreign born. during some of the 
tech green card discussions in the late '90s ... it was pointed out that the 
internet boom (bubble) was heavily dependent on all these foreign born ... 
since there were hardly enough born in the usa to meet the demand.

in the late 90s there were some reports that many of these graduates had 
their education paid by their gov. with directions to enter a us company 
in strategic high tech areas for 4-8 years ... and then return home as a 
tech transfer effort. i was told in the late 90s about one optical 
computing group in a high tech operation ... where all members of the 
group fell into this category (foreign born with an obligation to return 
home after some period).

another complicating factor competing for resources during the late 90s 
high-tech, internet boom (bubble?) period was the significant resource 
requirement for y2k remediation efforts.

nsf had a recent study on part of this
http://www.nsf.gov/sbe/srs/infbrief/ib.htm
graduate enrollment in science and engineering fields reaches new peak; 1st 
time enrollment of foreign students drops
http://www.nsf.gov/sbe/srs/infbrief/nsf04326/start.htm

--
Anne & Lynn Wheeler  http://www.garlic.com/~lynn/



Re: An attack on paypal

2003-06-12 Thread Anne Lynn Wheeler
At 10:56 AM 6/11/2003 -0400, Sunder wrote:
In either case, we wouldn't need to worry about paying Verisign or anyone
else if we had properly secured DNS.  Then you could trust those pop-up
self-signed SSL cert warnings.
actually, if you had a properly secured DNS ... then you could trust DNS 
to distribute public keys bound to a domain name in the same way they 
distribute ip-addresses bound to a domain name.

the certificates serve two purposes: 1) is the server that we think we are 
talking to really the server we are talking to, and 2) key-exchange for 
establishing an encrypted channel. a properly secured DNS would allow 
information distributed by DNS to be trusted ... including a server's 
public key ... and given the public key ... it would be possible to do 
the rest of the SSL operation (w/o requiring certificates), which is 
establishing an agreed upon session secret key.
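A toy sketch of that idea, with a dict standing in for a secured DNS zone and a deliberately undersized Diffie-Hellman group (the names SECURE_DNS and client_connect are made up for illustration): the lookup returns the server's public key the way it would return an ip-address, and a session key falls out of the key agreement with no certificate involved.

```python
import hashlib
import secrets

# toy Diffie-Hellman group -- illustrative only, far too small for real use
P = 2 ** 127 - 1   # a Mersenne prime
G = 3

# stand-in for a secured DNS zone: name -> (ip-address, server public key)
server_priv = secrets.randbelow(P - 2) + 1
SECURE_DNS = {
    "shop.example.com": ("192.0.2.10", pow(G, server_priv, P)),
}

def client_connect(name: str) -> tuple[int, bytes]:
    """Fetch the server's public key from (trusted) DNS, derive a session key."""
    _addr, server_pub = SECURE_DNS[name]          # trusted, like an A record
    client_priv = secrets.randbelow(P - 2) + 1
    client_pub = pow(G, client_priv, P)           # sent to the server in the clear
    shared = pow(server_pub, client_priv, P)      # no certificate needed
    session_key = hashlib.sha256(str(shared).encode()).digest()
    return client_pub, session_key

client_pub, client_key = client_connect("shop.example.com")
# the server derives the same session key from the client's public value
server_key = hashlib.sha256(str(pow(client_pub, server_priv, P)).encode()).digest()
assert client_key == server_key
```

The trust question has simply moved from the CA into DNS itself, which is exactly the argument being made: secure the DNS and the certificate layer becomes redundant.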
--
Anne & Lynn Wheeler  http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm



virus attack on banks (was attack on paypal)

2003-06-10 Thread Anne Lynn Wheeler
At 06:12 PM 6/8/2003 -0600, Anne & Lynn Wheeler wrote:
at a recent cybersecurity conference, somebody made the statement that of 
the current outsider internet exploits, approximately 1/3rd are buffer 
overflows, 1/3rd are network traffic containing a virus that infects a 
machine because of automatic scripting, and 1/3rd are social engineering 
(convincing somebody to divulge information). As far as I know, eavesdropping 
on network traffic doesn't even show up as a blip on the radar screen.

virus attempting to harvest (shared-secret, single-factor) passwords at 
financial institutions
http://www.smh.com.au/articles/2003/06/10/1055010959747.html

and somewhat related:
http://www.garlic.com/~lynn/aepay11.htm#53 authentication white paper

--
Anne & Lynn Wheeler  http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm



Re: Maybe It's Snake Oil All the Way Down

2003-06-06 Thread Anne Lynn Wheeler
At 04:42 PM 6/4/2003 -0700, Eric Rescorla wrote:
Nonsense. One can simply cache the certificate, exactly as
one does with SSH. In fact, Mozilla at least does exactly
this if you tell it to. The reason that this is uncommon
is because the environments where HTTPS is used
are generally spontaneous and therefore certificate caching
is less useful.


there are actually two scenarios ... one is to pre-cache it (so that its 
transmission never actually has to happen) and the other is to compress it 
to zero bytes. detailed discussion of certificate pre-caching and 
certificate zero byte compression:
http://www.garlic.com/~lynn/ansiepay.htm#aadsnwi2
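The SSH-style caching in the quoted text is essentially trust-on-first-use: remember a fingerprint per host and only complain when it changes. A minimal sketch (the cache dict and function name are made up for illustration):

```python
import hashlib

# host -> certificate fingerprint seen on first contact (SSH known_hosts style)
_cache: dict[str, str] = {}

def check_certificate(host: str, cert_der: bytes) -> str:
    """Trust-on-first-use: cache the cert fingerprint, flag any later change."""
    fp = hashlib.sha256(cert_der).hexdigest()
    if host not in _cache:
        _cache[host] = fp      # first contact: remember it; later visits could
        return "first-use"     # skip retransmission entirely (pre-cached)
    return "ok" if _cache[host] == fp else "CHANGED"   # possible MITM

assert check_certificate("bank.example", b"cert-bytes-v1") == "first-use"
assert check_certificate("bank.example", b"cert-bytes-v1") == "ok"
assert check_certificate("bank.example", b"cert-bytes-v2") == "CHANGED"
```

Once the fingerprint is cached, the certificate's transmission "never actually has to happen" on later connections, which is the pre-caching scenario described above.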

the typical use of HTTPS for e-commerce is to hide the account number on 
its way to the financial institution. for the most part the merchant is 
primarily interested in the response from the consumer's financial 
institution on whether or not the merchant gets paid. If it weren't for the 
associated business processes, the merchant could get by with never knowing 
anything at all about the consumer (the merchant just passes the account 
number on ... and gets back what they are really interested in ... the 
notification from the bank that they will get paid).

So a HTTPS-type solution is that the consumer pre-caches their bank's 
certificate (when they establish a bank account) ... and they transmit 
the account number hidden using the bank's public key. This happens to 
pass thru the merchant's processing ... but for purposes of the 
authorization, the merchant never really has to see it. The protocol would 
need to address minor issues like replay attacks ... and could be done in a 
single round trip ... w/o all the SSL protocol chatter. Actually, it isn't 
so much pre-caching their bank's certificate ... as loading their bank's 
public key into a table ... analogous to the way CA public keys are 
loaded into tables (aka using out-of-band processing ... the convention 
that they may be self-signed and encoded in a certificate format is an 
anomaly of available software and in no way implies a PKI). The primary 
purpose of HTTPS hasn't been to have a secure channel with the merchant; 
the primary purpose of HTTPS is to try and hide the consumer's account 
number as it makes its way to the consumer's financial institution.

The other solution is the X9.59 standard (preserve the integrity of the 
financial infrastructure for all electronic retail payments, not just 
internet, not just POS, not just credit, ALL: credit, debit, stored value, 
etc) that creates authenticated transactions and account numbers that can 
only be used in authenticated transactions. The consumer's public key is 
registered in their bank account (out of band process, again no PKI). X9.59 
transactions are signed and transmitted. Since the account number can only 
be used in authenticated transactions ... it changes from needing 
encryption to hide the value as part of a shared-secret paradigm to purely 
a paradigm that supports integrity and authentication. As in the above 
scenario, the merchant passes the value thru on its way to the consumer's 
financial institution, and is focused on getting the approved/disapproved 
answer back about whether they will be paid. As in the bank HTTPS scenario 
where the bank's public key is pre-cached at the consumer, the pre-caching 
of the consumer's public key at the bank ... involves no PKI 
business processes (although there may be some similarities in that the 
transport of the public key involves encoding in a certificate-defined 
format). misc. x9.59 refs:
http://www.garlic.com/~lynn/index.html#x959
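The x9.59-style flow can be sketched with textbook-toy RSA numbers (the account table, function names, and the 4-digit modulus are purely illustrative, useless for real security): the consumer's public key is on file at the bank with no PKI, the merchant passes the signed transaction through, and the bank verifies against the registered key.

```python
import hashlib

# classic textbook RSA toy: p=61, q=53 -> n=3233, e=17, d=2753
N, E, D = 3233, 17, 2753

# bank's account table: account number -> public key registered out of band
ACCOUNTS = {"4111-0000-0000-0000": (N, E)}

def h(msg: bytes) -> int:
    """Hash the transaction down to a residue the toy modulus can hold."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign(msg: bytes, d: int, n: int) -> int:
    return pow(h(msg), d, n)            # consumer signs the transaction

def bank_verify(account: str, msg: bytes, sig: int) -> bool:
    n, e = ACCOUNTS[account]            # key registered in the account record
    return pow(sig, e, n) == h(msg)     # authenticated; no hiding needed

txn = b"pay merchant 42.00 order 7788"
sig = sign(txn, D, N)
assert bank_verify("4111-0000-0000-0000", txn, sig)   # merchant passes it thru
assert not bank_verify("4111-0000-0000-0000", txn, (sig + 1) % N)  # tampered
```

Note what is absent: no certificate travels with the transaction and the merchant never interprets the account number, which is the point of the authenticated-transaction paradigm described above.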

Both pre-caching solutions are between the business entities that are 
directly involved; the consumer and the consumer's financial institution 
based on having an established business relationship.

The invention of PKI was primarily to address the issue of an event between 
two parties that had no prior business relationship, possibly weren't 
going to have any future business relationship, and would conclude 
their business relying on some mutual trust in the integrity of a 3rd party 
w/o actually having to resort to an online environment. The e-commerce 
scenario is that there is a real-time, online transaction with the trusted 
3rd party (the consumer's financial institution) involving a prior business 
relationship. This negates the basic original assumptions about the 
environment that PKIs are targeted at addressing.
--
Anne & Lynn Wheeler  http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm



Re: Maybe It's Snake Oil All the Way Down

2003-06-06 Thread Anne Lynn Wheeler
At 04:24 PM 6/6/2003 -0700, James A. Donald wrote:

I don't think so.

??? public key registered in place of shared-secret?

NACHA debit trials using digitally signed transactions did it with both 
software keys as well as hardware tokens.
http://internetcouncil.nacha.org/News/news.html
in the above, scroll down to July 23, 2001 ... it has a pointer to a detailed report.

X9.59 straightforwardly establishes it as a standard ... with some activity 
moving on to ISO
http://www.garlic.com/~lynn/index.html#x959

pk-init draft for kerberos specifies that public key can be registered in 
place of shared secret.

the following has a demo of it with radius, with public keys registered in 
place of shared-secrets.
http://www.asuretee.com/
the radius implementation has been done by a number of people.

in all of these cases, there is a change in the business process and/or 
business relationship ... it doesn't introduce totally unrelated parties to 
the business activities. there is digital signing on the sender's side 
(actually a subset of existing PKI technology) and digital signature 
verification on the receiver's side (again a subset of existing PKI 
technology). To the extent that there is impact on existing business 
process ... it is like introducing x9.59 authentication for 
credit transactions that currently have relatively little authentication 
... and as a result would eliminate a major portion of the existing credit 
card transaction related fraud.

The big issue isn't the availability of the technology ... although there 
is a slight nit in the asuretee case being FIPS186-2, ecdsa ... and having 
support in CAPI and related infrastructures. It not working (easily) is 
like when my wife and I were doing the original payment gateway ... with 
this little client/server startup in menlo park (later moved to mountain 
view and since bought by AOL) and people saying that SSL didn't 
exist ... misc ref from the past
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

--
Anne & Lynn Wheeler  http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm