At 11:43 PM 6/8/2003 +0100, Dave Howe wrote:
>HTTPS works just fine.
>The problem is - people are broken.
>At the very least, verisign should say "ok so '..go1d..' is a valid server
>address, but doesn't it look suspiciously similar to this '..gold..' site over
>here?" for https://pseudo-gold-site/ - but really, if users are going to
>fill in random webforms sent by email, they aren't going to be safe under
>any circumstances; the thing could send by unsecured http to any site on the
>planet, then redirect to the real gold site for a generic "transaction
>completed" or even "failed" screen
>A world where a random paypal hack like this one doesn't work is the same as
>the world where there is no point sending out a Nigerian scam as you will
>never make a penny on it - and yet, the Nigerian scam is still profitable
>for the con artists.

In a world where there are repeated human mistakes/failures .... at some
point it is recognized that people aren't perfect and the design is changed
to accommodate people's foibles. In some respects that is what helmets, seat
belts, and air bags have been about.

In the past, systems have required long, complicated passwords that are hard
to remember and must be changed every month. That almost worked when a
person had to deal with a single shared-secret. When it became a fact of
life that a person might have tens of such different interfaces, it became
impossible. It wasn't the fault of any specific institution; it was a
failure of humans to deal with large numbers of extremely complex,
frequently changing passwords. Because of known human foibles, it might be a
good idea to start shifting from an infrastructure with large numbers of
shared-secrets to a non-shared-secret paradigm.
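
To make the difference concrete, here is a minimal sketch (Python, using the
third-party "cryptography" package for the public-key half; all names and
values are made up for illustration, not taken from any particular standard
or implementation) of what a relying party ends up storing under each
paradigm:

import os, hashlib, hmac
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# shared-secret paradigm: every relying party holds (and every person must
# remember) something that is, by itself, sufficient to impersonate the person
password = b"correct horse battery staple"
salt = os.urandom(16)
stored_verifier = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

def check_password(candidate: bytes) -> bool:
    return hmac.compare_digest(
        stored_verifier,
        hashlib.pbkdf2_hmac("sha256", candidate, salt, 100_000))

# non-shared-secret paradigm: the relying party registers only a public key;
# nothing it stores (or asks for) is sufficient to impersonate the person
private_key = Ed25519PrivateKey.generate()        # stays with the person/token
registered_public_key = private_key.public_key()  # all the verifier keeps

challenge = os.urandom(32)                        # fresh per authentication
signature = private_key.sign(challenge)
try:
    registered_public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")

Multiply the first half by tens of institutions and the person has tens of
different shared-secrets to memorize and change every month; multiply the
second half and the verifiers still hold nothing that enables impersonation.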

At a recent cybersecurity conference, somebody made the statement that, of
the current outsider/internet exploits, approximately 1/3rd are buffer
overflows, 1/3rd are network traffic containing a virus that infects a
machine because of automatic scripting, and 1/3rd are social engineering
(convincing somebody to divulge information). As far as I know, eavesdropping
on network traffic doesn't even show up as a blip on the radar screen.

In the following thread on a financial authentication white paper:
http://www.garlic.com/~lynn/aepay11.htm#53 Authentication white paper
http://www.garlic.com/~lynn/aepay11.htm#54 FINREAD was. Authentication white paper
http://www.garlic.com/~lynn/aepay11.htm#55 FINREAD ... and as an aside
http://www.garlic.com/~lynn/aepay11.htm#56 FINREAD was. Authentication white paper

a point is made that the X9.59 standard doesn't directly address the
privacy aspect of security (i.e. no encryption or hiding of data). However,
it changes the paradigm so that the financial account number no longer
represents a shared-secret, and it can be supported with two-factor
authentication, i.e. a "something you have" token and a "something you
know" PIN. The "something you know" PIN is used to enable the token, but is
not a shared secret. Furthermore, strong authentication can be
justification for eliminating the need for a name or other identification
information in the transaction.
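
A hypothetical sketch of that flow (again Python with the "cryptography"
package; the message format, account number, and PIN handling here are made
up for illustration and are not the actual X9.59 encoding): the PIN enables
the token, the token signs the transaction, and the consumer's financial
institution verifies against the public key it has on file for the account:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

class Token:
    """'Something you have'; the 'something you know' PIN only enables it
    locally and is never transmitted or registered anywhere."""
    def __init__(self, pin: str):
        self._pin = pin
        self._key = Ed25519PrivateKey.generate()   # never leaves the token

    def public_key(self):
        return self._key.public_key()

    def sign_transaction(self, pin: str, transaction: bytes) -> bytes:
        if pin != self._pin:
            raise PermissionError("wrong PIN -- token stays disabled")
        return self._key.sign(transaction)

# registration: the institution keeps only an account-number -> public-key
# mapping, so the account number itself is no longer a shared-secret
token = Token(pin="8431")
accounts = {"acct-0012345": token.public_key()}

# the transaction carries no name or other identification information
txn = b"acct-0012345|merchant-77|USD 49.95"
sig = token.sign_transaction("8431", txn)

try:
    accounts["acct-0012345"].verify(sig, txn)
    print("authorized")
except InvalidSignature:
    print("declined")

# a crook who has harvested the account number (or other "privacy"
# information) but doesn't hold the token can't produce a valid signature,
# so what can be harvested is no longer sufficient for a fraudulent
# transaction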

However, if X9.59 strong authentication is used with two-factor
authentication and no identification information is necessary .... then
people would become more suspicious if privacy information were requested.
Also, since privacy information is no longer sufficient for performing a
fraudulent transaction, it might mitigate that kind of social engineering
attack.

The types of social engineering attacks then become convincing people to
insert their hardware token and do really questionable things, or to mail
somebody their existing hardware token along with the valid PIN (possibly
as part of an exchange for a replacement). The cost/benefit ratio does
start to change, since there is now much more work on the crooks' part for
the same or less gain. One could also claim that such activities are just
part of child-proofing the environment (even for adults). On the other
hand, it could be taken as analogous to designing systems to handle
observed failure modes (even when the failures are human and not hardware
or software). Misc. identity theft and credit card fraud references:
http://www.consumer.gov/idtheft/cases.htm
http://www.usdoj.gov/criminal/fraud/idtheft.html
http://www.garlic.com/~lynn/aadsm14.htm#22 Identity Theft Losses Expect to hit $2 trillion
http://www.garlic.com/~lynn/subpubkey.html#fraud


Slightly related, a recent thread that brought up buffer overflow exploits:
http://www.garlic.com/~lynn/2003j.html#4 A Dark Day

and the report that Multics hasn't ever had a buffer overflow exploit:
http://www.garlic.com/~lynn/2002l.html#42 Thirty Years Later: Lessons from the Multics Security Evaluation
http://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation

Somebody (else) commented in the thread that anybody who currently (still)
writes code resulting in a buffer overflow exploit should maybe be thrown
in jail.



--
Anne & Lynn Wheeler    http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
