At 10:16 AM -0800 2/7/02, Barry Caplan wrote:

>This is precisely the problem digital signing is meant to solve. 
>Signing means that Alice has encrypted the message with her private 
>key before sending to Bob. Bob then unencrypts the message using 
>Alice's public key. If the message does not unencrypt, then Bob 
>should not trust that the message is from Alice. This algorithm 
>works independent of transport mechanism (email, etc.), or domains. 
>Alice's key stays with Alice, not with the domain. Of course, how you 
>exchange trusted keys in the first place is another matter, but I am 
>sure this is all covered on a security FAQ somewhere.

That's very nice in theory, but it's not the way people use e-mail in 
practice and it's not going to be. Microsoft, a company with a very 
technically literate employee base, might be able to implement this 
scheme (though I doubt it). A company like Exxon never could. The 
system's just too cumbersome.
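For concreteness, the sign/verify mechanics Barry describes can be sketched with the third-party Python `cryptography` package (modern APIs sign rather than literally "encrypt with the private key", but the trust property is the one he states; the message text and key handling here are illustrative assumptions, not anyone's actual setup):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Alice generates a key pair and signs her message with the private half.
alice_private = Ed25519PrivateKey.generate()
message = b"Meet me at noon. -- Alice"
signature = alice_private.sign(message)

# Bob verifies with Alice's public key, which he must obtain out of band.
alice_public = alice_private.public_key()
try:
    alice_public.verify(signature, message)
    print("signature valid: message came from the holder of Alice's key")
except InvalidSignature:
    print("do not trust this message")
```

Note that nothing in this sketch tells Bob that the public key really belongs to Alice; that out-of-band key exchange is exactly the part that makes the scheme cumbersome in practice.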

>>E-mail forgery has been a problem for a long time, but it's always 
>>been one-way. You couldn't trick somebody into sending you a reply 
>>because doing so required using a different e-mail address than the 
>>one they expected, thus revealing the message as forged.
>
>There are many many ways to get a response from someone via email, 
>even if the address is not recognized or forged. Most involve social 
>engineering approaches more than anything else. My mailbox filled 
>with spam will attest to that!

Yes, but that doesn't address the fact that this makes the problem far worse.

>>With a Unicode enabled mailer, that's no longer true. If the fonts 
>>Bob (not me, but Bob) chooses for his e-mail program do not make a 
>>clear distinction between an o and an omicron, this works. There 
>>are lots of other attacks. The Cyrillic and Greek alphabets provide 
>>lots of options for replacing single letters in Latin domain names.
>
>
>Unless all messages are signed (technically feasible), then there 
>is no trust at all. When Outlook/Exchange supports, in fact 
>requires, messages to be signed, then this problem will start to 
>dwindle away, at least in the email realm.
>

Would that it were so, but it's not. As you suggest, people do trust 
e-mail even when they shouldn't. Trust is a human question decided by 
human beings, not a boolean answer that comes out of a computer 
algorithm. I can trust that the message I'm replying to came from a 
person named "Barry Caplan" even if I have no proof of that 
whatsoever.
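The o/omicron substitution mentioned above is trivial to demonstrate: two addresses that render identically in many fonts are nonetheless different strings (the domain name here is only an example):

```python
# Two addresses that look the same in many fonts but are different strings.
latin = "root@microsoft.com"
spoofed = "root@micr\u03bfsoft.com"  # U+03BF GREEK SMALL LETTER OMICRON replaces Latin "o"

print(latin == spoofed)            # -> False
print(latin.encode("utf-8"))       # the Latin "o" is one byte
print(spoofed.encode("utf-8"))     # the omicron is a two-byte UTF-8 sequence
```

A mail client that displays both identically, and happily sends a reply to the second one, gives the reader no visible cue that anything is wrong.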

>Of course if there is a method to judge the level of trust for 
>properly signed messages that arrive from folks you don't know (a 
>human failability), then knowing the origin of the message might not 
>help much either. My inbound spam can be verifiably signed, but it 
>is still spam.
>
>>In other words, it's not our fault. Blame the client software. 
>>Sounds distressingly like the Unicode Consortium's approach to 
>>these issues. Interestingly, my attack works with a single 
>>character representation (Unicode).
>
>
>Your attack is only a social engineering attack, not a technical 
>weakness inherent in any protocol, or character set (even though 
>there may be such issues)
>

Technical systems can be more or less resistant to social engineering 
attacks. It is the task of the system designers to make the system 
more resistant. I'm reminded of an IBM mainframe system about a 
decade ago where it was possible to change your password by appending 
a slash and the new password to the old password when logging in. Few 
users knew this but hackers did. It wasn't very hard to convince a 
user on the phone that they needed to set their account to debugging 
mode by logging in and appending /DEBUG to the password. This had the 
effect of changing their account password to DEBUG, which you knew and 
they didn't. (It's been a while, but I vaguely recall that this was 
the hack Phiber Optik used to break into the New York City Public 
School System computers.)

This particular system was poorly designed and thus vulnerable to a 
social engineering attack. But make no mistake: it was very much a 
design flaw in the system. It was not the user's fault for not 
knowing about an obscure option to change their password at the login 
prompt. It should not have been there in the first place, and once 
discovered it needed to be taken out. That there were other social 
engineering attacks on the system didn't change the need to fix this 
problem.

Design choices have security consequences. It is not enough to claim 
that your system is secure when used properly or when implemented 
properly. The system must be designed in such a way that it is 
natural to use it properly and it is easy to implement properly. 
Furthermore, failure to do so should be obvious. When a system is 
being used incorrectly, the problem needs to be brutally obvious. In 
Unicode, it is not.

-- 

+-----------------------+------------------------+-------------------+
| Elliotte Rusty Harold | [EMAIL PROTECTED] | Writer/Programmer |
+-----------------------+------------------------+-------------------+
|          The XML Bible, 2nd Edition (Hungry Minds, 2001)           |
|              http://www.ibiblio.org/xml/books/bible2/              |
|   http://www.amazon.com/exec/obidos/ISBN=0764547607/cafeaulaitA/   |
+----------------------------------+---------------------------------+
|  Read Cafe au Lait for Java News:  http://www.cafeaulait.org/      |
|  Read Cafe con Leche for XML News: http://www.ibiblio.org/xml/     |
+----------------------------------+---------------------------------+
