> in a world where there are repeated human mistakes/failures ....
> at some point it is recognized that people aren't perfect and the design
> is changed to accommodate peoples foibles. in some respects that is what
> helmets, seat belts, and air bags have been about.
The problem here is that we are blaming the protective device for failing to
protect against a deliberate attack that bypasses it rather than challenging
it - an attack that exploits the user's gullibility, or their tendency to
take the path of least resistance.

The real weakness in HTTPS is the tendency of certificates signed by Big Name
CAs to be automagically trusted - even for sites you have never visited
before. Yes, you can fix this almost immediately by untrusting the root
certificate - but then you have to manually verify each and every site at
least once, and possibly every time, if you don't mark the cert as "trusted"
for future reference.

To blame HTTPS for an attack where the user fills in a web form received via
HTML-rendering email (no HTTPS involved at all) is more than a little unfair,
though.

> in the past systems have designed long, complicated passwords that are
> hard to remember and must be changed every month. that almost worked when
> a person had to deal with a single shared-secret.
> when it became a fact of life that a person might have tens of such
> different interfaces it became impossible. It wasn't the fault of any
> specific institution, it was a failure of humans being able to deal with
> large numbers of extremely complex, frequently changing passwords.
> Because of known human foibles, it might be a good idea to start shifting
> from an infrastructure with large numbers of shared-secrets to a
> non-shared-secret paradigm.

I am not aware of one (not that that means much, given that I am a novice in
this field). Even PKI relies on something close to a shared secret - a
*trustworthy* copy of the public key, matching a secret copy of the private
key.
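The "manually verify each site" alternative amounts to key pinning: trust on
first use, then compare the key on every later visit. A minimal sketch in
Python - the names `pin_store`, `fingerprint`, and `check_pin` are
illustrative inventions, not any real TLS library's API, and the public key
is treated as opaque DER bytes:

```python
import hashlib

# Hypothetical trust-on-first-use pin store: instead of automatically
# trusting any Big Name CA signature, record a fingerprint of each
# site's public key on first contact, and refuse to proceed on later
# visits unless the key still matches.
pin_store = {}  # site name -> hex SHA-256 fingerprint of its public key

def fingerprint(public_key_der: bytes) -> str:
    """Hex SHA-256 digest of a DER-encoded public key."""
    return hashlib.sha256(public_key_der).hexdigest()

def check_pin(site: str, public_key_der: bytes) -> str:
    """'pinned' on first contact (user must verify out of band),
    'ok' on a match, 'MISMATCH' if the key has changed."""
    fp = fingerprint(public_key_der)
    if site not in pin_store:
        pin_store[site] = fp  # first visit: verify manually, then pin
        return "pinned"
    return "ok" if pin_store[site] == fp else "MISMATCH"
```

This moves the burden exactly where the text says: one manual verification
per site up front, in exchange for not having to trust a root CA at all.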
In x509, this trustworthiness is established by an Ultimately Trusted CA; in
PGP, by the Web of Trust, in a chain leading back to your own key; in SSH, by
your placing the public key into your home directory manually (presumably
using some other form of authentication to gain access).

In each of these cases, the private key will almost invariably be protected
by a passphrase. At best, you can have a single passphrase (or even a single
private key) to cover all bases - but that just makes that one secret all the
more valuable.

> at a recent cybersecurity conference, somebody made the statement that (of
> the current outsider, internet exploits, approximately 1/3rd are buffer
> overflows, 1/3rd are network traffic containing virus that infects a
> machine because of automatic scripting, and 1/3 are social engineering
> (convince somebody to divulge information). As far as I know, evesdropping
> on network traffic doesn't even show as a blip on the radar screen.

That is pretty much because defence occupies the position of the interior -
attackers will almost invariably attack weak points, not strong ones. It is
easy to log and count the attacks that happen on weak points, but impossible
to count how many attacks *would* have happened on the strong points, had the
protection not been in place that drove attackers on to easier targets. It
makes little sense to try to break one HTTPS connection (even at 40 bits)
when breaking into the server instead yields that same information, plus
hundreds of other connections' worth (until discovered), and possibly
thousands more records inadvisedly stored unprotected in a database.

<snip>

> The types of social engineering attacks then become convincing people to
> insert their hardware token and do really questionable things or mailing
> somebody their existing hardware token along with the valid pin (possibly
> as part of an exchange for replacement).
> The cost/benefit ratio does start to change since there is now much more
> work on the crooks part for the same or less gain. One could also claim
> that such activities are just part of child-proofing the environment
> (even for adults). On the other hand, it could be taken as analogous to
> designing systems to handle observed failure modes (even when the
> failures are human and not hardware or software).
>
> Misc. identify theft and credit card fraud reference:

Which again matches well to the Nigerian analogy. Everyone *knows* that
handing over your bank details is a Bad Thing - yet they still do it.

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]