On 01/30/2014 12:21 AM, Moxie Marlinspike wrote:
> My intuition is that we just shouldn't be showing the user a fingerprint
> at all if even remotely possible (TOFU). If it's necessary to display a
> real fingerprint at some point, the user isn't going to have any idea
> what's going on, so it probably doesn't matter whether it's a set of
> gibberish words, a hex string, or b32 character string.
While i'm not sure TOFU is the only answer, i think i agree with Moxie that asking users to cope with a long string of incomprehensible, high-entropy gibberish is in general a bad idea.

I lean toward the idea of mechanized fingerprint transmission via physical channels that humans can intuitively inspect.

For humans without visual impairment and with modern computing machinery, i really like QR codes for this use (a rough sketch of the flow is below). It's easy to tell whether or not there is an MITM during QR code scanning, they're easy enough to generate, cheap to print, simple to display on most computer displays, and recoverable with all but the lowest-quality webcams. Of course, you can't transmit a QR code over the phone or on a bar napkin, or commit it to memory. But i'd argue that most people can't reliably do any of those things for cryptographically-strong fingerprints in the first place anyway, regardless of encoding.

For humans with visual impairment and modern computing machinery, Brian Warner mentioned the idea of acoustic coupling of two devices -- one of them would hum or beep or squawk (all in the range of normal human hearing) to transmit a fingerprint, and another machine could listen and decode the highest-strength signal.

But if we set aside mechanical transmission mechanisms like QR codes or acoustic coupling, I think the questions for any such scheme are:

 0) how many high-entropy bits of information can the scheme encode?

 1) how complicated is it for humans to compare two of these representations and determine whether they are exactly identical? (or, conversely, how easy is it to craft a value that is sufficiently close to appear as a "collision" to a significant fraction of users)

 2) how difficult is it for humans to transcribe the representation precisely into their communications equipment when it is in front of them?

 3) how well does it work in other human-to-human transmission vectors (e.g. over the phone)?

For computational strength (to be well away from the range of possible preimage attacks) we probably want the identifiers exchanged to be more than 112 bits, probably significantly more than 112 bits (some back-of-the-envelope numbers are below). I'm assuming here that for cryptographic strength we only care about preimage or second-preimage attacks against the fingerprint -- if the bitstring being exchanged is actually the user's whole public key in a compact cryptosystem like ECC (instead of just a fingerprint), then we'd want to double the length required.

In practice, if the string can't contain enough bits, it's probably actively harmful to user security (consider the 32-bit OpenPGP "short keyIDs", which are trivially spoofable, but which many users seem to think can be used as strong identifiers).

Unfortunately, i don't think you can expect most humans to accurately transcribe or compare more than a few dozen bits of any high-entropy string, no matter how cleverly you present it. Consider the difficulty people have with strong passwords as an example.

I'm also unconvinced by schemes like OpenSSH's "randomart" (aka "VisualHostKey"), which produces things like:

The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|       . .       |
|        =        |
|     . + .       |
|    . . S        |
|     . o .       |
|    E . + o      |
|     . *.oB      |
|     .=.*Bo+.    |
+-----------------+

I don't think this image would be useful at all to most users trying to differentiate it from another randomart that was fuzzily-generated to roughly match its contour.
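Just to make the QR idea above concrete, here's a rough sketch of what the flow could look like, in Python, assuming the third-party "qrcode" package for generating the image. Scanning/decoding on the other device (e.g. via a phone camera) is left out, and the function names here are purely illustrative, not from any existing tool:

import hashlib
import qrcode   # third-party package for QR image generation

def fingerprint(public_key_bytes):
    # full SHA-256 of the key material, hex-encoded (256 bits)
    return hashlib.sha256(public_key_bytes).hexdigest()

def show_my_fingerprint(my_public_key_bytes, path="my_fpr.png"):
    # each party displays a QR code of their own key's fingerprint
    # for the other party's device to scan
    qrcode.make(fingerprint(my_public_key_bytes)).save(path)

def check_scanned_fingerprint(scanned_text, peer_public_key_bytes):
    # the scanning device compares what it decoded from the QR code
    # against the fingerprint of the key it actually received over
    # the network; an MITM shows up as a mismatch
    return scanned_text == fingerprint(peer_public_key_bytes)

The comparison is done by the machines; the human just has to point the camera at the right person's screen.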
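And some back-of-the-envelope numbers for question 0) and the 112-bit floor above -- the 8192-entry wordlist here is a made-up example (13 bits per word), not a reference to any particular scheme:

import math

def symbols_needed(bits, bits_per_symbol):
    # how many symbols of a given per-symbol entropy it takes to carry "bits"
    return math.ceil(bits / bits_per_symbol)

for bits in (32, 112, 128, 256):
    print("%3d bits: %2d hex chars, %2d base32 chars, %2d words" % (
        bits,
        symbols_needed(bits, 4),                # 4 bits per hex digit
        symbols_needed(bits, 5),                # 5 bits per base32 character
        symbols_needed(bits, math.log2(8192)),  # 13 bits per word
    ))

Even at the 112-bit floor that's 23 base32 characters or 9 words from such a list for a human to compare or transcribe, which is where questions 1) and 2) start to bite. A 32-bit short keyID is only 8 hex characters, but a key matching it can be found by brute force in roughly 2^32 attempts.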
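For reference, the randomart above comes from OpenSSH's "drunken bishop" walk: the fingerprint's bit pairs steer a piece diagonally around a 17x9 board, and each cell's visit count picks a glyph. Roughly like this (a sketch from memory, not OpenSSH's actual code):

import os

FIELD_W, FIELD_H = 17, 9
SYMBOLS = " .o+=*BOX@%&#/^"   # visit counts index into these glyphs

def drunken_bishop(fingerprint_bytes):
    field = [[0] * FIELD_W for _ in range(FIELD_H)]
    x, y = FIELD_W // 2, FIELD_H // 2          # start in the centre
    start = (x, y)
    for byte in fingerprint_bytes:
        for _ in range(4):                     # four 2-bit moves per byte, LSB first
            x = min(max(x + (1 if byte & 0x1 else -1), 0), FIELD_W - 1)
            y = min(max(y + (1 if byte & 0x2 else -1), 0), FIELD_H - 1)
            field[y][x] += 1
            byte >>= 2
    rows = ["+--[ sketch ]-----+"]
    for ry in range(FIELD_H):
        line = ""
        for rx in range(FIELD_W):
            if (rx, ry) == start:
                line += "S"                    # starting square
            elif (rx, ry) == (x, y):
                line += "E"                    # ending square
            else:
                line += SYMBOLS[min(field[ry][rx], len(SYMBOLS) - 1)]
        rows.append("|" + line + "|")
    rows.append("+" + "-" * FIELD_W + "+")
    return "\n".join(rows)

print(drunken_bishop(os.urandom(16)))          # demo with 16 random bytes

The fuzzy-matching worry is that an attacker only needs a key whose walk produces a visually similar blob, not an identical one.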
Randomart is also terrible for napkins and the telephone :P

This is often cited as being inspired by this paper:
http://users.ece.cmu.edu/~adrian/projects/validation/validation.pdf

and has had a little bit of analysis here:
http://www.dirk-loss.de/sshvis/drunken_bishop.pdf

For matching ssh-style fingerprints directly, see "Fuzzy Fingerprints: Attacking Vulnerabilities in the Human Brain":
https://www.thc.org/papers/ffp.html

None of these papers seems particularly sophisticated to me, but i guess it's possible that there just hasn't been much sophisticated public work in trying to attack any of these schemes.

--dkg
