Re: Solving password problems one at a time, Re: The password-reset paradox
silky wrote:
> On Sun, Feb 22, 2009 at 6:33 AM, Ed Gerck edge...@nma.com wrote:
>> (UI in use since 2000, for web access control and authorization) After you enter a usercode in the first screen, you are presented with a second screen to enter your password. The usercode is a mnemonic 6-character code such as HB75RC (randomly generated; you receive it from the server upon registration). Your password is freely chosen by you upon registration. That second screen also has something that you and the correct server know but that you did not disclose in the first screen -- we can use a simple three-letter combination, ABC for example. You use this to visually authenticate the server above the SSL layer. A rogue server would not know this combination, which allays spoofing considerations -- if you do not see the correct three-letter combination, do not enter your password.
>
> Well, this is an old plan and useless, because any rogue server can just submit the 'usercode' to the real server and get the three letters. Common implementations of this use pictures (cats, dogs, family, user-uploaded, whatever).

Thanks for the comment. The BofA SiteKey attack you mention does not work for the web access scheme I mentioned, because the usercode is private and random, with a very large search space, and is always sent after SSL starts (hence, it remains private). The attacker has a /negligible/ probability of success in our case, contrary to a case where the user sends the email address to get the three letters -- which is trivial to bypass.

>> http://nma.com/papers/zsentryid-web.pdf (UI in use since 2008, TLS SMTP, aka SMTPS, authentication). The SMTP Username is your email address, while the SMTP Password is obtained by the user writing the usercode and the password in sequence. With TLS SMTP, encryption is on from the start (implicit SSL), so that neither the Username nor the Password is ever sent in the clear.
>
> I have no idea what you're referring to here. It doesn't seem to make sense in the context of the rest of your email. Are you saying your system is useless given SSL? (Aside from the fact that it's useless anyway ...)

I'm referring to SMTP authentication with implicit SSL. The same usercode|password combination is used here as well, but the usercode is prepended to the password, while the username is the email address. In this case, there is no anti-phishing needed.

>> (UI 2008 version, web access control) Same as the TLS SMTP case, where a three-letter combination is provided for user anti-spoofing verification after the username (email address) is entered. In trust terms, the user does not trust the server with anything but the email address (which is public information) until the server has shown that it can be trusted (to that extent) by replying with the expected three-letter combination.
>
> Wrong again, see above. This case has the same BofA SiteKey vulnerability.

However, if that is bothersome, the scheme can also send a timed nonce to a cell phone, which is unknown to the attacker. This is explained elsewhere in http://nma.com/papers/zsentryid-web.pdf (there are different solutions for different threat models).

>> In all cases, because the usercode is not controlled by the user and is random, it adds a known and independently generated amount of entropy to the Password.
>
> Disregarding all of the above, consider that it may not be random, and given that you can generate them on signup, there is the potential to know or learn the RNG a given site is using.

If the threat model is that you can learn or know the RNG a given site is using, then the answer is to use a hardware RNG. With a six-character usercode (to stay within the mnemonic range), usability considerations (no letter case, no symbols, overloading 0 with O and 1 with I, for example) will reduce the entropy that can be added to (say) 35 bits. Considering that the average poor, short password chosen by users has between 20 and 40 bits of entropy, the end result is expected to have from 55 to 75 bits of entropy, which is quite strong.

> Doesn't really matter, given it prevents nothing. Sites may as well just ask for two passwords.

The point is that two passwords would still not have an entropy value that you can trust, as it would all depend on user input. This can be made larger by, for example, refusing to accept passwords that are less than 8 characters long, and by adding more characters to the usercode alphabet and/or the usercode itself (a 7-character code can still be mnemonic and human-friendly). The fourth problem, and the last important password problem that would still remain, is the vulnerability of password lists themselves, which could be downloaded and cracked given enough time, outside the access protections of online login (three strikes and you're out). This is also solved in our scheme by using implicit passwords from a digital certificate calculation. There are no username and password lists to
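The entropy arithmetic in this exchange can be sketched in a few lines of Python. This is an illustration only, not the scheme's actual generator; the 34-symbol confusion-free alphabet below (case-insensitive letters and digits, with O folded into 0 and I into 1) is an assumption. It yields roughly 30.5 bits for a 6-character usercode and 35.6 bits for a 7-character one, the ballpark the thread discusses.

```python
import math
import secrets

# Hypothetical confusion-free alphabet: 24 letters (no I, no O) + 10 digits.
ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ0123456789"

def usercode(length=6):
    """Randomly generated mnemonic-range usercode, as in the scheme above."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

bits_per_char = math.log2(len(ALPHABET))  # ~5.09 bits per character
print(usercode(6), round(6 * bits_per_char, 1))  # 6 chars -> ~30.5 bits
print(usercode(7), round(7 * bits_per_char, 1))  # 7 chars -> ~35.6 bits
```

Because the code is generated server-side with a CSPRNG (`secrets`), its entropy is independent of the user's password choice, which is the point being argued above.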
Re: Solving password problems one at a time, Re: The password-reset paradox
James A. Donald wrote:
> No one is going to check for the correct three letter combination, because it is not part of the work flow, so they will always forget to do it.

Humans tend to notice patterns. We easily notice mispelngs. Your experience may be different, but we found out in testing that three letters can be made large enough to become a visually noticeable pattern. Reversing the point, the fact that a user can ignore the three letters is useful if the user forgets them. The last thing users want is one more hassle. The idea is to give users a way to allay spoofing concerns, if they so want and are motivated to, or learn to be motivated. Mark Twain's cat was afraid of the cold stove.

Cheers, Ed Gerck

- The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com
Re: Solving password problems one at a time, Re: The password-reset paradox
On Tue, Feb 24, 2009 at 8:30 AM, Ed Gerck edge...@nma.com wrote:
> [snip] Thanks for the comment. The BofA SiteKey attack you mention does not work for the web access scheme I mentioned because the usercode is private and random with a very large search space, and is always sent after SSL starts (hence, remains private).

This is meaningless. What attack is the 'usercode' trying to prevent? You said it's trying to authorise the site to the user. It doesn't do this, because a 3rd-party site can take the usercode and send it to the 'real' site.

> [snip] I'm referring to SMTP authentication with implicit SSL. The same usercode|password combination is used here as well, but the usercode is prepended to the password while the username is the email address. In this case, there is no anti-phishing needed.

Eh? This still doesn't make any particular amount of sense.

> [snip] This case has the same BofA SiteKey vulnerability. However, if that is bothersome, the scheme can also send a timed nonce to a cell phone, which is unknown to the attacker. This is explained elsewhere in http://nma.com/papers/zsentryid-web.pdf

Anything you do can be simulated by an evil site. Sending a key to a phone is a good idea, but still, in the end, useless, because the evil site can simulate it by passing along whatever request the user made to that site.

> [snip] If the threat model is that you can learn or know the RNG a given site is using then the answer is to use a hardware RNG.

No, it isn't.

> The point is that two passwords would still not have an entropy value that you can trust, as it all would depend on user input.

*shrug* Make one of them autogenerated. Doesn't matter. You're just adding complexity for no real benefit.

> That data is just a key that is the same for /all/ users. It is not user-specific. Its knowledge does not provide information to attack any account.

Well, I'm sorry, but you don't understand your own system then. Obviously it must have information to 'attack' a given account, because you used it to generate something. The function you used did something, so you can repeat it if you have all the inputs.

> Sorry if it wasn't clear. Please have a second reading.

Indeed.

> Cheers, Ed Gerck

-- noon silky http://www.boxofgoodfeelings.com/
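The relay objection silky raises can be made concrete with a toy sketch. Everything here is hypothetical and for illustration only: the usercode, the three-letter value, and the lookup table are invented, and real sites would of course speak HTTPS rather than call a function.

```python
# Hypothetical server-side table: usercode -> three-letter combination.
REAL_SITE_SECRETS = {"HB75RC": "ABC"}

def real_site_lookup(usercode):
    """What the genuine server shows after receiving a usercode."""
    return REAL_SITE_SECRETS.get(usercode)

def phishing_site(usercode_typed_by_victim):
    """The rogue site needs no prior knowledge: it simply relays the
    usercode the victim typed to the real site and echoes the reply."""
    return real_site_lookup(usercode_typed_by_victim)

# The victim sees the expected combination on the fake site and proceeds
# to type the password there.
print(phishing_site("HB75RC"))  # -> ABC
```

The point of the sketch is that the shared secret's entropy is irrelevant once the victim supplies the lookup key to the attacker in real time.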
Re: Solving password problems one at a time, Re: The password-reset paradox
silky wrote:
> On Tue, Feb 24, 2009 at 8:30 AM, Ed Gerck edge...@nma.com wrote:
>> [snip] Thanks for the comment. The BofA SiteKey attack you mention does not work for the web access scheme I mentioned because the usercode is private and random with a very large search space, and is always sent after SSL starts (hence, remains private).
>
> This is meaningless. What attack is the 'usercode' trying to prevent? You said it's trying to authorise the site to the user. It doesn't do this, because a 3rd-party site can take the usercode and send it to the 'real' site.

What usercode? The point you are missing is that there are 2^35 private usercodes and you have no idea which one matches the email address that you want to send your phishing email to. The other points, including the TLS SMTP login I mentioned, might be clearer with an example. I'll be happy to provide you with a test account.

Cheers, Ed Gerck
Re: Solving password problems one at a time, Re: The password-reset paradox
On Tue, Feb 24, 2009 at 12:23 PM, Ed Gerck edge...@nma.com wrote:
> [snip] What usercode? The point you are missing is that there are 2^35 private usercodes and you have no idea which one matches the email address that you want to send your phishing email to.

What you're missing is that it doesn't matter. The user enters the usercode! So they enter it into the phishing site, which passes the call along.

-- noon silky http://www.boxofgoodfeelings.com/
Re: SHA-3 Round 1: Buffer Overflows
Aloha!

Ian G wrote:
> However I think it is not really efficient at this stage to insist on secure programming for submission implementations. For the simple reason that there are 42 submissions, and 41 of those will be thrown away, more or less. There isn't much point in making the 41 secure; better off to save the energy until the one is found. Then concentrate the energy, no?

I would like to humbly disagree. In the case of MD6, the fix meant that a buffer had to be doubled in size (according to the Fortify blog). This means that the memory footprint, and thus its applicability to embedded platforms, was (somewhat) affected. That is, secure implementations might have different requirements than what might have been stated, and we want to select an algorithm based on the requirements for a secure implementation, right?

-- Med vänlig hälsning, Yours Joachim Strömbergson - Alltid i harmonisk svängning. Kryptoblog - IT-säkerhet på svenska http://www.strombergson.com/kryptoblog
Re: Crypto Craft Knowledge
On Tue, 17 Feb 2009, James Hughes wrote:
> I find this conversation off the point. Consider other trades like woodworking. There is no FAQ that can be created that would be applicable to building a picture frame, dining room table or a covered bridge. A FAQ for creating a picture frame would be possible, but this is not the FAQ that is being discussed.

You're thinking at the wrong level. There are definitely FAQs that are applicable to building a picture frame, a dining room table, or a covered bridge. Woodworking FAQs can (and do) exist to teach basic skills, like sawing and measuring wood, different ways to join bits of wood together, and what types of joint are most appropriate for what type of task. Further, there are discussions about things like load and stress, and what designs and materials are best suited to what applications. The same applies to implementing crypto -- teach the building blocks, issues, and judgement points required for good understanding and implementation. Ultimately it doesn't matter whether they're building a picture frame, a dining room table, or a covered bridge, as you've put it -- the skills, materials, and judgement of what to use where are what matter.

cheers!

== A cat spends her life conflicted between a deep, passionate and profound desire for fish and an equally deep, passionate and profound desire to avoid getting wet. This is the defining metaphor of my life right now.
Re: Crypto Craft Knowledge
On Sat, 21 Feb 2009, Peter Gutmann wrote:
> This points out an awkward problem though, that if you're a commercial vendor and you have a customer who wants to do something stupid, you can't afford not to allow this. While my usual response to requests to do things insecurely is "If you want to shoot yourself in the foot then use CryptoAPI", I can only do this because I care more about security than money. For any commercial vendor who has to put the money first, this isn't an option.

That's not entirely true -- even commercial vendors have things like ongoing support to consider, and some customers just cost more money than they're worth.

cheers!

== A cat spends her life conflicted between a deep, passionate and profound desire for fish and an equally deep, passionate and profound desire to avoid getting wet. This is the defining metaphor of my life right now.
Re: SHA-3 Round 1: Buffer Overflows
On Feb 24, 2009, at 6:22 AM, Joachim Strömbergson wrote:
> Aloha!
>
> Ian G wrote:
>> However I think it is not really efficient at this stage to insist on secure programming for submission implementations. For the simple reason that there are 42 submissions, and 41 of those will be thrown away, more or less. There isn't much point in making the 41 secure; better off to save the energy until the one is found. Then concentrate the energy, no?
>
> I would like to humbly disagree. In the case of MD6, the fix meant that a buffer had to be doubled in size (according to the Fortify blog). This means that the memory footprint, and thus its applicability to embedded platforms, was (somewhat) affected. That is, secure implementations might have different requirements than what might have been stated, and we want to select an algorithm based on the requirements for a secure implementation, right?

Two aspects of this conversation:

1) This algorithm is designed to be parallelized. This is a significant feat. C is a language where parallelization is possible, but fraught with peril. We have to look past the buffer overflow to the motivation for the complexity.

2) This algorithm -can- be implemented with a small footprint -if- parallelization is not intended. If this algorithm could not be minimized, then this would be a significant issue, but this is not the case.

I would love to see this algorithm implemented in an implicitly parallel language like Fortress. http://projectfortress.sun.com/Projects/Community

Jim
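The parallelism being discussed comes from MD6's tree structure. The sketch below is not MD6; it is a generic Merkle-style tree hash built from SHA-256, with the independent leaf stage farmed out to a thread pool to show where the parallelism lives. The chunk size and the "leaf"/"node" domain-separation tags are arbitrary assumptions for illustration.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 64  # bytes per leaf; arbitrary toy parameter

def leaf_hash(block):
    # Domain-separate leaves from internal nodes.
    return hashlib.sha256(b"leaf" + block).digest()

def tree_hash(data):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] or [b""]
    # Leaves are independent of one another, so they can be hashed in parallel.
    with ThreadPoolExecutor() as pool:
        level = list(pool.map(leaf_hash, chunks))
    # Combine pairwise until a single root digest remains.
    while len(level) > 1:
        if len(level) % 2:
            level.append(b"")  # pad an odd level
        level = [hashlib.sha256(b"node" + level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

print(tree_hash(b"x" * 1000).hex())
```

The same function also shows Jim's second point: run sequentially (one leaf at a time, no pool), it needs only a handful of digests of working memory.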
peer review of presentation requested
Hello all,

I'm working on a presentation about cryptography to give to the Open Web Application Security Project (OWASP). The reason I'm giving it is that I've seen web developers doing crypto a lot lately, and they seem to be making some naive mistakes, like using ECB mode for multi-block structures, using encryption when they should be using MACs, and that kind of stuff.

I had originally intended to make the entire presentation about web security failures, but found it time-consuming to locate information about web-specific vulnerabilities... they just aren't documented well, because they're usually in the application layer for a single company, and so generally not shared widely. So I've thrown in some non-web examples of application developers trying to invent their own crypto and getting it wrong (LANMAN hashes, for example).

Anyway, I'd like some cryptographers to review my presentation to make sure that I am giving solid advice. http://www.subspacefield.org/security/web_20_crypto.pdf

In addition, I'm curious about:

1) Which hashes are currently vulnerable to length-extension attacks. If I recall Bruce Schneier's book Practical Cryptography correctly, he stated that even SHA-1 was vulnerable. Do any hashes in the SHA-2 family have protection against length extension?

2) Is it sufficient to have a one-way finalization function in your Merkle-Damgaard hash construction to prevent length-extension attacks?

-- Obama Nation | It's not like I'm encrypting... it's more like I've developed a massive entropy deficiency | http://www.subsubpacefield.org/~travis/ If you are a spammer, please email j...@subspacefield.org to get blacklisted.
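On the length-extension question: MD5, SHA-1, and the plain SHA-2 hashes are all Merkle-Damgaard designs whose output is the internal chaining state, so a MAC built as H(key || message) can be extended without knowing the key (truncated variants such as SHA-384, which withhold part of the state, resist this). The standard fix is HMAC, which wraps the hash in a second keyed invocation. A minimal contrast, with hypothetical key and message values:

```python
import hashlib
import hmac

key = b"secret-key"           # hypothetical values, for illustration
msg = b"amount=100&to=alice"

# Naive MAC: the SHA-256 digest of key||msg IS the hash's internal state
# after processing that input, so an attacker can resume hashing from it
# and forge a valid MAC for msg + padding + extension without the key.
naive_mac = hashlib.sha256(key + msg).hexdigest()

# HMAC runs the hash twice with derived keys; the outer invocation means
# the published digest is not a resumable internal state.
safe_mac = hmac.new(key, msg, hashlib.sha256).hexdigest()

print(naive_mac)
print(safe_mac)
```

The practical advice for the OWASP audience follows directly: never hand-roll H(key || message); use the hmac module (or equivalent) with whatever hash is at hand.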
Re: peer review of presentation requested
Travis travis+ml-cryptogra...@subspacefield.org writes:
> I'm working on a presentation about cryptography to give to the Open Web Application Security Project (OWASP). [...] In addition, I'm curious about: Which hashes are currently vulnerable to length-extension attacks. If I recall Bruce Schneier's book Practical Cryptography correctly, he stated that even SHA-1 was vulnerable. Do any hashes in the SHA-2 family have protection against length extension? [...]

If you expect to be presenting things at that level of detail to developers, you're going to lose. There is no chance that they'll absorb the details, and they'll get entirely the wrong message, which is that they should be making decisions on such things instead of just using prepackaged protocols. The single most important lesson in cryptography is that cryptography and cryptographic protocols are insanely hard to get right. The average PHP hacker is not in a position to spend enough time to learn the field well enough -- he's busy getting his application working. Much better to avoid the entire issue.

Perry
Re: Security through kittens, was Solving password problems
> you enter a usercode in the first screen, you are presented with a second screen to enter your password. The usercode is a mnemonic 6-character code such as HB75RC (randomly generated, you receive from the server upon registration). Your password is freely chosen by you upon registration. That second screen also has something that you and the correct server know but that you did not disclose in the first screen --

This scheme is quite popular with banks. I have at least three accounts where I enter my user name on one screen, then on a second password-entry screen it shows me a picture chosen when I set up the account, along with a caption I wrote. They have a large library of pictures of cute animals, household appliances, and so forth.

Clever though this scheme is, man-in-the-middle attacks make it no better than a plain SSL login screen. Since the bad guy knows what site you're trying to reach, he can use your usercode to fetch the shared secret from the real site and present it to you on his fake site. It's true, the fake site won't have the same URL as the real site, but if the security of this scheme still depends on people scrutinizing the browser's address bar to be sure they're visiting the site they think they are, how is this any better than an ordinary kitten-free SSL login screen?

Another bank sent me a dongle that generates a timestamped six-digit number that I use as part of the login. Even with the dongle, MITM attacks are still effective. The bad guy can only steal one session rather than a user's permanent credentials, but that's still plenty to, e.g., wire money out of the country.
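A dongle of the timestamped six-digit sort typically works along the lines of HOTP (RFC 4226) keyed by a time counter: device and bank share a secret, and both derive the same short code from the current 30-second window. A minimal sketch follows (the secret is the RFC's test value); as noted above, none of this stops a real-time MITM, who simply relays the current code.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, t=None, step=30):
    """Six-digit code: HOTP (RFC 4226) over a 30-second time counter."""
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                         # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return "%06d" % (code % 1000000)

# Bank and dongle share `secret`; both compute the same code for the current
# window, so the server can verify what the user types in -- and so can a
# phishing site that relays it within the same window.
print(totp(b"12345678901234567890", t=59))  # counter 1 -> 287082
```

The code is only valid for one window and one login, which is exactly why the damage is limited to a single stolen session rather than permanent credentials.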
The only thing I've been able to come up with that seems even somewhat secure is a USB dongle that plugs into your computer and can set up an end-to-end encrypted channel with the bank, and that has a screen big enough that, once you've set up your transaction in your browser, the bank can send a description to the dongle to display on its screen, with YES and NO buttons on the dongle itself. Unless the screen and the buttons are physically part of the dongle, you're still subject to MITM attacks. But a dongle with a screen big enough for my 87-year-old father to read, and buttons big enough for him to push reliably, would be unlikely to fit on his keychain. It's a very hard problem.

Regards, John Levine, jo...@iecc.com, Primary Perpetrator of The Internet for Dummies, Information Superhighwayman wanna-be, http://www.johnlevine.com, ex-Mayor "More Wiener schnitzel, please," said Tom, revealingly.