Re: dual-use digital signature vulnerability
| the issue in the EU FINREAD scenario was that they needed a way to
| distinguish between (random) data that got signed ... that the key owner
| never read and the case where the key owner was actually signing to
| indicate agreement, approval, and/or authorization. They specified a
| FINREAD terminal which supposedly met the requirements that the key owner
| had to have read and to some extent understood and approved as to the
| meaning of the contents of what was being signed.
|
| However, the FINREAD specification didn't define any mechanism that
| provided proof to the relying party that a FINREAD terminal was actually
| used as part of the signature process.

Fascinating. They are trying to re-create the historical, mainly now lost, role of a notary public.

Notaries public came into being at a time when many people were illiterate, and only the wealthy had lawyers. A notary public had two distinct roles:

- To ensure that a person signing a notarized document actually understood what he was signing;
- To witness that the signer - who might well sign with just an X - is the person whose name appears.

The first role is long lost. Notaries public don't look at the material being signed. We lost our trust in a public official explaining contracts - the assumption now is that everyone gets his own lawyer. The second role remained, even with universal literacy. A notary public is supposed to check for some form of good ID - or know the person involved, the traditional means.

A traditional notary public, in modern terms, would be a tamper-resistant device which would take as inputs (a) a piece of text; (b) a means for signing (e.g., a hardware token). It would first present the actual text that is being signed to the party attempting to do the signing, in some unambiguous form (e.g., no invisible fonts - it would provide you with a high degree of assurance that you had actually seen every bit of what you were signing).
The signing party would indicate assent to what was in the text. The notary might, or might not - depending on the means for signing - then authenticate the signer further. The notary would then pass the text to the means for signing, and verify that what came back was the same text, with an alleged signature attached in a form that could not modify the text. (E.g., if the signature were an actual RSA signature of the text, it would have to decrypt it using the signer's public key. But if the signature were a marked, separate signature on a hash, then there is no reason why the notary has to be able to verify anything about the signature.) Finally, the notary would sign the signed message itself.

We tend not to look at protocols like this because we've become very distrustful of any 3rd party. But trusted 3rd parties have always been central to most business transactions, and they can be very difficult to replace effectively or efficiently.

-- Jerry

- The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]
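The device flow Jerry describes can be sketched roughly as follows. This is a hypothetical illustration only: HMAC stands in for the token's and notary's real signature primitives purely to keep the sketch self-contained, and all names are invented rather than taken from any notary or FINREAD specification.

```python
import hashlib
import hmac


def sign(key: bytes, data: bytes) -> bytes:
    # stand-in for a real digital-signature primitive
    return hmac.new(key, data, hashlib.sha256).digest()


def notarize(text: bytes, token_key: bytes, notary_key: bytes):
    # 1. present `text` to the signer unambiguously and obtain assent
    #    (the display/assent step is elided in this sketch)
    # 2. pass the text to the means for signing (the token)
    token_sig = sign(token_key, text)
    # 3. verify that what came back covers exactly the text that was shown
    if not hmac.compare_digest(token_sig, sign(token_key, text)):
        raise ValueError("token signed something other than the shown text")
    # 4. finally, the notary signs the signed message itself
    notary_sig = sign(notary_key, text + token_sig)
    return text, token_sig, notary_sig
```

Note that step 3 matches Jerry's point: depending on the signing means, the notary may only need to confirm the signature is bound to the exact text shown, not verify the signature cryptographically itself.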
Re: dual-use digital signature vulnerability
At 08:25 AM 7/19/2004, Jerrold Leichter wrote:

| A traditional notary public, in modern terms, would be a tamper-resistant
| device which would take as inputs (a) a piece of text; (b) a means for
| signing (e.g., a hardware token). It would first present the actual text
| that is being signed to the party attempting to do the signing, in some
| unambiguous form (e.g., no invisible fonts - it would provide you with a
| high degree of assurance that you had actually seen every bit of what you
| were signing). The signing party would indicate assent to what was in the
| text. The notary might, or might not - depending on the means for signing -

note that some of the online click-thru contracts have been making attempts to address this area; rather than simple i agree/disagree buttons ... they put little checkmarks at places in the scrolled form; you have to at least scroll thru the document and click on one or more checkmarks before doing the i agree button. a digital signature has somewhat higher integrity than simply clicking on the i agree button ... but wouldn't subsume the efforts to demonstrate that a person was required to make some effort to view the document. Of course in various attack scenarios ... simple checkmark clicks could be forged. However, the issue being addressed isn't a forging attack ... it is a person repudiating that they read the TCs before hitting the I agree button.

With the deprecating of the non-repudiation bits in long-ago and far-away manufactured certificates (which have possibly absolutely no relevance to the conditions under which digital signatures are actually performed) there has been some evolution of non-repudiation processes. An issue for the non-repudiation processes is whether or not the person actually paid attention to what they were signing (regardless of the reason).
An issue for relying parties is not only whether or not there was some non-repudiation process in effect, but also whether the relying party has any proof regarding that non-repudiation process. If there is some risk and/or expense associated with repudiation that might occur (regardless of whether or not it is a fraud issue), then a relying party might adjust the factors they use for performing some operation (i.e. they might not care as much about a low-value withdrawal transaction for $20 as about a withdrawal transaction for $1m).

some physical contracts are now adding the requirement that, in addition to signing (the last page), people are also required to initial significant paragraphs at various places in the contract.

-- Anne Lynn Wheeler http://www.garlic.com/~lynn/
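The scroll-and-checkmark pattern described above amounts to a simple gating rule on the "i agree" button. A minimal sketch, with invented names and not drawn from any real click-through implementation:

```python
def can_agree(required_checks: set, clicked: set, scrolled_to_end: bool) -> bool:
    # the "i agree" button is enabled only once the document has been
    # scrolled all the way through and every required in-document
    # checkmark has been clicked
    return scrolled_to_end and required_checks <= clicked
```

The point of the rule is evidentiary rather than cryptographic: it lets the relying party show the signer had to take several distinct actions over the document body, not just one click at the end.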
RE: dual-use digital signature vulnerability
About using a signature key to sign only contents presented in a meaningful way that the user supposedly read, and not random challenges:

The X.509 PoP (proof-of-possession) doesn't help things out, since a public key certificate is given to a user by the CA only after the user has demonstrated to the CA possession of the corresponding private key by signing a challenge. I suspect most implementations use a random challenge. For things to be clean, the challenge would need to be content that is readable, and that is clearly only used for proving possession of the private key in order to obtain the corresponding public key certificate.

X.509 PoP gets even more twisted when you want to certify encryption keys (I don't know what ietf-pkix finally decided upon for this...; the best solution seems to be to encrypt the public key certificate and send that to the user, so the private key is only ever used to decrypt messages...)

--Anton
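One way to make the PoP challenge "clean" in the sense Anton describes is to make it a human-readable, purpose-bound statement rather than random bytes. A hypothetical sketch - the message format and function names are invented, not taken from any PKIX document:

```python
from datetime import datetime, timezone


def pop_challenge(subject: str, ca_name: str) -> bytes:
    # a readable challenge that states its one and only purpose, so a
    # signature over it cannot plausibly be repurposed later as the
    # signer's agreement to some other content
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return (f"Proof-of-possession only: {subject} requests certification "
            f"of this key by {ca_name} at {ts}. "
            f"This signature authorizes nothing else.").encode()
```

The signer can actually read such a challenge before signing, and a relying party later shown the signed blob can see on its face that it was never an agreement to anything.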
Re: dual-use digital signature vulnerability
| note that some of the online click-thru contracts have been making
| attempt to address this area; rather than simple i agree/disagree
| buttons ... they put little checkmarks at places in scrolled form you
| have to at least scroll thru the document and click on one or more
| checkmarks before doing the i agree button. a digital signature has
| somewhat higher integrity than simple clicking on the i agree button ...
| but wouldn't subsume the efforts to demonstrate that a person was
| required to make some effort to view document. Of course in various
| attack scenarios ... simple checkmark clicks could be forged. However,
| the issue being addressed isn't a forging attack ... it is person
| repudiating that they read the TCs before hitting the I agree button.

...which makes for an interesting example of the way in which informal understandings don't necessarily translate well when things are automated.

The law school professor of a friend of mine told a story about going to rent an apartment. The landlord was very surprised to watch him sign it with only a glance - not only was this guy a law professor, but he had done a stint as a Housing Court judge. Aren't you going to read it before signing? No - it's not enforceable anyway. (This is why there have been cases of landlords who refused to rent to lawyers - a refusal that was upheld!) If you are offered a pre-drafted contract on a take-it-or-leave-it basis - the technical name is an adhesion contract, I believe - and you really need whatever is being contracted for, you generally *don't* want to read the thing too closely.

When you buy a house these days, at least some lawyers will have you initial every page of the agreement. Not that there is anything in there you want to read too closely either. (The standard terms for the purchase of a house in Connecticut have you agree not to use or store gasoline on the property.
I pointed out to my lawyer - who had actually been on the committee that last reviewed the standard form - that as written this meant I couldn't drive my car into the garage, or even the driveway. His basic response was: Don't worry about it.)

The black-and-white of a written contract makes things appear much more formal and well-defined than they actually are. The real world rests on many unwritten, even unspoken, assumptions and ways of doing business. It's just the way people are built. When digital technologies only *seem* to match existing mechanisms, all kinds of problems arise.

Despite such sayings as You can't tell a book by its cover, we trust others based on appearances all the time. Twenty years ago, if a company had printed letterhead with a nice logo, you'd trust them to be for real. Every once in a while, a con man could abuse this trust - but it was an expensive undertaking, and most people weren't really likely to ever see such an attack. Today, a letterhead or a nice business card means nothing - even when they are on paper, as opposed to being just bits. It's really, really difficult to come up with formal, mechanized equivalents of these informal, intuitive mechanisms.

-- Jerry
Re: Using crypto against Phishing, Spoofing and Spamming...
In message [EMAIL PROTECTED], Ian Grigg writes:

| Don't be silly. It's not a threat because people generally use SSL.
| Back in the old days, password capture was a very serious threat. It
| went away with SSH. It seems to me quite likely that it would be a
| problem with web browsing in the absence of SSL.
|
| Right... It's easy to claim that it went away because we protected
| against it. Unfortunately, that's just a claim - there is no evidence
| of that. This is why I ask whether there has been any evidence of
| MITMs, and listening attacks. We know for example that there were
| password sniffing attacks back in the old days, by hackers. Hence SSH.
| Costs - Solution. But, there is precious little to suggest that credit
| cards would be sniffed - I've heard one isolated and unconfirmable
| case. And, there are similar levels of MITM evidence - anecdotes and
| some experiences in other fields, as reported here on this list.

I think that Eric is 100% correct here: it doesn't happen because it's a low-probability attack, because most sites do use SSL.

I think that people are forgetting just how serious the password capture attacks were in 1993-94. The eavesdropping machines were on backbones of major ISPs; a *lot* of passwords were captured. Furthermore, the technology has improved -- have you looked at dsniff lately, with the ARP-based active attack capability? And credit cards are much easier to grab -- they're probably sent in one packet, instead of several, and the number is a self-checking string of digits.

It's also worth remembering that an SSL-like solution -- cryptographically protecting the transmission of the credit card number, instead of digitally signing a funds transfer authorization linked to some account -- was more or less the only thing possible at the time. The Internet as a medium of commerce was too new for the banks to have developed something SET-like, and there wasn't an overwhelmingly-dominant client platform at the time for which custom software could be developed.
(Remember that Windows 95 was the first version with an integral TCP/IP stack.) *All* that Netscape could deploy was something that lived in just the browser and Web server.

SET itself failed because the incentives were never there -- consumers didn't perceive any benefit to installing funky software, and merchants weren't given much incentive to encourage it.

--Steve Bellovin, http://www.research.att.com/~smb
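Steve's aside that a card number is "a self-checking string of digits" refers to the Luhn check digit carried in every card number, which is what makes candidate numbers easy to pick out of captured traffic:

```python
def luhn_valid(number: str) -> bool:
    # Luhn algorithm: working from the rightmost digit, double every
    # second digit, subtract 9 from any doubled value above 9, and sum;
    # a valid number sums to a multiple of 10
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

An eavesdropper scanning packet payloads can run this check over every 13-19 digit run and discard almost all false matches, which is part of why sniffed card numbers are a cheap harvest.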
Re: New Attack on Secure Browsing
On 15 Jul 2004, at 9:36 PM, Aram Perez wrote:

| I'm not sure if PGP deliberately set out to confuse naïve users since
| their logo has been the padlock for a while. Many web sites have their
| logo displayed on the address bar (and tab) when you go to their site,
| see http://www.yahoo.com or http://www.google.com. Maybe Jon can answer
| the question.

(Sent from this account, since I am subscribed from here.)

This is a favicon -- a logo icon for the site. Lots of sites use them. PGP has had this on our site for a couple of years now. I vaguely remember there being one in The Dark Days, but I could be misremembering. This is the first bit of confusion I've heard about it.

PGP's logo icon has been a padlock at least since the O'Reilly book used it in January of '95. This is before there even was an SSL. That particular icon is the very same one that was used as the tray icon in some version of PGP or other (we think PGP 7).

We're giving this all due consideration. Would it help if we changed the metal, perhaps from the current four-plane brass to eight-plane steel or even to alpha-channel Jolly Rancher iridescent translucent anodized titanium?

Jon
Re: Using crypto against Phishing, Spoofing and Spamming...
Steve, thanks for addressing the issues with some actual anecdotal evidence. The conclusions still don't hold, IMHO.

Steven M. Bellovin wrote:

| In message [EMAIL PROTECTED], Ian Grigg writes:
|
| | Right... It's easy to claim that it went away because we protected
| | against it. Unfortunately, that's just a claim - there is no
| | evidence of that. This is why I ask whether there has been any
| | evidence of MITMs, and listening attacks. We know for example that
| | there were password sniffing attacks back in the old days, by
| | hackers. Hence SSH. Costs - Solution. But, there is precious little
| | to suggest that credit cards would be sniffed - I've heard one
| | isolated and unconfirmable case. And, there are similar levels of
| | MITM evidence - anecdotes and some experiences in other fields, as
| | reported here on this list.
|
| I think that Eric is 100% correct here: it doesn't happen because it's
| a low-probability attack, because most sites do use SSL.

The trick is to show cause and effect. We know the effect and we know the cause(s). The question is, how are they related? The reason it is important is that we may misapply one cause if the effect results from some other cause.

| I think that people are forgetting just how serious the password
| capture attacks were in 1993-94. The eavesdropping machines were on
| backbones of major ISPs; a *lot* of passwords were captured.

Which led to SSH, presumably, and was pre-credit card days, so can only be used as a prediction of eavesdropping. Question - are we facing a situation today whereby it is easy to eavesdrop from the backbone of a major ISP and capture a lot of traffic? As far as I can see, that's not likely to happen, but it could happen.

Secondly, who were the people doing those attacks? Back in 93-94, I'd postulate they weren't criminal types, but hacker types. That is, they were hackers looking for machines. Those people are still around - defeated by SSH in large measure - and use other techniques now. (Hackers had no liability in those days.
Criminals do have liability, and are more concerned to cover their tracks. This makes active attacks less useful to them. Criminals are getting braver, though.)

Thirdly, why aren't we seeing more reports of this on 802.11b networks? I've seen a few, but in each case, the attack has been to hack into some machine. I've yet to see a case where listeners have scarfed up some free email account passwords, although I suppose that this must happen.

The point of all this is that we need to establish how frequent and risky these things are. Back in the pre-commerce days, a certain amount of FUD was to be expected. Now however, it's been a decade - whether that FUD was warranted then is an issue for the historians, but now we should be able to scientifically make a case that the posture matches the threats. Because it's been a decade (almost).

As far as I can see, there is *some* justification for expecting eavesdropping attacks on credit cards. There is a lot more justification with unprotected non-commerce. And in contrast, there is little justification for expecting active attacks for purposes of theft.

What this leads to is not whether SSL should have been deployed or changed in its current form (it is fruitless to debate that, IMHO, except in order to lay down the facts) but a discussion of certificates. There seems to be some justification for suggesting that SSL (continue to) be deployed in any form - mostly, IMHO, in areas outside commerce, and mostly, in the future, not now. There seems a lot of justification for utilising certs as they enable relationship-protection. There seems quite a bit of justification for utilising CA-signed certs because they permit more advanced relationship protection such as Amir's logo ideas and my branding ideas, and more so every day. What there doesn't appear to be any justification for is the effective or de facto mandating of CA-signed certs.
And there appears to be a quite serious cost involved in that mandating - the loss of protection from the resultant *very* low levels of SSL deployment.

This all hangs on the MITM - hence the question of frequency. It seems to be very low, an extraordinarily desperate attack for a criminal, especially in the light of experience. He does phishing and hacking with ease, but he doesn't like leaving tracks in the infrastructure that point back to him. If the MITM cannot be justified as an ever-present danger, then there is no justification for the de facto mandating of CA-signed certs. Permitting and encouraging self-signed certs would then make deployment of SSL much easier, and thus increase use of SSL - in my view, dramatically - which would lead to much better protection. (Primarily by relationship management on the client side, and also by branding/logo management with the CAs, but that needs to be enabled in code first at the browsers.)

(It has to be said that encouraging anon-diffie-hellman SSL would also lead to dramatically improved levels of SSL
Re: Using crypto against Phishing, Spoofing and Spamming...
At 01:54 PM 7/19/2004, Steven M. Bellovin wrote:

| It's also worth remembering that an SSL-like solution --
| cryptographically protecting the transmission of credit card number,
| instead of digitally signing a funds transfer authorization linked to
| some account -- was more or less the only thing possible at the time.
| The Internet as a medium of commerce was too new for the banks to have
| developed something SET-like, and there wasn't an
| overwhelmingly-dominant client platform at the time for which custom
| software could be developed. (Remember that Windows 95 was the first
| version with an integral TCP/IP stack.) *All* that Netscape could
| deploy was something that lived in just the browser and Web server.
| SET itself failed because the incentives were never there -- consumers
| didn't perceive any benefit to installing funky software, and
| merchants weren't given much incentive to encourage it.

SET couldn't replace the online transaction ... the encryption was effectively there for hiding the credit card while in-flight ... which SSL was already doing ... but SET was doing it at a one- to two-order-of-magnitude increase in complexity and overhead. SET didn't provide any additional countermeasure against the major exploits/vulnerabilities (vis-a-vis SSL) ... even with all that complexity. the transaction was still online ... since there are a bunch of other factors involved in authorization ... like credit limit ... not just whether there is impersonation with lost/stolen numbers.

there was still the enormous payload bloat (certificates and signatures increase the size of a typical 8583 transaction by two orders of magnitude) which prevented true end-to-end security operation. As a result the signature was verified at some internet boundary, then the signature and certificate(s) were stripped off and the traditional 8583 packet forwarded to the consumer/issuing financial institution.
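The boundary behavior Lynn describes - verify, then strip the bulky signature and certificate before forwarding only the bare transaction - can be sketched as follows. This is purely illustrative: HMAC stands in for the real public-key signature scheme, no actual ISO 8583 formats are used, and all names are invented.

```python
import hashlib
import hmac


def boundary_forward(txn: bytes, sig: bytes, cert: bytes, key: bytes) -> bytes:
    # verify the signature at the internet boundary
    # (HMAC is a stand-in for real public-key signature verification)
    expected = hmac.new(key, txn, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sig):
        raise ValueError("signature check failed at boundary")
    # signature and certificate are stripped here; only the traditional
    # 8583-style packet travels on to the issuing institution, so the
    # issuer never sees any of the end-to-end security data
    return txn
```

The sketch makes the structural problem visible: whatever assurance the signature and certificate carried dies at the boundary, and nothing downstream can tell a verified packet from one that merely claims verification (the "signature bit turned on" problem described below).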
Later at some ISO standards meeting, one of the association business people presented numbers on the number of 8583 packets with the signature bit turned on where they positively knew no digital signature was involved.

It wasn't even a real PKI ...

1) the x.509 identity certificates from the early 90s had been deprecated because of the privacy and liability issues ... and the certification authorities were effectively issuing relying-party-only certificates containing just the account number and public key.

2) there was no revocation and/or other such process (which could be considered a minimum requirement for a PKI operation) ... they were simply manufactured certificates (a term we coined to describe the SET and SSL infrastructure, contrasting it to PKI). SET specifically stated that the transaction would be online and rely on the existing online infrastructure for determining lost, stolen, revoked, canceled, etc ... as well as all the other stuff an online infrastructure can do with timely and aggregated information (like credit limit).

3) it is trivial to show that for relying-party-only certificates requiring an online infrastructure ... the certificates themselves are redundant and superfluous ... aka the key is registered with the issuing party ... and the transaction is performed by the issuing party. The transaction can be digitally signed (w/o the enormous payload bloat of carrying a certificate) and the issuing party can verify the digital signature with an on-file public key w/o having to resort to dealing with a certificate (that the issuing party would have originally generated from the on-file information).

From an incentive standpoint the PKI model is effectively orthogonal to standard business processes. The key owner pays something to the issuing party (or at best, the issuing party absorbs the costs). The standard business process has some sort of contract between the key owner and the issuing party. This totally leaves out the relying party ...
which is the primary beneficiary of the PKI model, from being a part of the contractual business process ... which would imply little or no legal recourse if something went wrong. GAO has created a facade to address this issue by making the TTP certification authorities sort of agents of the GAO ... and having all relying parties sign contracts with the GAO. The PKI frequently creates a total disconnect between the parties of the certification contract ... and the relying parties ... which should have recourse in case something went wrong but aren't even a part of it.

In the specifics of the SET deployment ... the primary potential beneficiaries theoretically were the merchants (from the theory that SET-signed transactions would be considered card-present, card-owner-present ... and lower the merchants' cost for doing the transactions). However the parties paying for the certificates and most of the infrastructure were the issuers and the consumers. Not only may a traditional TTP PKI create a legal disconnect for relying parties but in the SET case there was a major disconnect between who paid for most of the infrastructure