Re: should you trust CAs? (Re: dual-use digital signature vulnerability)

2004-08-03 Thread Aram Perez
Hi Adam,

> From: Adam Back <[EMAIL PROTECTED]>
> Date: Fri, 30 Jul 2004 17:54:56 -0400
> To: Aram Perez <[EMAIL PROTECTED]>
> Cc: [EMAIL PROTECTED], Cryptography <[EMAIL PROTECTED]>, Adam
> Back <[EMAIL PROTECTED]>
> Subject: Re: should you trust CAs? (Re: dual-use digital signature
> vulnerability)
> 
> On Wed, Jul 28, 2004 at 10:00:01PM -0700, Aram Perez wrote:
>> As far as I know, there is nothing in any standard or "good security
>> practice" that says you can't have multiple certificates for the same email
>> address. If I'm willing to pay each time, Verisign will gladly issue me a
>> certificate with my email, I can revoke it, and then pay for another
>> certificate with the same email. I can repeat this until I'm bankrupt and
>> Verisign will gladly accept my money.
> 
> Yes but if you compare this with the CA having the private key, you
> are going to notice that you revoked and issued a new key; also the CA
> will have your revocation log to use in their defense.
> 
> At minimum it is detectable by savvy users, who may notice that e.g. the
> fingerprint for the key they have doesn't match what someone else
> had thought was their key.
> 
>> I agree with Michael H. If you trust the CA to issue a cert, it's
>> not that much more to trust them with generating the key pair.
> 
> It's a big deal to let the CA generate your key pair.  Key pairs should
> be generated by the user.

From a purely (and possibly dogmatic) cryptographic point of view, yes, key
pairs should be generated by the user. But in the real world, as Ian G
points out, where businesses are trying to minimize costs and maximize
profits, it is very attractive to have the CA generate the key pair (and, as
Peter G pointed out, deliver it securely) and issue a certificate at the
same time. I hope you are not using a DOCSIS cable modem to connect to the
Internet, because that is precisely what happened with cable modems. A
major, well-known CA generated the key pair, issued the certificate, and
securely delivered them to the modem manufacturer. The modem manufacturer
then injected the key pair and certificate into the modem and sold it. You
could argue that there is a difference between a "user key pair" and a
"device key pair", and that CA generation can therefore work for cable
modems, but I don't know how you feel about this case.

Until fairly recently, when smart cards could finally generate their own key
pairs, smart cards were delivered with key pairs that were generated outside
the smart card and then injected into them for delivery to the end user.

I'm not trying to change your mind, I'm just trying to point out how the
real business world works, whether we security folks like it or not.

Respectfully,
Aram Perez

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: should you trust CAs? (Re: dual-use digital signature vulnerability)

2004-08-01 Thread Peter Gutmann
Aram Perez <[EMAIL PROTECTED]> writes:

>I agree with Michael H. If you trust the CA to issue a cert, it's not that
>much more to trust them with generating the key pair.

Trusting them to safely communicate the key pair to you once they've generated
it is left as an exercise for the reader :-).

Peter.



Re: should you trust CAs? (Re: dual-use digital signature vulnerability)

2004-08-01 Thread David Honig
At 02:09 PM 7/28/04 -0400, Adam Back wrote:
>The difference is if the CA does not generate private keys, there
>should be only one certificate per email address, so if two are
>discovered in the wild the user has a transferable proof that the CA
>is up-to-no-good.  I.e., the difference is that it is detectable and provable.

Who cares?  A CA is not legally liable for anything they
sign.  A government is not liable for a false ID it issues
to a protected witness.  The emperor has no clothes, just
a reputation, unchallenged, ergo vapor.




=
36 Laurelwood Dr
Irvine CA 92620-1299

VOX: (714) 544-9727 (home) mnemonic: P1G JIG WRAP
VOX: (949) 462-6726 (work -don't leave msgs, I can't pick them up)
   mnemonic: WIZ GOB MRAM
ICBM: -117.7621, 33.7275
HTTP: http://68.5.216.23:81 (back up, but not 99.999% reliable)
PGP PUBLIC KEY: by arrangement

Send plain ASCII text not HTML lest ye be misquoted

--

"Don't 'sir' me, young man, you have no idea who you're dealing with"
Tommy Lee Jones, MIB



Re: should you trust CAs? (Re: dual-use digital signature vulnerability)

2004-08-01 Thread Adam Back
On Wed, Jul 28, 2004 at 10:00:01PM -0700, Aram Perez wrote:
> As far as I know, there is nothing in any standard or "good security
> practice" that says you can't have multiple certificates for the same email
> address. If I'm willing to pay each time, Verisign will gladly issue me a
> certificate with my email, I can revoke it, and then pay for another
> certificate with the same email. I can repeat this until I'm bankrupt and
> Verisign will gladly accept my money.

Yes but if you compare this with the CA having the private key, you
are going to notice that you revoked and issued a new key; also the CA
will have your revocation log to use in their defense.

At minimum it is detectable by savvy users, who may notice that e.g. the
fingerprint for the key they have doesn't match what someone else
had thought was their key.
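
Adam's detectability argument comes down to comparing key fingerprints out of band. A minimal sketch of that comparison, using a SHA-256 fingerprint over the encoded public key (the key bytes below are hypothetical stand-ins for real DER-encoded keys):

```python
import hashlib

def fingerprint(pubkey_der: bytes) -> str:
    """SHA-256 fingerprint of an encoded public key, as colon-separated hex."""
    digest = hashlib.sha256(pubkey_der).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Two relying parties compare what each believes is the same person's key.
key_seen_by_bob = b"\x30\x82\x01\x22 stand-in key bytes"
key_seen_by_carol = b"\x30\x82\x01\x22 different stand-in key bytes"

if fingerprint(key_seen_by_bob) != fingerprint(key_seen_by_carol):
    print("fingerprint mismatch: two different keys certified for one identity")
```

If the CA generated and kept the private key instead, no such mismatch need ever appear, which is the undetectability point made above.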

> I agree with Michael H. If you trust the CA to issue a cert, it's
> not that much more to trust them with generating the key pair.

It's a big deal to let the CA generate your key pair.  Key pairs should
be generated by the user.

Adam



Re: should you trust CAs? (Re: dual-use digital signature vulnerability)

2004-07-30 Thread Aram Perez
Hi Adam,

> The difference is if the CA does not generate private keys, there
> should be only one certificate per email address, so if two are
> discovered in the wild the user has a transferable proof that the CA
> is up-to-no-good.  I.e., the difference is that it is detectable and provable.

As far as I know, there is nothing in any standard or "good security
practice" that says you can't have multiple certificates for the same email
address. If I'm willing to pay each time, Verisign will gladly issue me a
certificate with my email, I can revoke it, and then pay for another
certificate with the same email. I can repeat this until I'm bankrupt and
Verisign will gladly accept my money.

I agree with Michael H. If you trust the CA to issue a cert, it's not that
much more to trust them with generating the key pair.

Respectfully,
Aram Perez



Re: should you trust CAs? (Re: dual-use digital signature vulnerability)

2004-07-28 Thread Anne & Lynn Wheeler
At 12:09 PM 7/28/2004, Adam Back wrote:
> The difference is if the CA does not generate private keys, there
> should be only one certificate per email address, so if two are
> discovered in the wild the user has a transferable proof that the CA
> is up-to-no-good.  I.e., the difference is that it is detectable and provable.
> If the CA in normal operation generates and keeps (or claims to
> delete) the user private key, then CA misbehavior is _undetectable_.
> Anyway if you take the WoT view, anyone who may have a conflict of
> interest with the CA, or if the CA or its employees or CPS is of
> dubious quality, or who may be a target of CA cooperation with law
> enforcement, secret service, etc., would be crazy to rely on a CA.  WoT
> is the answer so that the trust maps directly to the real-world trust.
> (Outsourcing trust management seems like a dubious practice, which in
> my view is for example why banks do their own security,
> thank-you-very-much, and don't use 3rd-party CA services.)
> In this view you use the CA as another link in the WoT but if you have
> high security requirements you do not rely much on the CA link.
in the case of SSL domain name certificates ... it may just mean that 
somebody has been able to hijack the domain name ... and produce enough 
material that convinces the CA to issue a certificate for that domain name. 
recent thread in sci.crypt:
http://www.garlic.com/~lynn/2004h.html#28  Convince me that SSL 
certificates are not a big scam

the common verification used for email address certificates (by 
certification authorities) ... is to send something to that email address 
with some sort of "secret" instructions. so the threat model is some sort 
of attack on email from the CA ... snarf the user's ISP/webmail password 
and intercept the CA verification email. (it simply falls within all the 
various forms of identity theft ... and is probably significantly simpler 
than getting a fraudulent driver's license). with the defense that it is 
possibly another form of identity theft ... say you ever actually stumbled 
across such a fraudulently issued certificate ... it would probably be 
difficult to prove whether or not the certification authority was actually 
involved in any collusion. even discounting that there is no inter-CA 
duplicate-certificate issuing verification ... there are enough failure 
scenarios for public/private keys ... that somebody could even convince 
the same CA to issue a new certificate for the same email address (even 
assuming that they bothered to check).

-
Anne & Lynn Wheeler  http://www.garlic.com/~lynn/ 



should you trust CAs? (Re: dual-use digital signature vulnerability)

2004-07-28 Thread Adam Back
The difference is if the CA does not generate private keys, there
should be only one certificate per email address, so if two are
discovered in the wild the user has a transferable proof that the CA
is up-to-no-good.  I.e., the difference is that it is detectable and provable.

If the CA in normal operation generates and keeps (or claims to
delete) the user private key, then CA misbehavior is _undetectable_.

Anyway if you take the WoT view, anyone who may have a conflict of
interest with the CA, or if the CA or its employees or CPS is of
dubious quality, or who may be a target of CA cooperation with law
enforcement, secret service, etc., would be crazy to rely on a CA.  WoT
is the answer so that the trust maps directly to the real-world trust.
(Outsourcing trust management seems like a dubious practice, which in
my view is for example why banks do their own security,
thank-you-very-much, and don't use 3rd-party CA services.)

In this view you use the CA as another link in the WoT but if you have
high security requirements you do not rely much on the CA link.

Adam

On Wed, Jul 28, 2004 at 11:15:16AM -0400, [EMAIL PROTECTED] wrote:
> I would like to point out that whether or not a CA actually has the
> private key is largely immaterial because it always _can_ have the
> private key - a CA can always create a certificate for Alice whether or
> not Alice provided a public key.



Re: dual-use digital signature vulnerability

2004-07-28 Thread Sean Smith
For what it's worth, last week I had the chance to eat dinner with 
Carlisle Adams (author of the PoP RFC), and he commented that he didn't 
know of any CA that did PoP any other way than having the client sign 
part of a CRM.

This seems to contradict Peter's experience.
I'd REALLY love to see some real numbers here: how many CAs (over how 
many users) do PoP a sane way; how many do it a silly way; what 
applications people use their keys for; etc.

--Sean


Re: dual-use digital signature vulnerability

2004-07-22 Thread Anne & Lynn Wheeler
At 05:37 AM 7/22/2004, Amir Herzberg wrote:
Most (secure) signature schemes actually include the randomization as part 
of their process, so adding nonces to the text before signing is not 
necessary. OTOH, I don't see any problem in defining between the parties 
(in the `meta-contract` defining their use of public key signatures) that 
the signed documents are structured with a random field before and after 
the `actual contract`, as long as the fields are well defined.

there has been some claim that large random nonces as part of the message ... 
before hashing and signing ... are characteristic of RSA signatures. one of the 
issues with DSA and hardware tokens in the 90s was that none of the 
hardware tokens had reliable random number generators. if you were doing a DSA 
(or ECDSA) infrastructure ... then integrity was dependent on a quality random 
number generator as part of the signature process (to preserve the integrity of 
the private key). in some sense, the burden of large random nonces (as part of 
the content to be signed) was shifted to the party(s) generating the message as 
part of the RSA process. in theory, DSA/ECDSA eliminates that requirement 
... especially as you move into the late 90s, when hardware tokens started 
to appear that had quality random number generators.

protocols have had servers contributing unique values in signed messages 
... in things like authentication protocols ... as a countermeasure to 
replay attacks (on the server).

protocols have had clients contributing some values in signed messages ... 
in an authentication protocol ... as a countermeasure to server attacks on 
clients. it isn't necessary that the client-contributed part bracket 
both the start and end of the message ... in a digital signature 
environment ... since the digital signature protects the integrity of the 
whole message. the client-contributed part could be a simple readable text 
disclaimer ... comparable to some disclaimers you see at the bottom of 
emails (especially from lawyers, doctors, and/or people that work for such 
firms ... you even see it in various mailing lists from people that work for 
the big accounting firms).

sometimes the recommendations are that both server and client contribute 
something unique to the signed message ... as generic countermeasures ... 
regardless of whether the situation is actually vulnerable to the 
associated attacks. In general, where the server incurs some expense and/or 
liability associated with every message ... the server (or relying-party) 
is probably interested in countermeasures against replay attacks.

one of the requirements given the x9a10 working group (for the x9.59 protocol) 
... was to be able to perform the operation in a single round-trip ... w/o 
any sort of protocol chatter. this is comparable to the existing electronic 
payment business process. the countermeasure that the infrastructure uses 
for replay attacks is to have the transactions time-stamped and a log 
kept. transactions with time-stamps that predate the log cut-off are deemed 
invalid. in the x9.59 transaction scenario ... the signing entity (in 
theory) specifically approved every transaction and used an ecdsa signature. 
the ecdsa signature would preserve the integrity of the transaction. the 
time-stamp in the transaction would indicate whether it was within the 
current active log window of the payment processor, and the randomness of 
the ecdsa signature would provide uniqueness: two transactions that were 
otherwise identical (in amount, time, etc.) would be unique if they had 
different ecdsa signatures ... effectively provided by the definition of dsa & 
ecdsa.
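
The replay countermeasure described above (time-stamped transactions checked against a log with a cut-off) can be sketched as follows; the class and method names are illustrative, not taken from the x9.59 standard:

```python
import time

class ReplayGuard:
    """Toy sketch of a log-plus-cutoff replay countermeasure: transactions
    whose time-stamps predate the log window are rejected outright, and
    within the window a repeated signature is rejected as a replay."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.seen = {}  # signature bytes -> timestamp

    def accept(self, signature, timestamp, now=None):
        now = time.time() if now is None else now
        if timestamp < now - self.window:
            return False        # predates the log cut-off: deemed invalid
        if signature in self.seen:
            return False        # identical signature already logged: a replay
        self.seen[signature] = timestamp
        return True
```

Uniqueness of otherwise-identical transactions comes from the per-signature randomness of DSA/ECDSA, as the paragraph above notes: two identical transactions still carry different signatures, so the log test distinguishes them from replays.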

the addition of ecdsa signature to existing payment transaction  
exactly preserved all the existing business processes and flows ... 
including the requirement that the client can originate the transaction and 
the message flow could complete in a single round-trip.

the addition of the ecdsa signature added
a) integrity of the transaction message,
b) authenticated the origin, and
c) provided transaction uniqueness.
no (public key) certificate was required since the transaction was being 
processed by the relying-party ... which had the public key on file and had 
the original of all the information that went into a SET relying-party-only 
certificate. the only function of the SET relying-party-only certificate was 
to repeatedly travel from the client to the relying party, increasing the 
payload and bandwidth requirements by a factor of one hundred, carrying a 
static, trivial subset of information to the relying party ... which the 
relying party already had ... making it redundant and superfluous (other 
than contributing enormous payload bloat).

there was one additional thing that was specified in the x9.59 standard ... 
that account numbers used in x9.59 transactions could not be used in 
non-authenticated transactions (not that all the payment processors already 
supported feature/function of ma

Re: dual-use digital signature vulnerability

2004-07-22 Thread Amir Herzberg
Barney Wolff wrote:
Pardon a naive question, but shouldn't the signing algorithm allow the
signer to add two nonces before and after the thing to be signed, and
make the nonces part of the signature?  That would eliminate the risk
of ever signing something exactly chosen by an attacker, or at least
so it would seem.
Most (secure) signature schemes actually include the randomization as 
part of their process, so adding nonces to the text before signing is 
not necessary. OTOH, I don't see any problem in defining between the 
parties (in the `meta-contract` defining their use of public key 
signatures) that the signed documents are structured with a random field 
before and after the `actual contract`, as long as the fields are well 
defined.
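
The structure described here (a random field before and after the `actual contract`, agreed between the parties in the `meta-contract`) might be sketched like this; HMAC-SHA256 stands in for a real public-key signature, and all names are illustrative:

```python
import hashlib
import hmac
import secrets

def sign_bracketed(contract: bytes, key: bytes) -> dict:
    """Bracket the contract with fresh random nonces before signing, so the
    exact signed bytes are never wholly chosen by the counterparty.
    (HMAC-SHA256 is a stand-in here for a real public-key signature.)"""
    pre, post = secrets.token_bytes(16), secrets.token_bytes(16)
    sig = hmac.new(key, pre + contract + post, hashlib.sha256).digest()
    return {"pre": pre, "contract": contract, "post": post, "sig": sig}

def verify_bracketed(doc: dict, key: bytes) -> bool:
    """Recompute over the well-defined pre/contract/post fields."""
    expected = hmac.new(key, doc["pre"] + doc["contract"] + doc["post"],
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, doc["sig"])
```

Because the fields are well defined, the verifier can check the signature over all three, and tampering with any field invalidates it.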
--
Best regards,

Amir Herzberg
Associate Professor, Computer Science Dept., Bar Ilan University
http://amirherzberg.com (information and lectures in cryptography & 
security)
Mirror site: http://www.mfn.org/~herzbea/



Re: dual-use digital signature vulnerability

2004-07-22 Thread Rich Salz
> attempt to address this area; rather than simple "i agree"/"disagree"
> buttons ... they put little checkmarks at places in the scrolled form ... you
> have to at least scroll thru the document and click on one or more
> checkmarks ... before doing the "i agree" button. a digital signature has
> somewhat higher integrity than simply clicking on the "i agree" button ...
See US patent 5,995,625. The abstract:
A method of unwrapping wrapped digital data that is unusable
while wrapped, includes obtaining an acceptance phrase from a
user; deriving a cryptographic key from the acceptance phrase;
and unwrapping the package of digital data using the derived
cryptographic key. The acceptance phrase is a phrase entered
by a user in response to information provided to the user. The
information and the acceptance phrase can be in any appropriate
language. The digital data includes, alone or in combination, any
of: software, a cryptographic key, an identifying certificate,
an authorizing certificate, a data element or field of an
identifying or authorizing certificate, a data file representing
an image, data representing text, numbers, audio, and video.
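
The mechanism the abstract describes (derive a cryptographic key from the typed acceptance phrase, then unwrap the data with it) might be sketched as below. This is a toy illustration, not the patented implementation: PBKDF2 derives the key from the phrase, and a SHA-256 counter-mode XOR keystream stands in for a real cipher.

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode (stand-in for a real cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def wrap(data: bytes, acceptance_phrase: str, salt: bytes) -> bytes:
    """XOR the data with a keystream keyed by the acceptance phrase."""
    key = hashlib.pbkdf2_hmac("sha256", acceptance_phrase.encode(), salt, 100_000)
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

unwrap = wrap  # XOR is its own inverse: the same phrase recovers the data
```

Only a user who enters the exact acceptance phrase derives the right key; any other phrase leaves the wrapped data unusable, which is the enforcement the abstract describes.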

--
Rich Salz  Chief Security Architect
DataPower Technology   http://www.datapower.com
XS40 XML Security Gateway  http://www.datapower.com/products/xs40.html
XML Security Overview  http://www.datapower.com/xmldev/xmlsecurity.html



Re: dual-use digital signature vulnerability

2004-07-21 Thread Jerrold Leichter
| note that some of the online click-thru "contracts" have been making an
| attempt to address this area; rather than simple "i agree"/"disagree"
| buttons ... they put little checkmarks at places in the scrolled form ... you
| have to at least scroll thru the document and click on one or more
| checkmarks ... before doing the "i agree" button. a digital signature has
| somewhat higher integrity than simply clicking on the "i agree" button ...
| but wouldn't subsume the efforts to demonstrate that a person was required
| to make some effort to view the document. Of course in various attack scenarios
| ... simple checkmark clicks could be forged. However, the issue being
| addressed isn't a forging attack ... it is a person repudiating that they
| read the T&Cs before hitting the "I agree" button.
...which makes for an interesting example of the way in which informal
understandings don't necessarily translate well when things are automated.

The law school professor of a friend of mine told a story about going to rent
an apartment.  The landlord was very surprised to watch him sign it with only
a glance - not only was this guy a law professor, but he had done a stint as a
Housing Court judge.  "Aren't you going to read it before signing?"  "No -
it's not enforceable anyway."  (This is why there have been cases of landlords
who refused to rent to lawyers - a refusal that was upheld!)

If you are offered a pre-drafted contract on a take-it-or-leave-it basis -
the technical name is an "adhesion contract", I believe - and you really need
whatever is being contracted for, you generally *don't* want to read the
thing too closely.

When you buy a house these days, at least some lawyers will have you initial
every page of the agreement.  Not that there is anything in there you want to
read too closely either.  (The standard terms for the purchase of a house in
Connecticut have you agree not to "use or store" gasoline on the property.  I
pointed out to my lawyer - who had actually been on the committee that last
reviewed the standard form - that as written this meant I couldn't drive my
car into the garage, or even the driveway.  His basic response was "Don't
worry about it.")

The black-and-white of a written contract makes things appear much more
formal and well-defined than they actually are.  The real world rests on many
unwritten, even unspoken, assumptions and "ways of doing business".  It's
just the way people are built.  When digital technologies only *seem* to match
existing mechanisms, all kinds of problems arise.  Despite such sayings as
"You can't tell a book by its cover", we trust others based on appearances
all the time.  Twenty years ago, if a company had printed letterhead with a
nice logo, you'd trust them to be "for real".  Every once in a while, a con
man could abuse this trust - but it was an expensive undertaking, and most
people weren't really likely to ever see such an attack.

Today, a letterhead or a nice business card means nothing - even when they are
on paper, as opposed to being "just bits".  It's really, really difficult to
come up with formal, mechanized equivalents of these informal, intuitive
mechanisms.
-- Jerry



RE: dual-use digital signature vulnerability

2004-07-21 Thread Anton Stiglic

About using a signature key to only sign contents presented in a meaningful
way that the user supposedly read, and not random challenges:

The X.509 PoP (proof-of-possession) doesn't help things out, since a public
key certificate is given to a user by the CA only after the user has
demonstrated to the CA possession of the corresponding private key by
signing a challenge.  I suspect most implementations use a random challenge.
For things to be clean, the challenge would need to be a content that is
readable, and that is clearly only used for proving possession of the
private key in order to obtain the corresponding public key certificate.

X.509 PoP gets even more twisted when you want to certify encryption keys (I
don't know what ietf-pkix finally decided upon for this..., best solution
seems to be to encrypt the public key certificate and send that to the user,
so the private key is only ever used to decrypt messages...)
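
The "clean" variant described above - a readable challenge clearly scoped to proving possession for certification - might look like the sketch below; the wording and field names are hypothetical, and HMAC-SHA256 stands in for the requester's private-key signature:

```python
import hashlib
import hmac
import secrets

def make_pop_challenge(request_id: str) -> bytes:
    """A proof-of-possession challenge that is readable and self-describing,
    so its signature cannot later be passed off as agreement to a document."""
    nonce = secrets.token_hex(16)
    return ("Proof-of-possession only; not a contract or authorization.\n"
            f"Certificate request: {request_id}\n"
            f"Nonce: {nonce}\n").encode()

def sign_challenge(challenge: bytes, key: bytes) -> bytes:
    # Stand-in for signing with the private key being certified.
    return hmac.new(key, challenge, hashlib.sha256).digest()
```

A verifier that checks the challenge's fixed preamble before accepting the signature cannot be tricked into treating a document signature as PoP, or vice versa.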


--Anton




Re: dual-use digital signature vulnerability

2004-07-21 Thread Anne & Lynn Wheeler
At 08:25 AM 7/19/2004, Jerrold Leichter wrote:
A traditional "notary public", in modern terms, would be a tamper-resistant
device which would take as inputs (a) a piece of text; (b) a means for
signing (e.g., a hardware token).  It would first present the actual text
that is being signed to the party attempting to do the signing, in some
unambiguous form (e.g., no invisible fonts - it would provide you with a
high degree of assurance that you had actually seen every bit of what you
were signing).  The signing party would indicate assent to what was in the
text.  The notary might, or might not - depending on the "means for 
signing" -

note that some of the online click-thru "contracts" have been making an 
attempt to address this area; rather than simple "i agree"/"disagree" 
buttons ... they put little checkmarks at places in the scrolled form ... you 
have to at least scroll thru the document and click on one or more 
checkmarks ... before doing the "i agree" button. a digital signature has 
somewhat higher integrity than simply clicking on the "i agree" button ... 
but wouldn't subsume the efforts to demonstrate that a person was required 
to make some effort to view the document. Of course in various attack scenarios 
... simple checkmark clicks could be forged. However, the issue being 
addressed isn't a forging attack ... it is a person repudiating that they 
read the T&Cs before hitting the "I agree" button.

With the deprecation of the "non-repudiation" bits in long-ago and far-away 
manufactured certificates (which possibly have absolutely no relevance 
to the conditions under which digital signatures are actually performed) 
... there has been some evolution of "non-repudiation" processes. An issue 
for the "non-repudiation" processes is whether or not the person actually 
paid attention to what they were "signing" (regardless of the reason).

An issue for relying parties is not only whether or not there was some 
non-repudiation process in effect, but also whether the relying party has any 
proof regarding that non-repudiation process. If there is some risk and/or 
expense associated with repudiation (regardless of whether or 
not it is a fraud issue), then a relying party might adjust the factors 
they use for performing some operation (i.e. they might not care as much 
about a low-value withdrawal transaction for $20 as about a 
withdrawal transaction for $1M).

some physical contracts are now adding the requirement that, in addition to 
signing (the last page), people also initial significant 
paragraphs at various places in the contract.

--
Anne & Lynn Wheeler  http://www.garlic.com/~lynn/ 



Re: dual-use digital signature vulnerability

2004-07-21 Thread Jerrold Leichter
| the issue in the EU FINREAD scenario was that they needed a way to
| distinguish between (random) data that got signed ... that the key owner
| never read ... and the case where the key owner was actually signing to
| indicate agreement, approval, and/or authorization. They specified a
| FINREAD terminal which supposedly met the requirements that the key owner had
| to have read, and to some extent understood ... and approved as to the
| meaning of the contents of what was being signed.
|
| However, the FINREAD specification didn't define any mechanism that
| provided proof to the relying party that a FINREAD terminal was actually
| used as part of the signature process.
Fascinating.  They are trying to re-create the historical, mainly now lost,
role of a notary public.

Notary publics came into being at a time when many people were illiterate,
and only the wealthy had lawyers.  A notary public had two distinct roles:

- To ensure that a person signing a notarized document actually
understood what he was signing;
- To witness that the signer - who might well sign with just an X -
is the person whose name appears.

The first role is long lost.  Notary publics don't look at the material being
signed.  We lost our trust in a "public" official explaining contracts - the
assumption now is that everyone gets his own lawyer.  The second role
remained, even with universal literacy.  A notary public is supposed to check
for some form of "good ID" - or know the person involved, the traditional
means.

A traditional "notary public", in modern terms, would be a tamper-resistant
device which would take as inputs (a) a piece of text; (b) a means for
signing (e.g., a hardware token).  It would first present the actual text
that is being signed to the party attempting to do the signing, in some
unambiguous form (e.g., no invisible fonts - it would provide you with a
high degree of assurance that you had actually seen every bit of what you
were signing).  The signing party would indicate assent to what was in the
text.  The notary might, or might not - depending on the "means for signing" -
then authenticate the signer further.  The notary would then pass the text to
the "means for signing", and verify that what came back was the same text,
with an alleged signature attached in a form that could not modify the text.
(E.g., if the signature were an actual RSA signature of the text, it would
have to decrypt it using the signer's public key.  But if the signature were
a marked, separate signature on a hash, then there is no reason why the notary
has to be able to verify anything about the signature.)  Finally, the notary
would sign the signed message itself.

We tend not to look at protocols like this because we've become very
distrustful of any 3rd party.  But trusted 3rd parties have always been
central to most business transactions, and they can be very difficult to
replace effectively or efficiently.
-- Jerry



Re: dual-use digital signature vulnerability

2004-07-21 Thread Anne & Lynn Wheeler
At 08:08 PM 7/18/2004, Sean Smith wrote:
Why isn't it sufficient?   (Quick: when was the last time anyone on this 
list authenticated by signing unread random data?)

The way the industry is going, user keypairs live in a desktop keystore, 
and are used for very few applications.  I'd bet the vast majority of 
usages are client-side SSL, signing, and encryption.

If this de facto universal usage suite contains exactly one authentication 
protocol that has a built-in countermeasure, then when this becomes solid, 
we're done.
so if digital signing is used for nothing other than authentication ... with 
signing of challenge data (with or without client-side modification) ... 
then there is no concern that something signed might be a document or 
authorization form. it is a non-problem.

EMV chipcards are supposed to be doing dynamic-data RSA signing of 
authorized transactions ... at some point, real soon now ... and the 
financial industry is writing some number of apps to be able to use the 
EMV cards for other applications.

this is from yesterday
http://www.smartcardalliance.org/industry_news/industry_news_item.cfm?itemID=1316
which talks about additional applications (in addition to expected RSA 
signing at EMV point-of-sale terminals)

* OneSMART MasterCard Authentication – ensures a higher level of security 
for online shopping and remote banking
* OneSMART MasterCard Web – allows cardholders to securely store and manage 
a wide range of personal data (such as names, addresses, URLs, log-on 
passwords) on the smart card chip
* OneSMART MasterCard Pre-Authorised – a new chip-based payment solution 
suitable for new markets and off-line payment environments

===
it doesn't give any details, but if the expected RSA signing at EMV 
point-of-sale terminals is an example of agreement/approval ... then the 
authentication application may be RSA signing of some sort of challenge 
data ... and i would guess that few, if any, people make it a habit to 
examine presented challenge data.

part of the issue is that creating an environment where all authentication 
protocols and all authentication implementations are required to have 
countermeasures against a dual-use attack on the signing of documents or 
transactions ... means that loads of stuff has to be perfect in the future.
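
The domain-separation idea behind such a countermeasure can be sketched in a few lines. This is a toy illustration, not any standard's mechanism: HMAC with a shared demo key stands in for the public/private key pair so the sketch is runnable, and the `CONTEXT` prefix is an invented name.

```python
import hmac, hashlib

KEY = b"demo-key"                # toy stand-in for the signer's key pair
CONTEXT = b"AUTH-CHALLENGE-V1:"  # hypothetical domain-separation prefix

def sign_challenge(challenge: bytes) -> bytes:
    # The token only ever signs CONTEXT || challenge for authentication,
    # so an authentication signature can never double as a document signature.
    return hmac.new(KEY, CONTEXT + challenge, hashlib.sha256).digest()

def verify_auth(challenge: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sig, sign_challenge(challenge))

def verify_document(document: bytes, sig: bytes) -> bool:
    # Document signatures are computed over the bare contents, no prefix.
    expected = hmac.new(KEY, document, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)

# Dual-use attempt: valid text sent in lieu of random challenge data.
sneaky = b"I agree to pay the bearer $1,000,000"
sig = sign_challenge(sneaky)             # victim signs it unread, as a "challenge"
assert verify_auth(sneaky, sig)          # works as authentication
assert not verify_document(sneaky, sig)  # worthless as a document signature
```

The catch, as the paragraph above notes, is that every protocol and every implementation that ever touches the key has to apply such a prefix; one exception anywhere reopens the attack.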

the other is requiring more proof regarding the signing environment to be 
carried when the signing is associated with approval, agreement, and/or 
authorization (more than simple authentication) ... for instance, that some 
of the non-repudiation features (that supposedly address such issues) have 
to also sign in some manner to indicate the non-repudiation features are in 
place.

--
Anne & Lynn Wheeler http://www.garlic.com/~lynn/ 



Re: dual-use digital signature vulnerability

2004-07-18 Thread Sean Smith
it isn't sufficient that you show there is some specific 
authentication protocol with unread, random data ... that has 
countermeasures against a dual-use attack ... but you have to 
exhaustively show that the private key has never, ever signed any 
unread random data that failed to contain a dual-use attack 
countermeasure.

Why isn't it sufficient?   (Quick: when was the last time anyone on 
this list authenticated by signing unread random data?)

The way the industry is going, user keypairs live in a desktop 
keystore, and are used for very few applications.  I'd bet the vast 
majority of usages are client-side SSL, signing, and encryption.

If this de facto universal usage suite contains exactly one 
authentication protocol that has a built-in countermeasure, then when 
this becomes solid, we're done.

Our energy would be better spent on the real weaknesses: such as the 
ease of getting desktops to just cough up the private key, or to use it 
for client-side SSL without ever informing the user.

And on the real problems: such as using the standard suite to get the 
trust assertions to match the way that trust really flows in the real 
world.

--Sean


Re: dual-use digital signature vulnerability

2004-07-18 Thread Anne & Lynn Wheeler
there is a variation on the EU FINREAD terminal that sort of provides a 
chain of trust/evidence (one that has almost nothing at all to do with the 
traditional trusted third party certification authorities and their 
certificates):

1) there is a certain class of certified terminals with security modules, 
tamper evident, which are known to always present an accurate text of what 
is about to be signed ... and then ask the person if they agree with what 
was presented ... which they have to indicate by pressing some button (or 
set of buttons)

2) there is a certain class of certified hardware tokens which contain 
unique private keys.

3) the specific certified hardware terminals are able to verify what kind 
of hardware token they are dealing with and only work with the appropriate 
hardware token

4) the specific certified hardware tokens are able to verify what kind of 
terminal they are dealing with and only work with the appropriate hardware 
terminals.

5) relying party gets a signed message
6) the relying party can verify the digital signature with a specific 
public key known to be associated with a known hardware token

7) the known hardware token is known to be in the possession of a specific 
person  which implies "something you have" authentication

8) the known hardware token is known to satisfy requirements #2 and #4
9) the corresponding terminals that the hardware token works with are known 
to satisfy requirements #1 and #4

10) given conditions 1-9, the relying party has some assurance that the 
token owner has actually read, understood, and agrees with the contents of 
the message.
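
The relying party's inference in steps 5-10 can be sketched as a lookup plus a signature check. The registry of certified tokens below is a stand-in for the infrastructure knowledge the scenario assumes; nothing here comes from the actual FINREAD specification.

```python
# Hypothetical registry: public keys of tokens known to satisfy #2 and #4,
# and the owners they were issued to.
CERTIFIED_TOKENS = {
    "pubkey-A": {"owner": "alice", "certified": True},
}

def relying_party_accepts(pubkey: str, signature_verifies: bool) -> bool:
    token = CERTIFIED_TOKENS.get(pubkey)
    if token is None or not signature_verifies:
        return False
    # Chain of inference: a certified token only works with certified
    # terminals (#3, #4), and certified terminals always display the text
    # and require explicit approval (#1) ... so a verified signature implies
    # the owner read and agreed to the contents (#10).
    return True

assert relying_party_accepts("pubkey-A", True)       # known token, valid signature
assert not relying_party_accepts("pubkey-B", True)   # unknown token: no chain of trust
```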

In this scenario the relying party wouldn't need direct evidence, included 
as part of the integrity of each message, that the signing took place in a 
non-repudiation environment ... the infrastructure assurance as to the 
kind of terminals, tokens, and procedures provides such indirect evidence as 
part of the infrastructure operation (aka the chain of evidence/trust 
scenario ... having nothing at all to do with traditional third party 
certification authorities and their certificates).

This kind of scenario falls apart ... if the hardware token ever 
digitally signs some contents that were not provided by a trusted terminal. 
In that case the chain of evidence/trust is lost as to whether the token 
owner has read, understood, and agrees with, approves, and/or authorizes the 
contents of what is being signed.

Either you
1) have some proof that every use of the specific hardware token (and its 
corresponding unique private key) for digital signing always meets the 
requirements laid out as to human reading, understanding, and agreeing to, 
approving, and/or authorizing the contents of what is being signed ... and 
it can never be used in any other way

2) or that every use of the specific hardware token (and its corresponding 
unique private key) for digital signing that is purported to meet the 
requirement for human reading, understanding, and agreeing to, approving, 
and/or authorizing the contents of what is being signed ... carries in the 
integrity part of the message some indication/proof of the human reading, 
understanding, and agreeing, approving, and/or authorizing (and that 
indication can't be fraudulently fabricated if the hardware token were 
ever used to sign some message that doesn't involve 
reading/understanding/approval by the token owner).


--
Anne & Lynn Wheeler http://www.garlic.com/~lynn/ 



Re: dual-use digital signature vulnerability

2004-07-18 Thread Anne & Lynn Wheeler
At 10:36 AM 7/18/2004, Sean Smith wrote:
In SSL and TLS, the client isn't signing random data provided by the 
adversary.  Rather, the client is signing a value derived from data both 
the client and server provide as part of the handshake.  I do not believe 
it is feasible for a malicious server to choose its nonces so that the 
resulting signature coincides with a valid signature on a delegation 
cert the client might have constructed.
the issue in the EU FINREAD scenario was that they needed a way to 
distinguish between (random) data that got signed ... that the key owner 
never read ... and the case where the key owner was actually signing to 
indicate agreement, approval, and/or authorization. They specified a 
FINREAD terminal which supposedly met the requirement that the key owner 
had to have read, to some extent understood ... and approved the meaning of 
the contents of what was being signed.

However, the FINREAD specification didn't define any mechanism that 
provided proof to the relying party that a FINREAD terminal was actually 
used as part of the signature process.

Some of the non-repudiation service definitions also talk about processes 
that would provide a high likelihood that the person performing the signing 
has read, understood, and agrees with the contents of what is being signed. 
However, many of them fail to specify a mechanism that proves to a relying 
party that such a non-repudiation service was actually used.

so the dual-use attack ... is that if a key-owner ever, at any time, signs 
something w/o reading it ... then there is the possibility that the data 
being signed actually contains something significant.

if there is never any proof, included as part of the integrity of the 
message ... that proves to the relying party that some sort of 
non-repudiation environment was used as part of the digital signing  
then it falls back on requiring an exhaustive proof that never in the 
history of the private key was it ever used to sign contents that were 
unread and could possibly be random.

it isn't sufficient that you show there is some specific authentication 
protocol with unread, random data ... that has countermeasures against a 
dual-use attack ... but you have to exhaustively show that the private key 
has never, ever signed any unread random data that failed to contain a 
dual-use attack countermeasure.

the alternative to the exhaustive proof about every use of the private key 
... is strong proof (that is built into the integrity of the signed 
contents) that a non-repudiation environment was used for the digital 
signing (a strong implication that the key owner read, understood, 
approves, agrees with, and/or authorizes the contents of the message).

the NIST scenario for an exhaustive proof ... rather than exhaustive proof 
about every use of a specific private key ... would be to show that 
it is impossible to use the private key in any protocol not written by the 
people making the presentation.

this came up in a SET discussion long ago and far away. it was about 
whether there was ever any SET gateway protocol that could set the 
"signature verified" bit in the ISO 8583 message. One of the SET vendors 
claimed that the software they shipped was certified never to set the 
"signature verified" bit in the ISO 8583 message if the signature 
hadn't actually been verified (and therefore there wasn't an infrastructure 
vulnerability). The problem was that they had created an infrastructure 
that didn't require end-to-end proof of the signature verification ... and 
they were unable to control that every generated ISO 8583 message was 
certified as only being able to be generated by their code.  They had 
created an infrastructure vulnerability ... that allowed a wide variety of 
software to be used ... and was only safe if they could prove that every 
copy of code generating every ISO 8583 message was their code and it was 
impossible to modify and/or substitute something else in the generation of 
an ISO 8583 message.

The countermeasure to the seriously flawed SET design requiring exhaustive 
proof that every ISO 8583 message ever created that carried the 
"signature verified" bit ... could only have been created by unmodified, 
certified software ... was to support end-to-end authentication. And for 
a slight drift ... that wasn't practical in the SET design because the 
inclusion of a certificate would have represented horrendous payload bloat 
of two orders of magnitude (discussed in some detail in recent posts to 
another thread in this mailing list).

--
Anne & Lynn Wheeler http://www.garlic.com/~lynn/ 



Re: dual-use digital signature vulnerability

2004-07-18 Thread Sean Smith
at the NIST PKI workshop a couple months ago ... there were a number
of infrastructure presentations where various entities in the
infrastructure were ... signing random data as part of an authentication 
protocol

I believe our paper may have been one of those that Lynn objected to.  
We used the same key for client-side TLS as well as for signing a 
delegation certificate.  However (as we made sure to clarify in the 
revised paper for the final proceedings):

In SSL and TLS, the client isn't signing random data provided by the 
adversary.  Rather, the client is signing a value derived from data 
both the client and server provide as part of the handshake.  I do not 
believe it is feasible for a malicious server to choose its nonces so 
that the resulting signature coincides with a valid signature on a 
delegation cert the client might have constructed.

(On the other hand, if we're wrong, I'm sure that will be pointed out 
repeatedly here in the next day or two :)
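
A rough sketch of why the signed value resists server control (the actual TLS field layout is more involved; the hash-of-both-nonces shape below is a simplification):

```python
import hashlib, os

def value_to_sign(client_random: bytes, server_random: bytes,
                  handshake_msgs: bytes) -> bytes:
    # The client signs a digest over material from BOTH sides of the handshake.
    return hashlib.sha256(client_random + server_random + handshake_msgs).digest()

client_random = os.urandom(32)   # contributed by the client itself
server_random = os.urandom(32)   # the most a malicious server controls

v = value_to_sign(client_random, server_random, b"...handshake messages...")

# To mount a dual-use attack, the server would have to choose its nonce so
# that v collides with the hash of a useful document -- infeasible, since
# the client's 32 fresh random bytes also enter the hash.
assert len(v) == 32
```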

--Sean


Re: dual-use digital signature vulnerability

2004-07-18 Thread Anne & Lynn Wheeler

the fundamental issue is that there are infrastructures using the same 
public/private key pair to digitally sign

1) random authentication data that the signer never looks at and believes 
is of low value ... if they connect to anybody at all ... and are asked to 
digitally sign some random data for authentication purposes ... they do it.

2) contents that they supposedly have read, understood, and are indicating 
that they agree, approve and/or authorize.

i haven't seen any definition of data arriving at the relying party where 
the relying party has proof of whether it was case #1 or case #2. The 
closest was the non-repudiation bit in a certificate. however, the 
non-repudiation bit in a certificate was put in there at the time the 
certificate was manufactured and in no way applies to the environment and 
conditions under which the signature in question actually occurred.

there are definitions like non-repudiation services and/or the EU FINREAD 
definition ... which purports to specify the environment under which the 
"signatures" take place. Note however, while the EU FINREAD defines an 
environment where there is some indication that the signing party might 
have read and agreed to the contents of what is being signed  there is 
nothing in the EU FINREAD specification that would provide proof to the 
relying party that a FINREAD terminal was actually used for any specific 
signing. Anything, like a flag ... not part of a signed message ... that 
might be appended to the transmission ... that makes claims about whether a 
FINREAD terminal was used or not ... could have originated from anywhere 
... analogous to the example where a relying party might be able to 
substitute a certificate with the non-repudiation bit set ... in order to 
change the burden of proof from the relying party to the signing party (in 
a legal dispute ... circa the mid-90s ... where the non-repudiation flag in 
a certificate might have been thought to have some valid meaning; since the 
certificate wasn't covered by the signature ... anybody could claim any 
valid certificate was the certificate used for the transaction).

In any case, if a signing party has ever used their private key to sign 
random data that they haven't read ... and they are ever expected to use 
the same private key in legal signing operations where they are presumed to 
have read, understood, and approved, agreed with, and/or authorized the 
contents ... and there is no proof provided (or included) as part of the 
signed message that the signing occurred in a specified (non-repudiation) 
environment ... then there is no way that a relying party can prove or 
disprove under what conditions a digital signing actually occurred.

misc. past post reference EU FINREAD:
http://www.garlic.com/~lynn/aadsm9.htm#carnivore Shades of FV's Nathaniel 
Borenstein: Carnivore's "Magic Lantern"
http://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, 
here's your private key
http://www.garlic.com/~lynn/aadsm11.htm#4 AW: Digital signatures as proof
http://www.garlic.com/~lynn/aadsm11.htm#5 Meaning of Non-repudiation
http://www.garlic.com/~lynn/aadsm11.htm#6 Meaning of Non-repudiation
http://www.garlic.com/~lynn/aadsm11.htm#23 Proxy PKI. Was: IBM alternative 
to PKI?
http://www.garlic.com/~lynn/aadsm12.htm#24 Interests of online banks and 
their users [was Re: Cryptogram:  Palladium Only for DRM]
http://www.garlic.com/~lynn/aadsm14.htm#35 The real problem that https has 
conspicuously failed to fix
http://www.garlic.com/~lynn/aadsm15.htm#40 FAQ: e-Signatures and Payments
http://www.garlic.com/~lynn/aadsm16.htm#9 example: secure computing kernel 
needed
http://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? 
Photo ID's and Payment Infrastructure
http://www.garlic.com/~lynn/aepay11.htm#53 Authentication white paper
http://www.garlic.com/~lynn/aepay11.htm#54 FINREAD was. Authentication 
white paper
http://www.garlic.com/~lynn/aepay11.htm#55 FINREAD ... and as an aside
http://www.garlic.com/~lynn/aepay11.htm#56 FINREAD was. Authentication 
white paper
http://www.garlic.com/~lynn/2001g.html#57 Q: Internet banking
http://www.garlic.com/~lynn/2001g.html#60 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001g.html#61 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001g.html#62 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001g.html#64 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001i.html#25 Net banking, is it safe???
http://www.garlic.com/~lynn/2001i.html#26 No Trusted Viewer possible?
http://www.garlic.com/~lynn/2001k.html#0 Are client certificates really secure?
http://www.garlic.com/~lynn/2001m.html#6 Smart Card vs. Magnetic Strip Market
http://www.garlic.com/~lynn/2001m.html#9 Smart Card vs. Magnetic Strip Market
http://www.garlic.com/~lynn/2002c.html#10 Opinion on smartcard security 
requested
http://www.garlic.com/~lynn/2002c.html#21 Opinion on smartcard security 
requested

Re: dual-use digital signature vulnerability

2004-07-18 Thread Anne & Lynn Wheeler
At 01:33 AM 7/18/2004, Amir Herzberg wrote:
I don't see here any problem or attack. Indeed, there is a difference
between a signature in the crypto sense and legally-binding
signatures. The latter are defined in one of two ways. One is by the
`digital signature` laws in different countries/states; that approach
is often problematic, since it is quite tricky to define in a general
law a binding between a person or organization and a digital
signature. The other way however is fine, imho: define the digital
signature in a (`regular`) contract between the parties. The contract
defines what the parties agree to be considered as equivalent to their
(physical) signature, with well defined interpretation and
restrictions.
...
the digital signature laws, for the most part, defined how a
certification authority went about binding the owner of a public key
(or at least the entity presenting a public key and a digital
signature that could be verified by that public key) and some other
information ... and presenting that in a certificate. However, I don't
remember seeing any of the e-sign laws a) defining a non-repudiation
environment that is mandated for digital signing
(indicating that the key owner has read, understood, and approves,
agrees with, and/or authorizes the contents of the message) and b)
requiring that, as part of the integrity of the message, there is proof
that such a non-repudiation environment was used.
1)
the relying party being able to certify the integrity level of
something like a hardware token ... for use in "something you have"
authentication ... aka the relying party verifies a digital signature
and that verification may be used to imply "something you have"
authentication (at this point there is absolutely nothing involving a
certificate). However, in order for the relying party to be able to
assume or imply what the verification of the digital signature
actually means ... and therefore how much it can trust the
verification ... it needs to know how the private key is maintained
and operated. If the act of "verifying a digital signature" actually
means or implies that it is "something you have" authentication
... then it needs to have some certification along the lines that the
private key is used and maintained in a hardware token with specific
characteristics. It has nothing at all to do with any certificate
traditionally mentioned in various kinds of e-sign laws.
2)
during the early '90s, identity certificates tended to be
overloaded with all sorts of identity and privacy information. this
was fairly quickly realized to represent serious privacy and liability
issues. this was retrenched to things like relying-party-only
certificates that basically only had a public key and some sort of
account identifier (which could be used by the relying party to pull
up the real information ... w/o having it publicly broadcast all over
the world). However, there were also things like "non-repudiation"
bits defined in certificates ... that have since been severely
deprecated. During the mid-90s there were some infrastructures being
proposed where, if you had some data with an appended digital
signature and an appended certificate containing a non-repudiation bit
... then the burden of proof (in disputes) could be shifted from the
relying party to the signing party.
This was vulnerable to two possible exploits:
a) the digital signer had believed that they had signed random data as
part of an authentication protocol ... as opposed to having signed
some document contents indicating agreement, approval, and/or
authorization (as in a real live signature ... aka the dual-use
scenario) and/or
b) since the appended certificate isn't part of the signed transaction
... the relying party might be able to find a digital certificate
(belonging to that key-owner for the same public key) that had the
non-repudiation bit set and substitute that non-repudiation certificate
for the certificate that the key-owner had actually appended (aka the
certificate is not part of the integrity of the message covered under
the digital signature).
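
Exploit (b) is easy to see in miniature: the signature covers only the message, so the verification outcome is identical no matter which certificate is appended. In this toy sketch, HMAC with a demo key stands in for the public-key signature, and the dict "certificates" are invented for illustration.

```python
import hmac, hashlib

KEY = b"demo-key"   # toy stand-in for the key-owner's signing key

def sign(msg: bytes) -> bytes:
    return hmac.new(KEY, msg, hashlib.sha256).digest()

message = b"transfer $10"
signature = sign(message)

cert_plain = {"pubkey": "K", "non_repudiation": False}   # what the owner appended
cert_nr    = {"pubkey": "K", "non_repudiation": True}    # same key, NR bit set

def verify(msg: bytes, sig: bytes, cert: dict) -> bool:
    # NOTE: cert never enters the computation -- it sits outside the
    # integrity of the signed message.
    return hmac.compare_digest(sig, sign(msg))

assert verify(message, signature, cert_plain)
assert verify(message, signature, cert_nr)   # substitution goes unnoticed
```
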
3)
at the NIST PKI workshop a couple months ago ... there were a number
of infrastructure presentations where various entities in the
infrastructure were
a) signing random data as part of an authentication protocol (where the
entity performing the digital signature was given no opportunity to
view the contents being signed). they were using a hardware token
implementation ... and they were assuming that the verification of
the digital signature implied some sort of "something you have"
authentication. however there was nothing in the infrastructure that
provided certification and/or proof that the private key was kept and
maintained in a hardware token ... so there was no proof as to the
level of integrity and/or level of trust that the relying party could
place in the verification of that digital signature
b) signing authorization documents (using the same tokens that were
used in the authe

Re: dual-use digital signature vulnerability

2004-07-18 Thread Amir Herzberg
Anne & Lynn Wheeler wrote:
ok, this is a long posting about what i might be able to reasonably assume
if a digital signature verifies (posting to the c.p.k newsgroup):
... skipped (it was long :-)
the dual-use comes up when the person is "signing" random challenges as 
purely a means of authentication w/o any requirement to read the 
contents. Given such an environment, an attack might be sending some 
valid text in lieu of random data for signature. Then the signer may 
have a repudiation defense that he hadn't signed the document (as in the 
legal sense of signing), but that it must have been a dual-use attack on 
his signature (he had signed it believing it to be random data as part of 
an authentication protocol)
I don't see here any problem or attack. Indeed, there is a difference 
between a signature in the crypto sense and legally-binding signatures. 
The latter are defined in one of two ways. One is by the `digital 
signature` laws in different countries/states; that approach is often 
problematic, since it is quite tricky to define in a general law a 
binding between a person or organization and a digital signature. The 
other way however is fine, imho: define the digital signature in a 
(`regular`) contract between the parties. The contract defines what the 
parties agree to be considered as equivalent to their (physical) 
signature, with well defined interpretation and restrictions.

--
Best regards,
Amir Herzberg
Associate Professor, Computer Science Dept., Bar Ilan University
http://amirherzberg.com (information and lectures in cryptography & 
security)



dual-use digital signature vulnerability

2004-07-16 Thread Anne & Lynn Wheeler
ok, this is a long posting about what i might be able to reasonably assume
if a digital signature verifies (posting to the c.p.k newsgroup):
http://www.garlic.com/~lynn/2004h.html#14
basically, if the relying-party has certified the environment that houses 
the private key and the environment that the digital signature was done in 
... then the verification of the digital signature might be assumed to imply 
one-factor or possibly two-factor authentication (i.e. if the relying-party 
has certified that a private key is housed in a secure hardware token and 
can never leave that hardware token, then the verification of the digital 
signature might imply one-factor, "something you have" authentication).

that establishes the basis for using digital signature for authentication 
purposes ... being able to assume that verification of the digital 
signature possibly implies "something you have" authentication (or 
something similar).

just the verification of the digital signature, however, doesn't do 
anything to establish any implication about a legal signature, where the 
"signer" is assumed to have read and agreed to the contents of the thing 
being signed (intention to sign the content of the document as agreement, 
approval, and/or authorization).

let's assume for argument's sake that some sort of environment can be 
certified that provides a relying party some reasonable assurance that the 
signer has, in fact, read and is indicating agreement, approval, and/or 
authorization ... then there might possibly be the issue of the dual-use 
vulnerability.

the dual-use comes up when the person is "signing" random challenges as 
purely a means of authentication w/o any requirement to read the contents. 
Given such an environment, an attack might be sending some valid text in 
lieu of random data for signature. Then the signer may have a repudiation 
defense that he hadn't signed the document (as in the legal sense of 
signing), but that it must have been a dual-use attack on his signature (he 
had signed it believing it to be random data as part of an authentication 
protocol).
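
The attack flow can be sketched in a few lines (HMAC with a demo key stands in for the victim's private-key signing, purely so the sketch runs):

```python
import os, hmac, hashlib

VICTIM_KEY = b"victim-key"   # toy stand-in for the victim's private key

def victim_signs_challenge(challenge: bytes) -> bytes:
    # The victim signs whatever "random" challenge arrives, without reading it.
    return hmac.new(VICTIM_KEY, challenge, hashlib.sha256).digest()

def verify(msg: bytes, sig: bytes) -> bool:
    expected = hmac.new(VICTIM_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)

honest = os.urandom(32)
_ = victim_signs_challenge(honest)       # a normal authentication round

# The attack: valid text sent in lieu of random data.
contract = b"I agree to pay the bearer $1,000,000"
sig = victim_signs_challenge(contract)

# The attacker now holds (contract, sig): a signature over meaningful text,
# indistinguishable from one made after reading and agreeing.
assert verify(contract, sig)
```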

--
Anne & Lynn Wheeler http://www.garlic.com/~lynn/ 
