Re: example: secure computing kernel needed

2003-12-29 Thread David Wagner
Ed Reed wrote:
>There are many business uses for such things, like checking to see
>if locked down kiosk computers have been modified (either hardware
>or software),

I'm a bit puzzled why you'd settle for detecting changes when you
can prevent them.  Any change you can detect, you can also prevent
before it even happens.  So the problem statement sounds a little
contrived to me -- but I don't really know anything about kiosks,
so maybe I'm missing something.

In any case, this is an example of an application where owner-directed
remote attestation suffices, so one could support this application
without enabling any of the alleged harms.  (See my previous email.)
In other words, this application is consistent with an "Owner Override".

>verifying that users have not exercised their god-given
>right to install spy-ware and viruses (since they're running with
>administrative privileges, aren't they?),

It sounds like the threat model is that the sysadmins don't trust the
users of the machine.  So why are the sysadmins giving users administrator
or root access to the machine?  It sounds to me like the real problem
here is a broken security architecture that doesn't match up to the
security threat, and remote attestation is a hacked-up patch that's not
going to solve the underlying problems.  But that's just my reaction,
without knowing more.

In any case, this application is also consistent with owner-directed
remote attestation or an "Owner Override".

>and satisfying a consumer that the server they're connected to is (or
>isn't) running software that has adequate security domain protections
>to protect the user's data (perhaps backup files) the user entrusts to
>the server.

If I don't trust the administrators of that machine to protect sensitive
data appropriately, why would I send sensitive data to them?  I'm not
sure I understand the threat model or the problem statement.

But again, this seems to be another example application that's compatible
with owner-directed remote attestation or an "Owner Override".


Summary: None of these applications require full-strength
(third-party-directed) remote attestation.  It seems that an "Owner
Override" would not disturb these applications.



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-29 Thread David Wagner
Rick Wash wrote:
>There are many legitimate uses of remote attestation that I would like to
>see.  For example, as a sysadmin, I'd love to be able to verify that my
>servers are running the appropriate software before I trust them to access
>my files for me.  Remote attestation is a good technical way of doing that.

This is a good example, because it brings out that there are really
two different variants of remote attestation.  Up to now, I've been
lumping them together, but I shouldn't have been.  In particular, I'm
thinking of owner-directed remote attestation vs. third-party-directed
remote attestation.  The difference is who wants to receive assurance of
what software is running on a computer; the former mechanism lets you
convince the owner of that computer, while the latter lets you convince
third parties.

If I understand correctly, TCPA and Palladium provide third-party-directed
remote attestation.  Intel, or Dell, or someone like that will generate
a keypair, embed it inside the trusted hardware that comes with your
computer, and you (the owner) are never allowed to learn the corresponding
private key.  This allows your computer to prove to Intel, or Dell, or
whoever, what software is running on your machine.  You can't lie to them.

In owner-directed remote attestation, you (the owner) would generate the
keypair and you (the owner) would learn the private key -- not Intel, or
Dell, or whoever.  This allows your computer to prove to you what software
is running on your machine.  However, you can't use this to convince Intel,
or Dell, or anyone else, what software is running on your machine, unless they
know you and trust you.
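
To make the distinction concrete, here is a minimal sketch in Python
(toy textbook RSA with absurdly small numbers, and made-up measurement
names -- nothing like the real TCPA message formats):

    import hashlib

    # Toy attestation key pair (textbook RSA; p = 61, q = 53).
    n, e, d = 3233, 17, 2753           # d is the attestation private key

    def measure(software: bytes) -> int:
        # Stand-in for the hash of the machine's software stack.
        return int.from_bytes(hashlib.sha256(software).digest(), "big") % n

    def attest(software: bytes) -> int:
        # A signed claim: "this machine is running `software`".
        return pow(measure(software), d, n)

    def verify(software: bytes, sig: int) -> bool:
        return pow(sig, e, n) == measure(software)

    sig = attest(b"linux-2.4.23 + apache-1.3.29")
    assert verify(b"linux-2.4.23 + apache-1.3.29", sig)

The only difference between the two variants is who knows d.  If the
owner knows d (owner-directed), the owner can attest to any measurement
at all, so the signature convinces nobody but the owner.  If d lives
only inside the hardware (third-party-directed), the signature convinces
Intel, or Dell, or whoever -- and the owner cannot lie.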

I -- and others -- have been arguing that it is remote attestation that
is the key, from a policy point of view; it is remote attestation that
enables applications like DRM, software lock-in, and the like.  But this
is not quite right.  Rather, it is the presence of third-party-directed
remote attestation that enables these alleged harms.

Owner-directed remote attestation does not enable these harms.  If I know
the private key used for attestation on my own machine, then remote attestation
is not very useful to (say) Virgin Records for DRM purposes, because I could
always lie to Virgin about what software is running on my machine.  Likewise,
owner-directed remote attestation doesn't come with the risk of software
lock-in that third-party-directed remote attestation creates.

So it seems that third-party-directed remote attestation is really where
the controversy is.  Owner-directed remote attestation doesn't have these
policy tradeoffs.

Finally, I'll come back to the topic you raised by noting that your
example application is one that could be supported with owner-directed
remote attestation.  You don't need third-party-directed remote
attestation to support your desired use of remote attestation.  So, TCPA
or Palladium could easily fall back to only owner-directed attestation
(not third-party-directed attestation), and you'd still be able to verify the
software running on your own servers without incurring new risks of DRM,
software lock-in, or whatever.

I should mention that Seth Schoen's paper on Trusted Computing anticipates
many of these points and is well worth reading.  His notion of "owner
override" basically converts third-party-directed attestation into
owner-directed attestation, and thereby avoids the policy risks that so
many have brought up.  If you haven't already read his paper, I highly
recommend it.  http://www.eff.org/Infra/trusted_computing/20031001_tc.php



Re: example: secure computing kernel needed

2003-12-29 Thread David Wagner
Jerrold Leichter wrote:
>|> *Any* secure computing kernel that can do
>|> the kinds of things we want out of secure computing kernels, can also
>|> do the kinds of things we *don't* want out of secure computing kernels.

David Wagner wrote:
>| It's not hard to build a secure kernel that doesn't provide any form of
>| remote attestation, and almost all of the alleged harms would go away if
>| you remove remote attestation.  In short, you *can* have a secure kernel
>| without having all the kinds of things we don't want.

Jerrold Leichter wrote:
>The question is not whether you *could* build such a thing - I agree, it's
>quite possible.  The question is whether it would make enough sense that it
>would gain wide usage.  I claim not.

Good.  I'm glad we agree that one can build a secure kernel without
remote attestation; that's progress.  But I dispute your claim that remote
attestation is critical to securing our machines.  As far as I can see,
remote attestation seems (with some narrow exceptions) pretty close to
worthless for the most common security problems that we face today.

Your argument is premised on the assumption that it is critical to defend
against attacks where an adversary physically tampers with your machine.
But that premise is wrong.

Quick quiz: What's the dominant threat to the security of our computers?
It's not attacks on the hardware, that's for sure!  Hardware attacks
aren't even in the top ten.  Rather, our main problems are with insecure
software: buffer overruns, configuration errors, you name it.

When's the last time someone mounted a black bag operation against
your computer?  Now, when's the last time a worm attacked your computer?
You got it-- physical attacks are a pretty minimal threat for most users.

So, if software insecurity is the primary problem facing us, how does
remote attestation help with software insecurity?  Answer: It doesn't, not
that I can see, not one bit.  Sure, maybe you can check what software is
running on your computer, but that doesn't tell you whether the software
is any good.  You can check whether you're getting what you asked for,
but you have no way to tell whether what you asked for is any good.

Let me put it another way.  Take a buggy, insecure application, riddled
with buffer overrun vulnerabilities, and add remote attestation.  What do
you get?  Answer: A buggy, insecure application, riddled with buffer
overrun vulnerabilities.  In other words, remote attestation doesn't
help if your trusted software is untrustworthy -- and that's precisely
the situation we're in today.  Remote attestation just doesn't help with
the dominant threat facing us right now.

For the typical computer user, the problems that remote attestation solves
are in the noise compared to the real problems of computer security
(e.g., remotely exploitable buffer overruns in applications).  Now,
sure, remote attestation is extremely valuable for a few applications,
such as digital rights management.  But for typical users?  For most
computer users, rather than providing an order of magnitude improvement
in security, it seems to me that remote attestation will be an epsilon
improvement, at best.



Re: I don't know PAIN...

2003-12-29 Thread Jerrold Leichter
| On Dec 27, 2003, at 10:01 AM, Ben Laurie wrote:
| >> "Note that there is no theoretical reason that it should be possible
| >> to figure out the public key given the private key, either, but it so
| >> happens that it is generally possible to do so"
| >> So what's this "generally possible" business about?
| >
| > Well, AFAIK its always possible, but I was hedging my bets :-) I can
| > imagine a system where both public and private keys are generated from
| > some other stuff which is then discarded.
|
| Sure.  Imagine RSA where instead of a fixed public exponent (typically
| 2^16 + 1), you use a large random public exponent.  After computing the
| private exponent, you discard the two primes and all other intermediate
| information, keeping only the modulus and the two exponents.  Now it's
| very hard to compute either exponent from the other, but they do
| constitute a public/private key-pair.  The operations will be more
| expensive than in standard RSA, where one party has a small exponent and
| the other party has an arithmetical shortcut, but still far less
| computation than cracking the other party's key.
This doesn't work for RSA because given a single private/public key pair, you
can factor.
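
(For completeness, the standard reduction, sketched in Python with toy
numbers; this is the well-known trick of pulling a nontrivial square
root of 1 out of e*d - 1:)

    import math, random

    def factor_given_key_pair(n, e, d):
        # e*d = 1 (mod lambda(n)), so g**(e*d - 1) = 1 for any g coprime to n.
        k, t = e * d - 1, 0
        while k % 2 == 0:              # write e*d - 1 = 2**t * k, k odd
            k, t = k // 2, t + 1
        while True:
            g = random.randrange(2, n - 1)
            y = pow(g, k, n)
            if y in (1, n - 1):
                continue               # useless g, try another
            for _ in range(t):
                x = pow(y, 2, n)
                if x == 1:             # y*y = 1 but y != +-1, so y-1 splits n
                    p = math.gcd(y - 1, n)
                    return p, n // p
                if x == n - 1:
                    break              # dead end, try another g
                y = x

    print(factor_given_key_pair(3233, 17, 2753))   # toy key: recovers 61 and 53
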
-- Jerry



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-29 Thread bear


On Tue, 23 Dec 2003, Seth David Schoen wrote:

>When attestation is used, it likely will be passed in a service like
>HTTP, but in a documented way (for example, using a protocol based on
>XML-RPC).  There isn't really any security benefit obtained by hiding
>the content of the attestation _from the party providing it_!

It's not only the parties who are interested in security that we're
worried about.  There is an advantage in profiling and market research,
so I expect anyone able to effectively subvert the protocols to attempt
to hide the content of remote attestation.

Bear



Re: Repudiating non-repudiation

2003-12-29 Thread robin benson
On 29 Dec 2003, at 19:29, Paul A.S. Ward wrote:

> This first case is actually quite amusing.  I was recently the subject
> of identity theft.  Specifically, the thieves had my SSN (SIN,
> actually, since it is in Canada), and my driver's licence number.  They
> produced a fake driver's licence, and used it to open bank accounts in
> my name.  When this all came to light, the bank wanted a notarized
> document that said that I did not open these accounts or know anything
> about them.  And what was required for notarization?  I had to go to
> city hall and get someone who had never met me before to look at my
> photo ID (which was my driver's licence) and sign the form saying it
> was me!  Great system!

A friend of mine went through the same city hall process in the US, 
although for a different reason (still in the context of proof of 
identity) and was given a dot-matrix printout which was then considered 
good enough for somebody else who had previously declared a current 
passport as falling short of the mark.

Robin
-
robin benson
[EMAIL PROTECTED]
+44 114 2303764
+44 7967 354544
hammerhead media limited
www.hammerheadmedia.co.uk


Re: I don't know PAIN...

2003-12-29 Thread Eric Rescorla
Jerrold Leichter <[EMAIL PROTECTED]> writes:

> | > "Note that there is no theoretical reason that it should be
> | > possible to figure out the public key given the private key,
> | > either, but it so happens that it is generally possible to
> | > do so"
> | >
> | > So what's this "generally possible" business about?
> |
> | Well, AFAIK its always possible, but I was hedging my bets :-) I can
> | imagine a system where both public and private keys are generated from
> | some other stuff which is then discarded.
> That's true of RSA!  The public and private keys are indistinguishable - you
> have a key *pair*, and designate one of the keys as public.  Computing either
> key from the other is as hard as factoring the modulus.  (Proof:  Given both
> keys in the pair, it's easy to factor.)

It's worth pointing out that this isn't how RSA is used in practice,
for two reasons:

(1) Most everyone uses one of 3 popular RSA public exponents
(3, 17, 65537) and then computes the private key from p and q.
(2) PKCS #1 RSAPrivateKey structures contain the public key.
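
(Point (1) in a few lines of Python -- toy primes, for illustration
only; the modular inverse via pow(e, -1, phi) needs Python 3.8+:)

    p, q = 61, 53                       # real keys use primes of 512+ bits
    n, e = p * q, 65537                 # the usual fixed public exponent
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent from p and q
    assert pow(pow(42, e, n), d, n) == 42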

-Ekr

-- 
Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: Outsourced Trust (was Re: Difference between TCPA-Hardware and a smart card and something else before

2003-12-29 Thread Anne & Lynn Wheeler
On Mon, 2003-12-29 at 10:16, Rich Salz wrote:
> Not sure what the guy meant by that.  But yes, SAML flows are "just
> like" Kerberos flows.  And Liberty and WS-Federation look a lot like DCE
> cross-cell (er, Kerberos inter-realm) flows.  After all, there are only
> so many ways to do secure online trusted third-party authentication.
>   /r$

talking to the guy after the presentation, i got the impression that
they probably exactly copied the kerberos flows ... didn't even try to
come up with something that turned out to be similar.

there were 30-40 people in the audience and I expected more of them to
have participated in the discussion about kerberos vis-a-vis saml.

kerberos had come out of project athena that had been substantially
jointly funded by two corporations ... project athena had a director
from mit and two assistant directors, one from each of the funding
corporations. one of them i had worked with for a long time when at
science center at 545 tech sq. (random refs):
http://www.garlic.com/~lynn/subtopic.html#545tech

during the period we were doing hsdt & ha/cmp ... my wife and I also got
to go by and do audits of progress of various project athena activities
(including kerberos). 
One visit we had a lengthy overview and discussion of the recently
(then) developed cross-domain protocol.

-- 
Anne & Lynn Wheeler -  http://www.garlic.com/~lynn/ 



Re: Repudiating non-repudiation

2003-12-29 Thread Paul A.S. Ward
Jerrold Leichter wrote:

> D. Self-authentication: A few types of documents are
> "self-authenticating," because they are so likely to be what they
> seem, that no testimony or other evidence of their genuineness need be
> produced. [474 - 475]
>
> 	1. State provisions: Under most state statutes, the following
> 	are self-authenticating: (1) deeds and other instruments that
> 	are notarized;

This first case is actually quite amusing.  I was recently the subject
of identity theft.  Specifically, the thieves had my SSN (SIN, actually,
since it is in Canada), and my driver's licence number.  They produced a
fake driver's licence, and used it to open bank accounts in my name.
When this all came to light, the bank wanted a notarized document that
said that I did not open these accounts or know anything about them.
And what was required for notarization?  I had to go to city hall and
get someone who had never met me before to look at my photo ID (which
was my driver's licence) and sign the form saying it was me!  Great
system!

--

Paul A.S. Ward, Assistant Professor   Email: [EMAIL PROTECTED]
University of Waterloo                       [EMAIL PROTECTED]
Department of Computer Engineering    Tel: +1 (519) 888-4567 ext. 3127
Waterloo, Ontario                     Fax: +1 (519) 746-3077
Canada N2L 3G1                        URL: http://www.ccng.uwaterloo.ca/~pasward




Re: [camram-spam] Re: Microsoft publicly announces Penny Black PoW postage project

2003-12-29 Thread Eric S. Johansson
Bill Stewart wrote:

> At 09:37 PM 12/26/2003 -0500, Adam Back wrote:
>
>> The 2nd memory-bound paper [3] (by Dwork, Goldberg, and Naor) finds a
>> flaw in the first memory-bound function paper (by Abadi, Burrows,
>> Manasse, and Wobber) which admits a time-space trade-off, proposes an
>> improved memory-bound function, and also in the conclusion suggests
>> that memory-bound functions may be more vulnerable to hardware attack
>> than computationally bound functions.  Their argument on that latter
>> point is that the hardware attack is an economic attack and it may be
>> that memory-bound functions are more vulnerable to hardware attack
>> because you could in their view build cheaper hardware more []
>
> One nice thing about memory-bound functions is that, while spammers
> could build custom hardware farms in Florida or China, a large amount
> of spam is delivered by hijacked PCs or abused relays/proxies, which
> run on standard PC hardware, not custom, so it'll still be slow.

do the math:

        d * b
        -----
          s

where:  d = stamp delay in seconds
        s = spam size in bytes
        b = bandwidth in bytes per second

assuming unlimited bandwidth, if a stamp spammer compromises roughly the
same number of PCs as were compromised during the last worm attack
(350,000) at 15 seconds per stamp, you end up with 1.4 million stamps
per minute or 2 billion stamps per day.  When you compare that to the
amount of spam generated per day (high hundreds of billions to low
trillions), they are still a few machines short of what is necessary to
render stamps totally useless.  Yes, maybe one spammer could muster a
few machines to be a nuisance but that's the extent of it.
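
(the same arithmetic as a quick Python check -- spam volume assumed at
500 billion messages per day, the middle of the range above:)

    zombies        = 350_000     # PCs compromised by the last big worm
    stamp_delay    = 15          # seconds of work per stamp
    stamps_per_min = zombies / stamp_delay * 60    # ~1.4 million
    stamps_per_day = stamps_per_min * 60 * 24      # ~2.0 billion
    spam_per_day   = 500e9       # assumed figure, not from the post
    print(stamps_per_day / spam_per_day)           # ~0.004 -- under 1%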

When dealing with hardware acceleration, it becomes a hardware war.  If
they can make custom hardware, Taiwan can make us USB stamp generators,
postage goes through a period of rapid inflation, and the world goes
back to where it was before, with no advantage to spammers.

> Penny Black or any other system that involves tweaking the email
> protocols gets a one-time win in blocking spam, because older
> badly-administered mail relays won't be running the new system - if
> their administrators upgrade them to support the new features,
> hopefully that will turn off any relay capabilities.  That doesn't
> apply to cracked zombie machines, since the crackers can install
> whatever features they need, but at least all of those Korean
> cable-modem boxes won't run it.
again, work the numbers to figure out the basic model and where the 
threat roughly lives.  Personally, I think that any system that tweaks 
the e-mail protocols basically loses for reasons of adoption and 
backwards compatibility.  I've put a lot of effort into the camram 
implementation to create significant backwards compatibility without 
leaving someone vulnerable to spam.

also, zombied machines are a threat, but the beauty of any proof-of-work
system is that the machine will start overheating if it's used too much
and the CPU load will become noticeable to the user.  So in a way,
stamp-generating zombies might actually do the net some good and take
out these machines.  Or cause another blackout in New York State...

---eric

--
Speech recognition in use.  Incorrect endings, words, and case is
closer than it appears


Re: I don't know PAIN...

2003-12-29 Thread Jerrold Leichter
| > "Note that there is no theoretical reason that it should be
| > possible to figure out the public key given the private key,
| > either, but it so happens that it is generally possible to
| > do so"
| >
| > So what's this "generally possible" business about?
|
| Well, AFAIK its always possible, but I was hedging my bets :-) I can
| imagine a system where both public and private keys are generated from
| some other stuff which is then discarded.
That's true of RSA!  The public and private keys are indistinguishable - you
have a key *pair*, and designate one of the keys as public.  Computing either
key from the other is as hard as factoring the modulus.  (Proof:  Given both
keys in the pair, it's easy to factor.)

Merkle's knapsack systems (which didn't work out for other reasons) had the
property that the public key was computed directly from the private key.
(The private key had a special form, while the public key was supposed to
look like a random instance of the knapsack problem.)

Obviously, a system in which the private key could be computed easily from
the public key would not be useful for encryption; so we've covered all the
meaningful "is computable from" bases.
-- Jerry



Re: Outsourced Trust (was Re: Difference between TCPA-Hardware and a smart card and something else before

2003-12-29 Thread Rich Salz
I asked the guy making the presentation about the similarity to Kerberos
message flows and he said something to the effect of "ah yes, kerberos."
Not sure what the guy meant by that.  But yes, SAML flows are "just
like" Kerberos flows.  And Liberty and WS-Federation look a lot like DCE
cross-cell (er, Kerberos inter-realm) flows.  After all, there are only
so many ways to do secure online trusted third-party authentication.
	/r$
--
Rich Salz, Chief Security Architect
DataPower Technology   http://www.datapower.com
XS40 XML Security Gateway   http://www.datapower.com/products/xs40.html
XML Security Overview  http://www.datapower.com/xmldev/xmlsecurity.html



Re: I don't know PAIN...

2003-12-29 Thread Matt Crawford
On Dec 27, 2003, at 10:01 AM, Ben Laurie wrote:
"Note that there is no theoretical reason that it should be possible 
to figure out the public key given the private key, either, but it so 
happens that it is generally possible to do so"
So what's this "generally possible" business about?
Well, AFAIK its always possible, but I was hedging my bets :-) I can 
imagine a system where both public and private keys are generated from 
some other stuff which is then discarded.
Sure.  Imagine RSA where instead of a fixed public exponent (typically
2^16 + 1), you use a large random public exponent.  After computing the
private exponent, you discard the two primes and all other intermediate
information, keeping only the modulus and the two exponents.  Now it's
very hard to compute either exponent from the other, but they do
constitute a public/private key-pair.  The operations will be more
expensive than in standard RSA, where one party has a small exponent and
the other party has an arithmetical shortcut, but still far less
computation than cracking the other party's key.
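
(A sketch of that construction in Python, with toy numbers -- though, as
noted elsewhere in this thread, anyone holding both exponents can still
factor the modulus:)

    import math, random

    p, q = 61, 53                      # toy primes, illustration only
    n, phi = p * q, (p - 1) * (q - 1)

    e = random.randrange(3, phi)       # large random public exponent
    while math.gcd(e, phi) != 1:
        e = random.randrange(3, phi)
    d = pow(e, -1, phi)                # Python 3.8+ modular inverse
    del p, q, phi                      # discard the intermediate information

    assert pow(pow(42, e, n), d, n) == 42   # the pair still works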



Re: Repudiating non-repudiation

2003-12-29 Thread Jerrold Leichter
Ian's message gave a summary that's in accord with my understanding of
how courts work.  Since lawyers learn by example - and the law grows by
example - here's a case that I think closely parallels the legal issues
in repudiation of digital signature cases.  The case, which if I
remember right (from hearing about it 20 years ago from a friend in law
school) is known informally as the Green Giant Peas case, forms one of
the bases of modern tort liability.

The beginning of the 20th century led to the first mass production,
distribution, and marketing of foods.  Before that, you bought "peas".
Now, you could buy a can of "Green Giant Peas", sold by a large
manufacturer who sold through stores all over the place, and advertised
for your business.

Someone bought a can of Green Giant Peas at a local store.  The can
contained metal shavings.  The purchaser was injured, and sued Green
Giant.  One of the defenses Green Giant raised was:  Just because it
says Green Giant on the label doesn't *prove* Green Giant actually
packed the stuff!  The plaintiff must first prove that these peas really
were packed by Green Giant.  Such defenses had worked in the past -
there were many of the same general flavor, insisting that no recovery
should be possible unless the plaintiff could reach a level of proof
that was inherently unreachable.  In this case, the courts finally threw
out this defense.  I can't find the actual case on line, but at
http://www.lawspirit.com/legalenglish/handbook/evid08.htm (a curious
site - it seems to be mainly in Chinese) the following text appears:

D. Self-authentication: A few types of documents are
"self-authenticating," because they are so likely to be what they
seem, that no testimony or other evidence of their genuineness need be
produced. [474 - 475]

1. State provisions: Under most state statutes, the following
are self-authenticating: (1) deeds and other instruments that
are notarized; (2) certified copies of public records (e.g., a
certified copy of a death certificate); and (3) books of
statutes which appear to be printed by a government body
(e.g., a statute book appearing to be from a sister state or
foreign country).

2. Federal Rules: FRE 902 recognizes the above three classes,
and also adds: (1) all "official publications" (not just
statutes); (2) newspapers or periodicals; and (3) labels,
signs, or other inscriptions indicating "ownership, control,
or origin" (e.g., a can of peas bearing the label "Green Giant
Co." is self-authenticating as having been produced by Green
Giant Co.).

"Self-authenticating" here seems very close in concept to what we are trying
to accomplish with digital signatures - and the Green Giant example shows how
the law grows to encompass new kinds of objects.  But it's also important to
look at how "self-authentication" is actually implemented.  Nothing here is
absolute.  What we have is a shift of the burden of proof.  In general, to
introduce a document as evidence, the introducer has to provide some
proof that the document is what it purports to be.  No such proof is
required for self-authenticating documents.  Instead, the burden shifts to
the opposing console to offer proof that the document is *not* what it
purports to be.  This is as far as the courts will ever be willing to go.

-- Jerry



Re: Non-repudiation (was RE: The PAIN mnemonic)

2003-12-29 Thread Ben Laurie
Amir Herzberg wrote:

> At 04:20 25/12/2003, Carl Ellison wrote:
> ...
>> If you want to use cryptography for e-commerce, then IMHO you need a
>> contract signed on paper, enforced by normal contract law, in which
>> one party lists the hash of his public key (or the whole public key)
>> and says that s/he accepts liability for any digitally signed
>> statement that can be verified with that public key.
>
> Of course! I fully agree; in fact the first phase in the `trusted
> delivery layer` protocols I'm working on is exactly that - ensuring
> that the parties (using some external method) agreed on the keys and
> the resulting liability.  But when I define the specifications, I use
> `non-repudiation` terms for some of the requirements.  For example, the
> intuitive phrasing of the Non-Repudiation of Origin (NRO) requirement
> is: if any party outputs evidence evid s.t. valid(agreement, evid,
> sender, dest, message, time-interval, NRO), then either the sender is
> corrupted or the sender originated message to the destination dest
> during the indicated time-interval.  Notice of course that sender here
> is an entity in the protocol, not the human being `behind` it.  Also
> notice this is only an intuitive description, not the formal
> specifications.

What you have here is evidence of origin, not non-repudiation.

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff


Re: Non-repudiation (was RE: The PAIN mnemonic)

2003-12-29 Thread Ben Laurie
Carl Ellison wrote:

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Stefan Kelm
> Sent: Tuesday, December 23, 2003 1:44 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Non-repudiation (was RE: The PAIN mnemonic)
>
>> Ah. That's why they're trying to rename the corresponding keyUsage bit
>> to "contentCommitment" then:
>>
>>  http://www.pki-page.info/download/N12599.doc
>>
>> :-)
>>
>> Cheers,
>>
>> 	Stefan.
>
> Maybe, but that page defines it as:
>
> --
> contentCommitment: for verifying digital signatures which are intended
> to signal that the signer is committing to the content being signed.
> The precise level of commitment, e.g. "with the intent to be bound",
> may be signaled by additional methods, e.g. certificate policy.
>
> Since a content commitment signing is considered to be a digitally
> signed transaction, the digitalSignature bit need not be set in the
> certificate.  If it is set, it does not affect the level of commitment
> the signer has endowed in the signed content.
>
> Note that it is not incorrect to refer to this keyUsage bit using the
> identifier nonRepudiation.  However, the use of this identifier has
> been deprecated.  Regardless of the identifier used, the semantics of
> this bit are as specified in this standard.
> --
>
> Which still refers to the "signer" having an "intent to be bound".  One
> cannot bind a key to anything, legally, so the signer here must be a
> human or organization rather than a key.  It is that unjustifiable
> linkage from the actions of a key to the actions of one or more humans
> that needs to be eradicated from the literature.

This is going a little far, isn't it? If the human controls the setting 
of the bit, then it is signalling their intent.
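
(For what it's worth, the rename has since surfaced in toolkits; a
sketch assuming the pyca/cryptography Python API, where the bit is
spelled content_commitment:)

    from cryptography import x509

    # The keyUsage extension with the renamed nonRepudiation bit set.
    usage = x509.KeyUsage(
        digital_signature=True,
        content_commitment=True,       # formerly nonRepudiation
        key_encipherment=False,
        data_encipherment=False,
        key_agreement=False,
        key_cert_sign=False,
        crl_sign=False,
        encipher_only=False,
        decipher_only=False,
    )

Whether setting that bit really signals the signer's intent is, of
course, exactly the question under discussion.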

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff


Re: Non-repudiation (was RE: The PAIN mnemonic)

2003-12-29 Thread Ben Laurie
Carl Ellison wrote:
> If you want to use cryptography for e-commerce, then IMHO you need a
> contract signed on paper, enforced by normal contract law, in which one
> party lists the hash of his public key (or the whole public key) and
> says that s/he accepts liability for any digitally signed statement
> that can be verified with that public key.

One of the things my paper discusses is that under UK law a signature on
an email is just as binding as on paper, because contracts are all about
intent to be bound and not the medium in which they are captured.  Of
course, if you want to repudiate an email it is probably easier,
especially if you signed it by typing your name at the bottom (yes, this
is a valid signature under UK law), but that's a judgement call on the
part of the relying party.

> Any attempt to just assume that someone's acceptance of a PK
> certificate amounts to that contract is extremely dangerous, and might
> even be seen as an attempt to victimize a whole class of consumers.

Agreed - as I say, its all about intent and reliance.  Nothing is
automatic.

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff


Re: Non-repudiation (was RE: The PAIN mnemonic)

2003-12-29 Thread Ben Laurie
Amir Herzberg wrote:

> Ian proposes below two draft-definitions for non-repudiation - legal
> and technical.  Lynn also sent us a bunch of definitions.  Let's focus
> on the technical/crypto one for now - after all this is a crypto forum
> (I agree the legal one is also somewhat relevant to this forum).
>
> In my work on secure e-commerce, I use (technical, crypto) definitions
> of non-repudiation, and consider these as critical to many secure
> e-commerce problems/scenarios/requirements/protocols.  Having spent
> considerable time and effort on appropriate definitions and analysis
> (proofs), I was/am a bit puzzled and alarmed to find that others in our
> community seem so vehemently against non-repudiation.
>
> Of course, like other technical terms, there can be many variant
> definitions; that is not really a problem (the community will gradually
> focus on a few important and distinct variants).  Also it's an
> unavoidable fact of life (imho) that other communities (e.g. legal) use
> the same term in a somewhat different meaning.
>
> So my question is only to people like Ben and Carl who have expressed,
> if I understood correctly, objection to any form of technical, crypto
> definition of non-repudiation.  I repeat: do you really object and if
> so why?

I object because its not a technical, crypto concept.  It doesn't matter
what you do to try to achieve non-repudiation technically, I can always
repudiate it - all I have to do is say "I didn't sign that" or "it
wasn't me that initiated that transaction".

> What of applications/scenarios that seem to require non-repudiation,
> e.g. certified mail, payments, contract signing,...?

These do not require non-repudiation in the existing world, why do they
suddenly need it when they become electronic?

What I presume you are trying to get at is to distinguish the use of a
key with an intent to bind you rather than with an intent to provide
authentication (or some other service signing can provide).  This is not
non-repudiation, it's something else, and it only confuses matters to
use the wrong word for it.

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff


Re: Microsoft publicly announces Penny Black PoW postage project

2003-12-29 Thread R. A. Hettinga
At 11:49 AM -0800 12/28/03, Jim Gillogly wrote:
>wouldn't it be preferable to prove that you've contributed
>the same amount of power to a useful compute-bound project, such as
>NFSNET.org or GIMPS or [EMAIL PROTECTED] or [EMAIL PROTECTED]

Simple economics. If you're going to go so far as using some cryptographic
proof of paying a transfer-price ("work", "good" or otherwise :-)), you
might as well just pay a straight price to the recipient instead, in the
same way that "contributed" cycles for such efforts like the above -- and
open source software, for that matter -- would increase dramatically if
transaction costs were low enough to *pay* for such things as they're
produced.


Certainly the implicit price of "free" email keeps going up. I notice,
lately, that the less-than-a-year-old Bayesian filters on Eudora are
completely circumvented now by messages containing a word-salad of a
hundred random words or so and a web-bug graphic containing the spam
message inside. Frankly, I expect that the only thing of real value is the
web-bug itself, which proves that a message has been received by a working
email address, so it can be sold to other spammers in some greater-fool
market of email lists.



As most people here learned a long time ago, the only real way to solve the
spam problem is with good old fashioned cash, payable to the recipient of
the message: "A white list for my friends, all others pay cash".

Everything else is just barter and transfer pricing. Though I might grant
that such stuff *might* make a good first step to some cash-settled
end-state, I personally think it's a waste of, well, time, if not actual
money. I suppose proving it doesn't work is worth something, though.
And, now that I think of it, stuff like camram and hash cash *did* require
the invention of an exchange protocol of some kind, which is, actually,
quite necessary for exchanging real money for service later on.


Which is one way of saying that doing cash-settled transactions for mail at
the SMTP-protocol level is just not that hard anymore, people. And, the more
spam there is, the easier it becomes. Think of it as the charging of an
economic capacitor or something, and I think lots of people on this list
will be in a position to earn some serious money when it goes off.

Of course, it *would* be fun to calculate what the threshold transaction
cost might be to make this happen or something, but like all financial
experiments, this stuff must be observed in an actual market, or nobody
will believe the data anyway. It's just a matter of integrating, and not
necessarily writing, code, now.


Again, the cost of doing this keeps falling, and the cost of "free" email
keeps going up. Somebody's going to just stick a mint onto an online store
of value and see if it works someday. Then things are going to get
interesting.


Cheers,
RAH

-- 
-
R. A. Hettinga 
The Internet Bearer Underwriting Corporation 
44 Farquhar Street, Boston, MA 02131 USA
"... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience." -- Edward Gibbon, 'Decline and Fall of the Roman Empire'



Re: Microsoft publicly announces Penny Black PoW postage project

2003-12-29 Thread Bill Stewart
At 09:37 PM 12/26/2003 -0500, Adam Back wrote:
> The 2nd memory-bound paper [3] (by Dwork, Goldberg, and Naor) finds a
> flaw in the first memory-bound function paper (by Abadi, Burrows,
> Manasse, and Wobber) which admits a time-space trade-off, proposes an
> improved memory-bound function, and also in the conclusion suggests
> that memory-bound functions may be more vulnerable to hardware attack
> than computationally bound functions.  Their argument on that latter
> point is that the hardware attack is an economic attack and it may be
> that memory-bound functions are more vulnerable to hardware attack
> because you could in their view build cheaper hardware more []

One nice thing about memory-bound functions is that, while spammers
could build custom hardware farms in Florida or China, a large amount
of spam is delivered by hijacked PCs or abused relays/proxies, which
run on standard PC hardware, not custom, so it'll still be slow.
Penny Black or any other system that involves tweaking the email protocols
gets a one-time win in blocking spam, because older badly-administered
mail relays won't be running the new system - if their administrators
upgrade them to support the new features, hopefully that will turn off
any relay capabilities.  That doesn't apply to cracked zombie machines,
since the crackers can install whatever features they need,
but at least all of those Korean cable-modem boxes won't run it.
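
(For comparison, the CPU-bound flavor fits in a dozen lines of Python --
a minimal hashcash-style sketch with an assumed 20-bit target; real
hashcash stamps also carry version, date, and recipient fields:)

    import hashlib, itertools

    def mint(resource: str, bits: int = 20) -> str:
        # Spend ~2**bits hash operations finding a valid stamp.
        for nonce in itertools.count():
            stamp = "%s:%d" % (resource, nonce)
            digest = int.from_bytes(hashlib.sha1(stamp.encode()).digest(), "big")
            if digest >> (160 - bits) == 0:
                return stamp

    def check(stamp: str, bits: int = 20) -> bool:
        # Verifying costs a single hash.
        digest = int.from_bytes(hashlib.sha1(stamp.encode()).digest(), "big")
        return digest >> (160 - bits) == 0

    stamp = mint("[EMAIL PROTECTED]")
    assert check(stamp)

The memory-bound functions discussed above replace the hash loop with
lookups whose cost is dominated by memory latency, which varies far less
across machines than CPU speed does.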



