RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2004-01-04 Thread McMeikan, Andrew
Just some thoughts.

> -----Original Message-----
> From: Anne & Lynn Wheeler [mailto:[EMAIL PROTECTED]
> Sent: Sunday, 21 December 2003 5:40 AM
> To: Ernst Lippe
> Cc: Jerrold Leichter; [EMAIL PROTECTED]
> Subject: Re: Difference between TCPA-Hardware and a smart card (was:
> example: secure computing kernel needed)



> So what might convince institutions to accept a consumer
> presented hardware token for authentication ... as opposed
> to mandating that the only hardware token that they will
> trust are the ones provided by the institution.
> 
> -- 
> Anne & Lynn Wheeler -  http://www.garlic.com/~lynn/ 

Let me twist that paragraph a little.

Hardware tokens are definitely best for the owner; the owner will trust the
token (assuming they can control/load/program it).

Take the goal of everyday people, in their role as consumers, wanting to
trust their token: they should then have full ownership rights (i.e.
under-the-hood control, or a choice of competing secure products).  Why then
would anyone else trust them?

1. It would have to be a nice open standard, not prone to attack.
2. Since it would be owned by the user, it must be cheap to implement.
3. It would need to be widely accepted.
4. It must not be easily destroyed by a hostile transaction.
 
1 - A GPG-signed invoice, approval, receipt process (see the sketch below).
2 - Tough; it might get there if phones could do GPG. Other options would be
a cheap Palm running GPG, or a GPG smartcard.
3 - Not going to happen today. It could be forced by government control, but
a gov. system is unlikely to benefit people, and may cause a black-market
system to rise up.
4 - Non-contact: IR beaming, Bluetooth, induction. Assuming public key
exchange, privacy is maintained.
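
To make point 1 concrete, here is a rough sketch (mine, not any existing
product) of the signed invoice/approval/receipt flow, driving the stock gpg
command line from Python.  The key IDs and messages are placeholders; a real
system would also pin key fingerprints and check *who* signed, not just that
something verifies.

import subprocess

def clearsign(text: str, key_id: str) -> str:
    # Clear-sign a message with the given key (gpg --clearsign).
    result = subprocess.run(
        ["gpg", "--clearsign", "--local-user", key_id],
        input=text.encode(), capture_output=True, check=True)
    return result.stdout.decode()

def verify(signed_text: str) -> bool:
    # Verify a clear-signed message; gpg --verify reads stdin.
    result = subprocess.run(
        ["gpg", "--verify"], input=signed_text.encode(),
        capture_output=True)
    return result.returncode == 0

# Merchant signs an invoice; customer counter-signs it as approval;
# merchant signs the approved order once more as the receipt.
invoice = clearsign("invoice: 1 widget, 10 units", "MERCHANT-KEY-ID")
approval = clearsign(invoice, "CUSTOMER-KEY-ID")  # nested signature
receipt = clearsign(approval, "MERCHANT-KEY-ID")
assert verify(receipt)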

Hardware cost and a lack of publicly perceived need will hold this back.
Once cost and need find a balance, payment processors will start appearing.

Sadly, I see all this happening only as a response to an oppressive
financial transaction law.

Terrorism could be stamped out simply by only allowing loyal people to buy
food; in fact, any undesirables could be stamped out in very short order once
money came under tight control.

Stick an ID chip in people's hands so their token only works for them (to
unlock the private key for signing).
The ID chip could carry a toxin in case the person proves disloyal.

It could be a simple matter to enforce such a system.
1. Ban cash as a terrorist/black-market/tax-avoidance tool.
2. Show the insecurity of existing cash replacements (stolen tokens, lending
to terrorists).
3. Mandate a government-provided solution with the ID chip.
4. Require each public key to be certified by the government.
5. Treat anyone using a key unmarked by the government as a terrorist.
6. Combine with RFIDs in all products to determine who buys items that, in
combination, indicate terrorism.
7. Activate the toxin in terrorists.
8. World free of terror!?!?

Such a system could be implemented in a few short years. World free of
terror by 2006?

The only danger to such a world of peace would be those who refuse
government-signed keys, use their own payment provider, and trade amongst
themselves; they would have to be hunted down separately.

Did I miss anything?

cya,Andrew...




Re: [Fwd: Re: Non-repudiation (was RE: The PAIN mnemonic)]

2004-01-04 Thread Ian Grigg
Ben Laurie wrote:
> 
> My co-author (a lawyer) responds in detail to Ian Grigg's criticisms.


Thanks for that!  As I'm not clear whether the paper is still
seeking further detailed criticism, I've not commented directly
on Mr Bohm's remarks.  For the most part, we are in agreement.

Rather, I'll just quickly mention where I find one large
difference of opinion:

It's pretty apparent that what passes for common sense and
knowledge of the meaning of words in the legal fraternity
doesn't necessarily translate to our world of techies.  I
found the key to this debate was in understanding the full
meaning of the word "repudiate" and that involved careful
scrutiny of several dictionaries.

The same goes for legal concepts such as presumptions,
application of law, and so forth - Mr Bohm nailed me on
my woeful understanding of rebuttals, and he'd have no
trouble nailing the average techie who asserts that private
key signatures prove this or that.  They do no such thing;
they provide evidence.  Yet we still face a decade-old
obsession with constructing cryptographic systems that
purport to prove away all risks.

So, I personally don't accept the argument that common
sense can fill in the gaps.  If common sense and ordinary
knowledge had been available in such liberal doses, we
wouldn't have spent the last decade or so working with
non-repudiation.

But, it is only by going through these discussions that I
feel I now have a much firmer understanding of why non-
repudiation is a crock.  So thank you all!

Which leaves the issue: what do we call the property that
differentiates a private key signature from a MAC or MD?
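
(A quick illustration in Python, using the `cryptography` package -
the names and message are mine, purely for illustration.  With a MAC,
the verifier holds the very key that makes tags, so a valid tag only
shows "one of the key holders did this"; with a signature, the
verifier holds only the public key, so a valid signature is evidence
- not proof - that the private key was used.)

import hmac, hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey)

msg = b"I owe you 100 units"

# MAC: Alice and Bob share k; either of them could have made this tag.
k = b"shared-secret"
tag = hmac.new(k, msg, hashlib.sha256).digest()
assert hmac.compare_digest(
    tag, hmac.new(k, msg, hashlib.sha256).digest())

# Signature: only Alice holds the private key; Bob verifies with a
# public key he cannot sign with.
alice = Ed25519PrivateKey.generate()
sig = alice.sign(msg)
alice.public_key().verify(sig, msg)  # raises InvalidSignature if bad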

iang


PS: to refresh:
http://www.apache-ssl.org/tech-legal.pdf



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2004-01-04 Thread Jerrold Leichter
| David Wagner writes:
|
| > To see why, let's go back to the beginning, and look at the threat
| > model.  If multiple people are doing shared development on a central
| > machine, that machine must have an owner -- let's call him Linus.  Now
| > ask yourself: Do those developers trust Linus?
| >
| > If the developers don't trust Linus, they're screwed.  It doesn't matter how
| > much attestation you throw at the problem, Linus can always violate their
| > security model.  As always, you've got to trust "root" (the system
| > administrator); nothing new here.
| >
| > Consequently, it seems to me we only need to consider a threat model
| > where the developers trust Linus.  (Linus need not be infallible, but the
| > developers should believe Linus won't intentionally try to violate their
| > security goals.)  In this case, owner-directed attestation suffices.
| > Do you see why?  Linus's machine will produce an attestation, signed
| > by Linus's key, of what software is running.  Since the developers trust
| > Linus, they can then verify this attestation.  Note that the developers
| > don't need to trust each other, but they do need to trust the owner/admin
| > of the shared box.  So, it seems to me we can get by without third-party
| > attestation.
|
| You could conceivably have a PC where the developers don't trust
| Linus, but instead trust the PC manufacturer.  The PC manufacturer
| could have made it extremely expensive for Linus to tamper with the PC
| in order to "violate [the developers'] security model".  (It isn't
| logically impossible, it's just extremely expensive.  Perhaps it costs
| millions of dollars, or something.)
Precisely - though see below.

| There are computers like that today.  At least, there are devices that can
| run software, that are highly tamper-resistant, and that can do attestations.
Smart cards are intended to work this way, too.

| (Now there is an important question about what the cost to do a hardware
| attack against those devices would be.)  It seems to me to be a good thing
| that the ordinary PC is not such a device.  (Ryan Lackey, in a talk
| about security for colocated machines, described using devices like
| these for colocation where it's not appropriate or desirable to rely on
| the physical security of the colocated machine.  Of course, strictly
| speaking, all security always relies on physical security.)
This kind of thing goes *way* back.  In the '70s, there was a company - I
think the name was BASIC 4 - that sold a machine with two privileged levels.
The OS ran at level 1 (user code at unprivileged level 2, of course).  There
were some things - like, probably, accounting - that ran at level 0.  Even
with physical access to the machine, it was supposed to be difficult to do
anything to level 0 - unless you had a (physical) key to use in the lock
on the front panel.  The machine was intended as a replacement for the then-
prevalent time-sharing model:  An application developer would buy machines
from the manufacturer, load them with application environments, and sell
application services.  Users of the machines could use the applications with
fast local access, even do development - but could not modify the basic
configuration.  I know the company vanished well before networks got fast
enough, and PCs cheap enough, that the business model stopped making any
sense; but I know nothing of the details.

| I don't know how the key management works in these devices.  If the
| keys used to sign attestations are loaded by (or known to) the device
| owner, it wouldn't help with the case where the device owner is
| untrusted.  If the keys are loaded by the manufacturer, it might
| support a model where the owner is untrusted and the manufacturer is
| trusted.
There's no more reason that the manufacturer has to be trusted than that the
manufacturer of a safe has to be trusted (at least in the sense that neither
needs to know the keys/combination on any particular machine/safe).  If
machines like this are to be built, they should require some special physical
override to allow the keys to be configured.  A key lock is still good
technology for this purpose:  It's a very well-understood technology, and its
simplicity is a big advantage.  A combination lock might be easier to
integrate securely, for the same basic reason that combination locks became
the standard for bank vaults:  No need for an open passageway from the outside
to the inside.  (In the bank vault case, this passageway was a great way to
get nitroglycerin inside the locking mechanism.)  In either case, you could
(like a bank) use a form of secret sharing, so that only a trusted group of
people - with multiple keys, or multiple parts of the combination - could
access the key setup mode.  Given this, there is no reason why a machine fresh
from the manufacturer need have any embedded keys.
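
As a hedged illustration of that last point, here is a minimal Shamir
secret-sharing sketch in Python: the key-setup combination is split among
n custodians so that any k of them can jointly reconstruct it, while fewer
learn nothing.  The parameters and field are illustrative only; a real
device would use a vetted implementation.

import secrets

P = 2**127 - 1  # a Mersenne prime; all arithmetic is in GF(P)

def split(secret: int, n: int, k: int):
    # Split `secret` into n shares; any k of them reconstruct it.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def combine(shares):
    # Lagrange interpolation at x = 0 recovers the secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, n=5, k=3)
assert combine(shares[:3]) == 123456789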

Will machines like this be built?  Probably not, except for special purposes.
The TCPA machines will likely require you (and

Re: Walton's Mountain notaries

2004-01-04 Thread Dan Geer

>   Christmas season is ending - and once again I heard the readings
>   about the edict from Caesar that all people return to their home
>   towns to be counted in a census.  Maybe we can take a lesson from
>   that - and have everyone return to people who have known the
>   person, uninterrupted, from birth to the present in order to get
>   anything notarized.  Anyone who couldn't find such people just
>   couldn't get anything notarized, I guess.

Without having done anything whatsoever to assist,
I'm already pretty well documented in two different
online genealogy sites, and going back several
generations with all the extended family I never
met carefully enumerated.  I'd bet you are, too.

BTW, the guy who played the granddaddy on Walton's
Mountain was my third cousin, twice removed.

--dan



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2004-01-04 Thread Seth David Schoen
David Wagner writes:

> To see why, let's go back to the beginning, and look at the threat
> model.  If multiple people are doing shared development on a central
> machine, that machine must have an owner -- let's call him Linus.  Now
> ask yourself: Do those developers trust Linus?
> 
> If the developers don't trust Linus, they're screwed.  It doesn't matter how
> much attestation you throw at the problem, Linus can always violate their
> security model.  As always, you've got to trust "root" (the system
> administrator); nothing new here.
> 
> Consequently, it seems to me we only need to consider a threat model
> where the developers trust Linus.  (Linus need not be infallible, but the
> developers should believe Linus won't intentionally try to violate their
> security goals.)  In this case, owner-directed attestation suffices.
> Do you see why?  Linus's machine will produce an attestation, signed
> by Linus's key, of what software is running.  Since the developers trust
> Linus, they can then verify this attestation.  Note that the developers
> don't need to trust each other, but they do need to trust the owner/admin
> of the shared box.  So, it seems to me we can get by without third-party
> attestation.

You could conceivably have a PC where the developers don't trust
Linus, but instead trust the PC manufacturer.  The PC manufacturer
could have made it extremely expensive for Linus to tamper with the PC
in order to "violate [the developers'] security model".  (It isn't
logically impossible, it's just extremely expensive.  Perhaps it costs
millions of dollars, or something.)

There are computers like that today.  At least, there are devices that can
run software, that are highly tamper-resistant, and that can do attestations.
(Now there is an important question about what the cost to do a hardware
attack against those devices would be.)  It seems to me to be a good thing
that the ordinary PC is not such a device.  (Ryan Lackey, in a talk
about security for colocated machines, described using devices like
these for colocation where it's not appropriate or desirable to rely on
the physical security of the colocated machine.  Of course, strictly
speaking, all security always relies on physical security.)

I don't know how the key management works in these devices.  If the
keys used to sign attestations are loaded by (or known to) the device
owner, it wouldn't help with the case where the device owner is
untrusted.  If the keys are loaded by the manufacturer, it might
support a model where the owner is untrusted and the manufacturer is
trusted.
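
For what it's worth, the owner-directed flow Wagner describes is easy
to sketch.  Everything below is assumption for illustration - real
attestation hardware measures software through a TPM-like device, and
I'm not claiming this matches any actual TCPA interface:

import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey)

# Linus (the owner) holds a signing key; developers hold its public
# half and a list of software configurations they consider good.
linus_key = Ed25519PrivateKey.generate()
linus_pub = linus_key.public_key()

def attest(software_image: bytes):
    # Owner's machine: sign a digest of what is running.
    digest = hashlib.sha256(software_image).digest()
    return digest, linus_key.sign(digest)

def developer_verify(digest, signature, expected_digests):
    # Developer: trust Linus's key, then check the measurement.
    try:
        linus_pub.verify(signature, digest)
    except InvalidSignature:
        return False
    return digest in expected_digests

image = b"kernel+toolchain build"      # placeholder software image
digest, sig = attest(image)
expected = {hashlib.sha256(image).digest()}
assert developer_verify(digest, sig, expected)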

-- 
Seth David Schoen <[EMAIL PROTECTED]>  | Very frankly, I am opposed to people
 http://www.loyalty.org/~schoen/       | being programmed by others.
 http://vitanuova.loyalty.org/         | -- Fred Rogers (1928-2003),
                                       |    464 U.S. 417, 445 (1984)
