Re: example: secure computing kernel needed

2003-12-30 Thread Amir Herzberg
At 04:20 30/12/2003, David Wagner wrote:
Ed Reed wrote:
There are many business uses for such things, like checking to see
if locked down kiosk computers have been modified (either hardware
or software),
I'm a bit puzzled why you'd settle for detecting changes when you
can prevent them.  Any change you can detect, you can also prevent
before it even happens.
skip
I'm not sure I agree with your last statement. Consider a typical PC 
running some insecure OS and/or applications, which, as you said in an earlier 
post, is the typical situation and threat. Since the OS is insecure and/or 
(usually) gives administrator privileges to insecure applications, an 
attacker may be able to gain control and then modify some code (e.g. 
install a trapdoor). With existing systems, this is hard to prevent. However, 
it may be possible to detect this with some secure monitoring hardware, which 
e.g. checks for signatures by the organization's IT department on any 
installed software. A reasonable response when such a violation is 
detected or suspected is to report it to the IT department (the `owner` of the machine).
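
To make this concrete, here is a minimal sketch (hypothetical paths and key
handling, not a description of any real product) of the kind of check such
monitoring hardware could run: verify a detached signature from the IT
department's published key over each installed binary, and report violations
rather than block them.

from pathlib import Path
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

# Hypothetical location of the organization's raw 32-byte Ed25519 public key.
IT_PUBKEY_PATH = "/etc/it-dept/software-signing.pub"

def check_installed_software(root: str) -> list[str]:
    """Return paths of binaries whose IT-department signature is missing or bad."""
    pubkey = Ed25519PublicKey.from_public_bytes(Path(IT_PUBKEY_PATH).read_bytes())
    violations = []
    for binary in Path(root).rglob("*"):
        if not binary.is_file() or binary.suffix == ".sig":
            continue
        sig = Path(str(binary) + ".sig")   # convention: detached sig next to the file
        try:
            pubkey.verify(sig.read_bytes(), binary.read_bytes())
        except (FileNotFoundError, InvalidSignature):
            violations.append(str(binary))  # detect and report to the owner; don't block
    return violations

The point is exactly the one above: this detects and reports; whether anything
is prevented afterwards is a separate policy decision for the machine's owner.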

On the other hand I fully agree with your other comments in this area and 
in particular with...
...
Summary: None of these applications require full-strength
(third-party-directed) remote attestation.  It seems that an Owner
Override would not disturb these applications.
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-29 Thread David Wagner
Jerrold Leichter wrote:
| *Any* secure computing kernel that can do
| the kinds of things we want out of secure computing kernels, can also
| do the kinds of things we *don't* want out of secure computing kernels.

David Wagner wrote:
| It's not hard to build a secure kernel that doesn't provide any form of
| remote attestation, and almost all of the alleged harms would go away if
| you remove remote attestation.  In short, you *can* have a secure kernel
| without having all the kinds of things we don't want.

Jerrold Leichter wrote:
The question is not whether you *could* build such a thing - I agree, it's
quite possible.  The question is whether it would make enough sense that it
would gain wide usage.  I claim not.

Good.  I'm glad we agree that one can build a secure kernel without
remote attestation; that's progress.  But I dispute your claim that remote
attestation is critical to securing our machines.  As far as I can see,
remote attestation seems (with some narrow exceptions) pretty close to
worthless for the most common security problems that we face today.

Your argument is premised on the assumption that it is critical to defend
against attacks where an adversary physically tampers with your machine.
But that premise is wrong.

Quick quiz: What's the dominant threat to the security of our computers?
It's not attacks on the hardware, that's for sure!  Hardware attacks
aren't even in the top ten.  Rather, our main problems are with insecure
software: buffer overruns, configuration errors, you name it.

When's the last time someone mounted a black bag operation against
your computer?  Now, when's the last time a worm attacked your computer?
You got it-- physical attacks are a pretty minimal threat for most users.

So, if software insecurity is the primary problem facing us, how does
remote attestation help with software insecurity?  Answer: It doesn't, not
that I can see, not one bit.  Sure, maybe you can check what software is
running on your computer, but that doesn't tell you whether the software
is any good.  You can check whether you're getting what you asked for,
but you have no way to tell whether what you asked for is any good.

Let me put it another way.  Take a buggy, insecure application, riddled
with buffer overrun vulnerabilities, and add remote attestation.  What do
you get?  Answer: A buggy, insecure application, riddled with buffer
overrun vulnerabilities.  In other words, remote attestation doesn't
help if your trusted software is untrustworthy -- and that's precisely
the situation we're in today.  Remote attestation just doesn't help with
the dominant threat facing us right now.

For the typical computer user, the problems that remote attestation solves
are in the noise compared to the real problems of computer security
(e.g., remotely exploitable buffer overruns in applications).  Now,
sure, remote attestation is extremely valuable for a few applications,
such as digital rights management.  But for typical users?  For most
computer users, rather than providing an order of magnitude improvement
in security, it seems to me that remote attestation will be an epsilon
improvement, at best.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-29 Thread David Wagner
Ed Reed wrote:
There are many business uses for such things, like checking to see
if locked down kiosk computers have been modified (either hardware
or software),

I'm a bit puzzled why you'd settle for detecting changes when you
can prevent them.  Any change you can detect, you can also prevent
before it even happens.  So the problem statement sounds a little
contrived to me -- but I don't really know anything about kiosks,
so maybe I'm missing something.

In any case, this is an example of an application where owner-directed
remote attestation suffices, so one could support this application
without enabling any of the alleged harms.  (See my previous email.)
In other words, this application is consistent with an Owner Override.

verifying that users have not exercised their god-given
right to install spy-ware and viruses (since they're running with
administrative privileges, aren't they?),

It sounds like the threat model is that the sysadmins don't trust the
users of the machine.  So why are the sysadmins giving users administrator
or root access to the machine?  It sounds to me like the real problem
here is a broken security architecture that doesn't match up to the
security threat, and remote attestation is a hacked-up patch that's not
going to solve the underlying problems.  But that's just my reaction,
without knowing more.

In any case, this application is also consistent with owner-directed
remote attestation or an Owner Override.

and satisfying a consumer
that the server they're connected to is (or isn't) running software that
has adequate security domain protections to protect the user's
data (perhaps backup files) the user entrusts to the server.

If I don't trust the administrators of that machine to protect sensitive
data appropriately, why would I send sensitive data to them?  I'm not
sure I understand the threat model or the problem statement.

But again, this seems to be another example application that's compatible
with owner-directed remote attestation or an Owner Override.


Summary: None of these applications require full-strength
(third-party-directed) remote attestation.  It seems that an Owner
Override would not disturb these applications.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-28 Thread William Arbaugh


I must confess I'm puzzled why you consider strong authentication
the same as remote attestation for the purposes of this analysis.
It seems to me that your note already identifies one key difference:
remote attestation allows the remote computer to determine if they wish
to speak with my machine based on the software running on my machine,
while strong authentication does not allow this.
That is the difference, but my point is that the result with respect to 
the control of your computer is the same. The distant end either 
communicates with you or it doesn't. In authentication, the distant end 
uses your identity to make that decision. In remote attestation, the 
distant end uses your computer's configuration (the computer's identity 
to some degree) to make that same decision.

As a result, remote attestation enables some applications that strong
authentication does not.  For instance, remote attestation enables DRM,
software lock-in, and so on; strong authentication does not.  If you
believe that DRM, software lock-in, and similar effects are undesirable,
then the differences between remote attestation and strong authentication
are probably going to be important to you.

So it seems to me that the difference between authenticating software
configurations vs. authenticating identity is substantial; it affects the
potential impact of the technology.  Do you agree?  Did I miss something?
Did I mis-interpret your remarks?

My statement was that the two are similar to the degree to which the 
distant end has control over your computer. The difference is that in 
remote attestation we are authenticating a system and we have some 
assurance that the system won't deviate from its programming/policy (of 
course all of the code used in these applications will be formally 
verified :-)). In user authentication, we're authenticating a human and 
we have significantly less assurance that the authenticated subject in 
this case (the human) will follow policy. That is why remote 
attestation and authentication produce different side effects enabling 
different applications: the underlying nature of the authenticated 
subject. Not because of a difference in the technology.



P.S. As a second-order effect, there seems to be an additional difference
between remote attestation (authentication of configurations) and
strong authentication (authentication of identity).  Remote attestation
provides the ability for negative attestation of a configuration:
for instance, imagine a server which verifies not only that I do have
RealAudio software installed, but also that I do not have any Microsoft
Audio software installed.  In contrast, strong authentication does
not allow negative attestation of identity: nothing prevents me from
sharing my crypto keys with my best friend, for instance.

Well- biometrics raises some interesting Gattaca issues.  But, I'm not 
going to go there on the list. It is a discussion that is better done 
over a few pints.

So to summarize- I was focusing only on the control issue and noting 
that even though the two technologies enable different applications 
(due to the assurance that we have in how the authenticated subject 
will behave), they are very similar in nature.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to 
[EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-26 Thread Seth David Schoen
William Arbaugh writes:

 If that is the case, then strong authentication provides the same 
 degree of control over your computer. With remote attestation, the 
 distant end determines if they wish to communicate with you based on 
 the fingerprint of your configuration. With strong authentication, the 
 distant end determines if they wish to communicate with you based on 
 your identity.

I'm a little confused about why you consider these similar.  They seem
very different to me, particularly in the context of mass-market
transactions, where a service provider is likely to want to deal with
the general public.

While it's true that service providers could try to demand
some sort of PKI credential as a way of getting the true name of those
they deal with, the particular things they can do with a true name are
much more limited than the things they could do with proof of
someone's software configuration.  Also, in the future, the cost of
demanding a true name could be much higher than the cost of demanding
a proof of software identity.

To give a trivial example, I've signed this paragraph using a PGP
clear signature made by my key 0167ca38.  You'll note that the Version
header claims to be PGP 17.0, but in fact I don't have a copy of PGP
17.0.  I simply modified that header with my text editor.  You can tell
that this paragraph was written by me, but not what software I used to
write it.

As a result, you can't usefully expect to take any action based on my
choice of software -- but you can take some action based on whether
you trust me (or the key 0167ca38).  You can adopt a policy that you
will only read signed mail -- or only mail signed by a key that Phil
Zimmermann has signed, or a key that Bruce Lehman has signed -- but
you can't adopt a policy that you will only read mail written by mutt
users.  In the present environment, it's somewhat difficult to use
technical means to increase or diminish others' incentive to use
particular software (at least if there are programmers actively
working to preserve interoperability).

Sure, attestation for platform identity and integrity has some things
in common with authentication of human identity.  (They both use
public-key cryptography, they can both use a PKI, they both attempt to
prove things to a challenger based on establishing that some entity
has access to a relevant secret key.)  But it also has important
differences.  One of those differences has to do with whether trust is
reposed in people or in devices!  I think your suggestion is tantamount
to saying that an electrocardiogram and a seismograph have the same
medical utility because they are both devices for measuring and
recording waveforms.

 I just don't see remote attestation as providing control over your 
 computer provided the user/owner has control over when and if remote 
 attestation is used. Further, I can think of several instances where 
 remote attestation is a good thing. For example, a privacy P2P file 
 sharing network. You wouldn't want to share your files with an RIAA 
 modified version of the program that's designed to break the anonymity 
 of the network.

This application is described in some detail at

http://www.eecs.harvard.edu/~stuart/papers/eis03.pdf

I haven't seen a more detailed analysis of how attestation would
benefit particular designs for anonymous communication networks
against particular attacks.  But it's definitely true that there are
some applications of attestation to third parties that many computer
owners might want.  (The two that first come to mind are distributed
computing projects like SETI@home and network games like Quake,
although I have a certain caution about the latter which I will
describe when the video game software interoperability litigation I'm
working on is over.)

It's interesting to note that in this case you benefit because you
received an attestation, not because you gave one (although the
network is so structured that giving an attestation is arranged to be
the price of receiving one: Give me your name, horse-master, and I
shall give you mine!).

The other thing that end-users might like is if _non-peer-to-peer_
services they interacted with could prove properties about themselves
-- that is, end-users might like to receive rather than to give
attestations.  An anonymous remailer could give an attestation to
prove that it is really running the official Mixmaster and the
official Exim and not a modified Mixmaster or modified Exim that
try to break anonymity.  Apple could give an attestation proving that
it didn't have the ability to alter or to access the contents of
your data while it was stored by its Internet hard drive service.

One interesting question is how to characterize on-line services where
users would be asked for attestation (typically to their detriment, by
way of taking away their choice of software) as opposed to on-line
services where users would be able to ask for attestation (typically
to their 

Re: example: secure computing kernel needed

2003-12-23 Thread David Wagner
William Arbaugh  wrote:
David Wagner writes:
 As for remote attestation, it's true that it does not directly let a remote
 party control your computer.  I never claimed that.  Rather, it enables
 remote parties to exert control over your computer in a way that is
 not possible without remote attestation.  The mechanism is different,
 but the end result is similar.

If that is the case, then strong authentication provides the same 
degree of control over your computer. With remote attestation, the 
distant end determines if they wish to communicate with you based on 
the fingerprint of your configuration. With strong authentication, the 
distant end determines if they wish to communicate with you based on 
your identity.

I must confess I'm puzzled why you consider strong authentication
the same as remote attestation for the purposes of this analysis.

It seems to me that your note already identifies one key difference:
remote attestation allows the remote computer to determine if they wish
to speak with my machine based on the software running on my machine,
while strong authentication does not allow this.

As a result, remote attestation enables some applications that strong
authentication does not.  For instance, remote attestation enables DRM,
software lock-in, and so on; strong authentication does not.  If you
believe that DRM, software lock-in, and similar effects are undesirable,
then the differences between remote attestation and strong authentication
are probably going to be important to you.

So it seems to me that the difference between authenticating software
configurations vs. authenticating identity is substantial; it affects the
potential impact of the technology.  Do you agree?  Did I miss something?
Did I mis-interpret your remarks?



P.S. As a second-order effect, there seems to be an additional difference
between remote attestation (authentication of configurations) and
strong authentication (authentication of identity).  Remote attestation
provides the ability for negative attestation of a configuration:
for instance, imagine a server which verifies not only that I do have
RealAudio software installed, but also that I do not have any Microsoft
Audio software installed.  In contrast, strong authentication does
not allow negative attestation of identity: nothing prevents me from
sharing my crypto keys with my best friend, for instance.
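
A toy illustration of why this asymmetry matters (all hashes below are
hypothetical placeholders): given an authenticated list of software
measurements from the client's attestation, a server policy can demand both
the presence of one package and the absence of another, which an identity
credential cannot express.

# Assumes the attestation enumerates the client's full software configuration.
REQUIRED = {"sha256:aa11..."}   # e.g. the approved RealAudio build (hypothetical value)
FORBIDDEN = {"sha256:bb22..."}  # e.g. a competing audio client (hypothetical value)

def policy_allows(attested_measurements: set[str]) -> bool:
    """Accept only if every required hash is present and no forbidden hash appears."""
    return REQUIRED <= attested_measurements and not (FORBIDDEN & attested_measurements)

An identity key, by contrast, says nothing about what else is installed on the
machine, or about who else might be holding the key.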

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-23 Thread Jerrold Leichter
|  We've met the enemy, and he is us.  *Any* secure computing kernel
|  that can do
|  the kinds of things we want out of secure computing kernels, can also
|  do the
|  kinds of things we *don't* want out of secure computing kernels.
| 
|  I don't understand why you say that.  You can build perfectly good
|  secure computing kernels that don't contain any support for remote
|  attestation.  It's all about who has control, isn't it?
| 
| There is no control of your system with remote attestation. Remote
| attestation simply allows the distant end of a communication to
| determine if your configuration is acceptable for them to communicate
| with you.
|
| But you missed my main point.  Leichter claims that any secure kernel is
| inevitably going to come with all the alleged harms (DRM, lock-in, etc.).
| My main point is that this is simply not so.
|
| There are two very different pieces here: that of a secure kernel, and
| that of remote attestation.  They are separable.  TCPA and Palladium
| contain both pieces, but that's just an accident; one can easily imagine
| a Palladium-- that doesn't contain any support for remote attestation
| whatsoever.  Whatever you think of remote attestation, it is separable
| from the goal of a secure kernel.
|
| This means that we can have a secure kernel without all the harms.
| It's not hard to build a secure kernel that doesn't provide any form of
| remote attestation, and almost all of the alleged harms would go away if
| you remove remote attestation.  In short, you *can* have a secure kernel
| without having all the kinds of things we don't want.  Leichter's claim
| is wrong
The question is not whether you *could* build such a thing - I agree, it's
quite possible.  The question is whether it would make enough sense that it
would gain wide usage.  I claim not.

The issues have been discussed by others in this stream of messages, but
let's pull them together.  Suppose I wished to put together a secure system.
I choose my open-source software, perhaps relying on the word of others,
perhaps also checking it myself.  I choose a suitable hardware base.  I put
my system together, install my software - voila, a secure system.  At least,
it's secure at that moment in time.  How do I know, the next time I come to
use it, that it is *still* secure - that no one has slipped in and modified
the hardware, or found a bug and modified the software?

I can go for physical security.  I can keep the device with me all the time,
or lock it in a secure safe.  I can build it using tamper-resistant and
tamper-evident mechanisms.  If I go with the latter - *much* easier - I have
to actually check the thing before using it, or the tamper evidence does me
no good ... which acts as a lead-in to the more general issue.

Hardware protections are fine, and essential - but they can only go so far.
I really want a software self-check.  This is an idea that goes way back:
Just as the hardware needs to be both tamper-resistant and tamper-evident,
so for the software.  Secure design and implementation gives me tamper-
resistance.  The self-check gives me tamper evidence.  The system must be able
to prove to me that it is operating as it's supposed to.

OK, so how do I check the tamper-evidence?  For hardware, either I have to be
physically present - I can hold the box in my hand and see that no one has
broken the seals - or I need some kind of remote sensor.  The remote sensor
is a hazard:  Someone can attack *it*, at which point I lose my tamper-
evidence.

There's no way to directly check the software self-check features - I can't
directly see the contents of memory! - but I can arrange for a special highly-
secure path to the self-check code.  For a device I carry with me, this could
be as simple as a "self-check passed" LED controlled by dedicated hardware
accessible only to the self-check code.  But how about a device I may need
to access remotely?  It needs a kind of remote attestation - though a
strictly limited one, since it need only be able to attest proper operation
*to me*.  Still, you can see the slope we are on.
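
For concreteness, a sketch of that strictly limited, attest-only-to-me step,
assuming a secret shared only between owner and device and a measurement
produced by the device's own self-check code (all names here are illustrative,
and this is not a description of TCPA or Palladium):

import hashlib, hmac

def device_attest(shared_secret: bytes, nonce: bytes, kernel_image: bytes) -> bytes:
    """Runs on the device: bind a fresh owner-supplied nonce to the measured kernel."""
    measurement = hashlib.sha256(kernel_image).digest()
    return hmac.new(shared_secret, nonce + measurement, hashlib.sha256).digest()

def owner_verify(shared_secret: bytes, nonce: bytes,
                 expected_measurement: bytes, response: bytes) -> bool:
    """Run by the owner: recompute the response expected for the known-good kernel."""
    expected = hmac.new(shared_secret, nonce + expected_measurement,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

The owner sends a fresh random nonce each time, so replaying an old answer is
useless; only a device holding the secret (and, per the assumption above, an
honest self-check) can produce the MAC.  The per-developer question raised
below is the same construction with one secret per developer.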

The slope gets steeper.  *Some* machines are going to be shared.  Somewhere
out there is the CVS repository containing the secure kernel's code.  That
machine is updated by multiple developers - and I certainly want *it* to be
running my security kernel!  The developers should check that the machine is
configured properly before trusting it, so it should be able to give a
trustworthy indication of its own trustworthiness to multiple developers.
This *could* be based on a single secret shared among the machine and all
the developers - but would you really want it to be?  Wouldn't it be better
if each developer shared a unique secret with the machine?

You can, indeed, stop anywhere along this slope.  You can decide you really
don't need remote attestation, even for yourself - you'll carry the machine
with you, or only use it when you are physically in front of it.  Or you
can 

Re: example: secure computing kernel needed

2003-12-22 Thread Ed Reed
Remote attestation has use in applications requiring accountability of
the user, as a way for cooperating processes to satisfy themselves that
configurations and state are as they're expected to be, and not screwed
up somehow.
 
There are many business uses for such things, like checking to see
if locked down kiosk computers have been modified (either hardware
or software), verifying that users have not exercised their god-given
right to install spy-ware and viruses (since they're running with
administrative privileges, aren't they?), and satisfying a consumer
that the server they're connected to is (or isn't) running software
that has adequate security domain protections to protect the user's
data (perhaps backup files) the user entrusts to the server.
 
What I'm not sure of is whether there are any anonymous / privacy
enhancing scenarios in which remote attestation is useful.  Well, the
last case, above, where the server is attesting to the client could work.
But what about the other way around?  The assumption I have is that
any remote attestation, even if anonymous, still will leave a trail
that might be used by forensic specialists for some form of traffic
analysis, if nothing else.
 
In that case, you'd need to trust your trusted computing system
not to provide remote attestation without your explicit assent.
 
I'd really like to see an open source effort to provide a high assurance
TPM implementation, perhaps managed through a Linux 2.6 / LSM /
TPM driver talking to a TPM module.  Yes, the TPM identity and integrity
will still be rooted in its manufacturer (IBM, Intel, Asus, SiS, whomever).
But hell, we're already trusting them not to put TCP stacks into the BIOS
for PAL chips to talk to their evil bosses back in [fill in location of your
favorite evil empire, here]. Oh, wait a minute - Phoenix is working
on that, too, aren't they?
 
I see the TPM configuration management tool as a way to provide
a trusted boot path, complete with automagical inventory of approved
hardware devices, so that evaluated operating systems, like Solaris
and Linux, can know whether they're running on hardware whose firmware
and circuitry are known (or believed) not to have been subverted, or to have
certain EMI / Tempest characteristics.  Mass market delivery of
what are usually statically configured systems that still retain their
C2/CC-EAL4 ratings.
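
A sketch of the measured-boot chain such a trusted boot path relies on: TPM
1.2 PCRs are extended as SHA-1(old PCR || SHA-1(component)), each stage
measuring the next before handing over control.  The component list and the
known-good value are illustrative, not taken from any real platform.

import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """One TPM-style extend step: fold a component's digest into the PCR."""
    return hashlib.sha1(pcr + hashlib.sha1(component).digest()).digest()

def measure_boot(components: list[bytes]) -> bytes:
    """Replay the boot chain starting from the 20-byte all-zero reset value."""
    pcr = b"\x00" * 20
    for c in components:          # e.g. firmware, boot loader, kernel, initrd
        pcr = extend(pcr, c)
    return pcr

An evaluated OS (or a remote verifier, if the owner permits it) compares the
final PCR against the value recorded for the approved hardware/software
inventory; substituting any component in the chain changes the result.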
 
But more important is where TPM and TCPA lead Intel and IBM, towards
increasing virtualization of commodity hardware, like Intel's LaGrande
strategy to restore a trusted protection ring (-1) to their processors,
which will make it easier to get real, proper virtualization with trusted
hypervisors back into common use.
 
The fact that Hollywood thinks they can use the technology, and thus
they're willing to underwrite its development, is fortuitous, as long as
the trust is based on open transparent reviews and certifications.
 
Maybe the FSF and EFF will create their own certification program, to
review and bless TPM ring -1 implementations, just to satisfy the
slashdot crowd...
 
Maybe they should.
 
Ed

 William Arbaugh [EMAIL PROTECTED] 12/18/2003 5:33:00 PM 


On Dec 16, 2003, at 5:14 PM, David Wagner wrote:

 Jerrold Leichter  wrote:
 We've met the enemy, and he is us.  *Any* secure computing kernel that
 can do the kinds of things we want out of secure computing kernels, can
 also do the kinds of things we *don't* want out of secure computing kernels.

 I don't understand why you say that.  You can build perfectly good
 secure computing kernels that don't contain any support for remote
 attestation.  It's all about who has control, isn't it?


There is no control of your system with remote attestation. Remote 
attestation simply allows the distant end of a communication to 
determine if your configuration is acceptable for them to communicate 
with you. As such, remote attestation allows communicating parties to 
determine with whom they communicate or share services. In that 
respect, it is just like caller id. People should be able to either 
attest remotely, or block it just like caller id. Just as the distant 
end can choose to accept or not accept the connection.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to
[EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-22 Thread David Wagner
William Arbaugh  wrote:
On Dec 16, 2003, at 5:14 PM, David Wagner wrote:
 Jerrold Leichter  wrote:
 We've met the enemy, and he is us.  *Any* secure computing kernel that
 can do the kinds of things we want out of secure computing kernels, can
 also do the kinds of things we *don't* want out of secure computing kernels.

 I don't understand why you say that.  You can build perfectly good
 secure computing kernels that don't contain any support for remote
 attestation.  It's all about who has control, isn't it?

There is no control of your system with remote attestation. Remote 
attestation simply allows the distant end of a communication to 
determine if your configuration is acceptable for them to communicate 
with you.

But you missed my main point.  Leichter claims that any secure kernel is
inevitably going to come with all the alleged harms (DRM, lock-in, etc.).
My main point is that this is simply not so.

There are two very different pieces here: that of a secure kernel, and
that of remote attestation.  They are separable.  TCPA and Palladium
contain both pieces, but that's just an accident; one can easily imagine
a Palladium-- that doesn't contain any support for remote attestation
whatsoever.  Whatever you think of remote attestation, it is separable
from the goal of a secure kernel.

This means that we can have a secure kernel without all the harms.
It's not hard to build a secure kernel that doesn't provide any form of
remote attestation, and almost all of the alleged harms would go away if
you remove remote attestation.  In short, you *can* have a secure kernel
without having all the kinds of things we don't want.  Leichter's claim
is wrong.

This is an important point.  It seems that some TCPA and Palladium
advocates would like to tie together security with remote attestation; it
appears they would like you to believe you can't have a secure computer
without also enabling DRM, lock-in, and the other harms.  But that's
simply wrong.  We can have a secure computer without enabling all the
alleged harms.  If we don't like the effects of TCPA and Palladium,
there's no reason we need to accept them.  We can have perfectly good
security without TCPA or Palladium.

As for remote attestation, it's true that it does not directly let a remote
party control your computer.  I never claimed that.  Rather, it enables
remote parties to exert control over your computer in a way that is
not possible without remote attestation.  The mechanism is different,
but the end result is similar.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-18 Thread David Wagner
Jerrold Leichter  wrote:
We've met the enemy, and he is us.  *Any* secure computing kernel that can do
the kinds of things we want out of secure computing kernels, can also do the
kinds of things we *don't* want out of secure computing kernels.

I don't understand why you say that.  You can build perfectly good
secure computing kernels that don't contain any support for remote
attestation.  It's all about who has control, isn't it?

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-14 Thread Paul A.S. Ward
I'm not sure why no one has considered the PC banking problem to be a
justification for secure computing.  Specifically, how does a user know
their computer has not been tampered with when they wish to use it for
banking access.
Paul

John S. Denker wrote:

Previous discussions of secure computing technology have
been in some cases sidetracked and obscured by extraneous
notions such as
 -- Microsoft is involved, therefore it must be evil.
 -- The purpose of secure computing is DRM, which is
intrinsically evil ... computers must be able to
copy anything anytime.
Now, in contrast, here is an application that begs for
a secure computing kernel, but has nothing to do with
microsoft and nothing to do with copyrights.
Scenario:  You are teaching chemistry in a non-anglophone
country.  You are giving an exam to see how well the
students know the periodic table.
 -- You want to allow students to use their TI-83 calculators
for *calculating* things.
 -- You want to allow the language-localization package.
 -- You want to disallow the app that stores the entire
periodic table, and all other apps not explicitly
approved.
The hardware manufacturer (TI) offers a little program
that purports to address this problem
  http://education.ti.com/us/product/apps/83p/testguard.html
but it appears to be entirely non-cryptologic and therefore
easily spoofed.
I leave it as an exercise for the reader to design a
calculator with a secure kernel that is capable of
certifying something to the effect that no apps and
no data tables (except for ones with the following
hashes) have been accessible during the last N hours.
Note that I am *not* proposing reducing the functionality
of the calculator in any way.  Rather I am proposing a
purely additional capability, namely the just-mentioned
certification capability.
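One way to read that exercise is the following rough sketch (hypothetical
interfaces; this is not TI's TestGuard): the calculator's secure kernel holds
a device signing key and signs a dated statement listing the hashes of
everything that was accessible during the window.

import json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def certify_exam_mode(device_key: Ed25519PrivateKey,
                      allowed_hashes: list[str], hours: int) -> bytes:
    """Sign: only the listed apps/data tables were accessible for the last `hours` hours."""
    statement = json.dumps({
        "claim": "no apps or data tables accessible except those listed",
        "allowed_sha256": sorted(allowed_hashes),
        "window_hours": hours,
        "issued_at": int(time.time()),
    }, sort_keys=True).encode()
    return statement + b"\n" + device_key.sign(statement)

The teacher checks the signature against the device's public key (vouched for
by the manufacturer) and compares the hash list with the approved calculator
and localization packages - a purely additional capability, as described above.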
I hope this example will advance the discussion of secure
computing.  Like almost any powerful technology, we need
to discuss
 -- the technology *and*
 -- the uses to which it will be put
... but we should not confuse the two.
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to 
[EMAIL PROTECTED]


--

Paul A.S. Ward, Assistant Professor  Email: [EMAIL PROTECTED]
University of Waterloo  [EMAIL PROTECTED]
Department of Computer Engineering   Tel: +1 (519) 888-4567 ext.3127
Waterloo, Ontario                    Fax: +1 (519) 746-3077
Canada N2L 3G1   URL: http://www.ccng.uwaterloo.ca/~pasward


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-14 Thread Bill Stewart
At 02:41 PM 12/14/2003 +, Dave Howe wrote:
Paul A.S. Ward wrote:
 I'm not sure why no one has considered the PC banking problem to be a
 justification for secure computing.  Specifically, how does a user
 know their computer has not been tampered with when they wish to use
 it for banking access.
I think PC banking is an argument *against* Secure Computing as currently
proposed - there is no way to discover if there is a nasty running in
protected memory, or to remove it if there is.
Agreed.  It's a better argument for booting from a known CDROM distribution.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-14 Thread Anne Lynn Wheeler
At 07:25 PM 12/11/2003 -0500, Paul A.S. Ward wrote:
I'm not sure why no one has considered the PC banking problem to be a
justification for secure computing.  Specifically, how does a user know
their computer has not been tampered with when they wish to use it for
banking access.
actually the EU FINREAD (financial reader) standard is quite directed at 
this area. basically a secure entry/display/token-interface device. part of 
the issue is preventing the skimming of pin-entry that has to be assumed 
possible with just about all keyboard-based entry (aka a tamper evident 
device ... supposedly somewhat the consumer equivalent of the TSM ... trusted 
security module and tamper evident guidelines for point-of-sale terminals). In 
effect, finread is isolating some set of secure components into a tamper 
evident housing that has something akin to a trusted security module.

the other aspect somewhat shows up in the digital signature area. 
fundamentally a digital signature may be used for authentication (and 
message integrity) ... but not, by itself, as evidence of agreement in the 
legal signature sense. the issue is how to create an environment/infrastructure 
for supporting both straight-forward authentication as well as 
intention/agreement.

in theory finread has the ability to securely display the value of a 
transaction (and possibly other necessary details) and then requires a PIN 
entry after the display as evidence of

1) something you know authentication
2) being able to infer agreement with the transaction.

pretty much assumed is that finread implies some sort of token acceptor 
device ... which in turn implies a something you have token authentication.

so finread is attempting to both address two-factor authentication (and 
possibly three if biometric is also supported) as well as establish some 
environment related for inferring agreement/intention/etc as required per 
legal signature.

possibly overlooked in the base eu finread work is being able to prove that 
the transaction actually took place with a real finread device as opposed 
to some other kind of environment. In the (financial standard) X9A10 
working group on the X9.59 financial standard for all electronic retail 
payments we spent some amount of time on not precluding that the signing 
environment could also sign the transaction i.e.

1) amount displayed on secure display,
2) pin/biometric securely entered (after display occurs)
3) token digitally signs (after pin/biometric entered)
4) finread terminal digitally signs

the 2nd and 3rd items (alone) are two (or three) factor authentication. 
however, in conjunction with the first and fourth items they provide some 
level of assurance that the person agrees with the transaction.
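
a rough schematic of that four-step flow (key handling, message format, and 
the secure display/pin-entry are illustrative stand-ins ... the point is the 
ordering and the two signatures):

import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

token_key = Ed25519PrivateKey.generate()      # lives inside the consumer's token
terminal_key = Ed25519PrivateKey.generate()   # lives inside the finread terminal

def finread_transaction(amount_cents: int, payee: str) -> dict:
    txn = json.dumps({"amount_cents": amount_cents, "payee": payee}).encode()
    print(f"SECURE DISPLAY: pay {payee} {amount_cents/100:.2f}")  # 1) display first
    if input("PIN: ") != "1234":               # 2) something-you-know, after display
        raise ValueError("pin entry failed")   #    ("1234" is obviously a stand-in)
    token_sig = token_key.sign(txn)            # 3) token signs: something-you-have
    terminal_sig = terminal_key.sign(txn + token_sig)  # 4) terminal counter-signs
    return {"txn": txn, "token_sig": token_sig, "terminal_sig": terminal_sig}

items 2 and 3 alone are the two (or three) factor authentication; adding 1 and 
4 is what lets a verifier infer the person actually saw the amount on a real 
finread device and agreed to it.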

lots of past finread references:
http://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? 
Photo ID's and Payment Infrastructure
http://www.garlic.com/~lynn/aepay11.htm#53 Authentication white paper
http://www.garlic.com/~lynn/aepay11.htm#54 FINREAD was. Authentication 
white paper
http://www.garlic.com/~lynn/aepay11.htm#55 FINREAD ... and as an aside
http://www.garlic.com/~lynn/aepay11.htm#56 FINREAD was. Authentication 
white paper
http://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, 
here's your private key
http://www.garlic.com/~lynn/aadsm11.htm#4 AW: Digital signatures as proof
http://www.garlic.com/~lynn/aadsm11.htm#5 Meaning of Non-repudiation
http://www.garlic.com/~lynn/aadsm11.htm#6 Meaning of Non-repudiation
http://www.garlic.com/~lynn/aadsm11.htm#23 Proxy PKI. Was: IBM alternative 
to PKI?
http://www.garlic.com/~lynn/aadsm12.htm#24 Interests of online banks and 
their users [was Re: Cryptogram:  Palladium Only for DRM]
http://www.garlic.com/~lynn/aadsm14.htm#35 The real problem that https has 
conspicuously failed to fix
http://www.garlic.com/~lynn/aadsm15.htm#40 FAQ: e-Signatures and Payments
http://www.garlic.com/~lynn/aadsm9.htm#carnivore Shades of FV's Nathaniel 
Borenstein: Carnivore's Magic Lantern
http://www.garlic.com/~lynn/2001g.html#57 Q: Internet banking
http://www.garlic.com/~lynn/2001g.html#60 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001g.html#61 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001g.html#62 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001g.html#64 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001i.html#25 Net banking, is it safe???
http://www.garlic.com/~lynn/2001i.html#26 No Trusted Viewer possible?
http://www.garlic.com/~lynn/2001k.html#0 Are client certificates really secure?
http://www.garlic.com/~lynn/2001m.html#6 Smart Card vs. Magnetic Strip Market
http://www.garlic.com/~lynn/2001m.html#9 Smart Card vs. Magnetic Strip Market
http://www.garlic.com/~lynn/2002c.html#10 Opinion on smartcard security 
requested
http://www.garlic.com/~lynn/2002c.html#21 Opinion on smartcard security 
requested
http://www.garlic.com/~lynn/2002f.html#46 Security Issues of using Internet 
Banking