Outsourced Trust (was Re: Difference between TCPA-Hardware and a smart card and something else before

2003-12-23 Thread Ed Reed
 Ian Grigg [EMAIL PROTECTED] 12/20/2003 12:15:51 PM 

One of the (many) reasons that PKI failed is
that businesses simply don't outsource trust.

Of course they do.  Examples:
 
D&B and other credit reporting agencies.
SEC for fair reporting of financial results.
International Banking Letters of Credit when no shared root of trust
exists.
Errors and Omissions Professional Liability insurance for consultants
you don't know.
Workman's Compensation insurance for independent contractors you don't
know.
 
The point is that the real world has monetized risk.  But the
crypto-elite have concentrated too hard on eliminating environmental
factors from proofs of correctness of algorithms, protocols, and most
importantly, business processes.
 
Crypto is not business-critical.  It's the processes it's supposed to be
protecting that are, and those are the ones that are insured.
 
Legal and regulatory frameworks define how and where liability can be
assigned, and that allows insurance companies to factor in stop-loss
estimates for their exposure.  Without that, everything is a crap
shoot.
 
Watching how regulation is evolving right now, we may not see explicit
liability assignments to software vendors for their vulnerabilities,
whether for operating systems or for S/MIME email clients.  Those are
all far too limited in what they could offer, anyway.
 
What's happening, instead, is that consumers of those products are
themselves facing regulatory pressure to assure their customers and
regulators that they're providing adequate systematic security through
technology as well as business policies, procedures and (ultimately)
controls (ie, auditable tests for control failures and adequacy).  When
customers can no longer say "gee, we collected all this information, and
who knew our web server wouldn't keep it from being published on the
NYTimes classified pages?", then vendors will be compelled to deliver
pieces of the solution that allow THE CUSTOMER (product consumer) to
secure their environments.
 
Get ready.  Trusted Third Party evaluations, like FIPS 140 is for
Crypto, will be the thing insurance companies look to for guidance in
factoring their risk exposure when asked to provide warranty coverage to
businesses using technology - just like they did to Underwriters
Laboratories for electrical appliances, just like they do to D&B for
commercial credit processing, just like they do to MasterCard and VISA
for consumer credit processing.
 
And before you say well, that doesn't apply to internal company
security, ask yourself how many companies outsource physical security
to Brinks or some other security-guard employment agency.  They can do
that, too, because of other insurance (personal bonds) that help them
lay off the exposure to misplaced trust.
 
Trust is heavily outsourced.  Only the very large or very foolish think
they can go it alone.  And the very large generally have governments
in their pockets to  help provide the stop-loss limits for their
exposure.
 
No?
 
Ed



Re: PKI root signing ceremony, etc.

2003-12-23 Thread Dan Geer

One approach to securing infrequent signing or working keys from a 
corporate master certificate is to store the certificate in a bank 
safe deposit box. The certificate generation software (say on a self 
booting CD or perhaps an entire laptop) could be stored in the safe 
deposit box as well. The certificate signing would take place at the 
bank, either in one of the small rooms they provide or in a borrowed 
conference room.


Dare I mention the CertCo/Identrus threshold crypto
in this context?  CertCo certainly nailed all the
parts of this, e.g., fragment generation in absentia.
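
A rough sketch of the underlying idea - toy Shamir secret sharing over a small
prime field, illustration only and not CertCo's actual construction - showing
how key "fragments" can be generated so that any k of n holders can
reconstruct the signing value:

    import random

    P = 2**127 - 1  # prime field for this toy example

    def split(secret, k, n):
        """Split secret into n shares; any k of them reconstruct it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 recovers the secret."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % P
                    den = (den * (xi - xj)) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = split(123456789, k=3, n=5)
    assert reconstruct(shares[:3]) == 123456789

In a real ceremony the fragments would of course be generated and held under
dual control rather than sitting in one process's memory.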

--dan



Re: I don't know PAIN...

2003-12-23 Thread Raymond Lillard
Ben Laurie wrote:
Ian Grigg wrote:
What is the source of the acronym PAIN?
Lynn said:
... A security taxonomy, PAIN:
* privacy (aka things like encryption)
* authentication (origin)
* integrity (contents)
* non-repudiation
I.e., its provenance?

Google shows only a few hits, indicating
it is not widespread.
Probably because non-repudiation is a stupid idea: 
http://www.apache-ssl.org/tech-legal.pdf.
OK, I'm a mere country mouse when it comes to cryptography,
so be kind.
I have read most of the above paper on non-repudiation and
noticed on p3 the following footnote:
"Note that there is no theoretical reason that it should be
possible to figure out the public key given the private key,
either, but it so happens that it is generally possible to
do so"
So what's this "generally possible" business about?
A few references will do.
Thanks,
Ray
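
A toy illustration of why it is "generally possible" (illustration only, with
deliberately tiny discrete-log numbers): for discrete-log and elliptic-curve
systems the public key is computed from the private key, and for RSA the
private-key file as normally stored already carries the modulus and public
exponent alongside the private exponent.

    # classic discrete-log keypair, toy-sized parameters
    p, g = 23, 5          # toy prime and generator (real systems use 2048+ bits)
    x = 6                 # private key
    y = pow(g, x, p)      # public key, derived directly from the private key
    print(x, y)           # -> 6 8

    # elliptic-curve keys are analogous: public point = private scalar * base point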




Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-23 Thread Anne Lynn Wheeler
At 03:03 PM 12/21/2003 -0800, Seth David Schoen wrote:
Some people may have read things like this and mistakenly thought that
this would not be an opt-in process.  (There is some language about
how the user's platform takes various actions and then responds to
challenges, and perhaps people reasoned that it was responding
autonomously, rather than under its user's direction.)
my analogy ... at least in online scenario has been to wild, wild west 
before there were traffic conventions, traffic signs, lane markers, traffic 
lights, standards for vehicles ... misc. traffic rules about operating an 
unsafe vehicle and driving recklessly, various minimums about traffic 
regulations, and things like insurance requirements to cover the cost of 
accidents. infected machines that do distributed DoS attacks ... might be 
considered analogous to large overloaded trucks w/o operational brakes 
(giving rise to truck inspection and weighing stations).  many ISPs are 
already monitoring, accounting and controlling various kinds of activity 
with respect to amount of traffic, simultaneous log-ins, etc.  If there are 
sufficient online incidents ... then it could be very easy to declare 
machines that become infected and are used as part of various unacceptable 
behavior to be unsafe vehicles, and some sort of insurance could be 
required to cover the costs associated with unsafe and reckless driving 
on the internet. Direct costs to individuals may go up ... but the unsafe 
and reckless activities currently going on represent enormous 
infrastructure costs.  Somewhat analogous to higher insurance premiums for 
less safe vehicles, government minimums for crash tests, bumper 
conventions, seat belts, air bags, etc.

part of the issue is that some number of the platforms never had an original 
design point of significant interaction on a totally open and free internet 
(long ago and far away, vehicles that didn't have bumpers, crash tests, 
seat belts, air bags, safety glass, etc). Earlier in the original version 
of this thread ... I made reference to some number of systems from 30 or 
more years ago ... that were designed to handle such environments ... and 
had basic security designed in from the start ... and were found to be not 
subject to the majority of the things that are happening to lots of the current 
internet-connected platforms.
http://www.garlic.com/~lynn/aadsm16.htm#8 example: secure computing kernel 
needed

misc. past analogies to unsafe and reckless driving on the internet:
http://www.garlic.com/~lynn/aadsm14.htm#14 blackhole spam = mail 
unreliability (Re: A Trial Balloon to Ban Email?)
http://www.garlic.com/~lynn/aadsm14.htm#15 blackhole spam = mail 
unreliability (Re: A Trial Balloon to Ban Email?)
http://www.garlic.com/~lynn/2001m.html#27 Internet like city w/o traffic 
rules, traffic signs, traffic lights and traffic enforcement
http://www.garlic.com/~lynn/2001m.html#28 Internet like city w/o traffic 
rules, traffic signs, traffic lights  and traffic enforcement
http://www.garlic.com/~lynn/2001m.html#29 Internet like city w/o traffic 
rules, traffic signs, traffic lights and traffic enforcement
http://www.garlic.com/~lynn/2001m.html#30 Internet like city w/o traffic 
rules, traffic signs, traffic lights and traffic enforcement
http://www.garlic.com/~lynn/2001m.html#31 Internet like city w/o traffic 
rules, traffic signs, traffic lights   and traffic enforcement
http://www.garlic.com/~lynn/2002p.html#27 Secure you PC or get kicked off 
the net?
http://www.garlic.com/~lynn/2003i.html#17 Spam Bomb
http://www.garlic.com/~lynn/2003m.html#21 Drivers License required for surfing?

--
Anne & Lynn Wheeler    http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
 



Re: example: secure computing kernel needed

2003-12-23 Thread David Wagner
William Arbaugh  wrote:
David Wagner writes:
 As for remote attestion, it's true that it does not directly let a remote
 party control your computer.  I never claimed that.  Rather, it enables
 remote parties to exert control over your computer in a way that is
 not possible without remote attestation.  The mechanism is different,
 but the end result is similar.

If that is the case, then strong authentication provides the same 
degree of control over your computer. With remote attestation, the 
distant end determines if they wish to communicate with you based on 
the fingerprint of your configuration. With strong authentication, the 
distant end determines if they wish to communicate with you based on 
your identity.

I must confess I'm puzzled why you consider strong authentication
the same as remote attestation for the purposes of this analysis.

It seems to me that your note already identifies one key difference:
remote attestation allows the remote computer to determine if they wish
to speak with my machine based on the software running on my machine,
while strong authentication does not allow this.

As a result, remote attestation enables some applications that strong
authentication does not.  For instance, remote attestation enables DRM,
software lock-in, and so on; strong authentication does not.  If you
believe that DRM, software lock-in, and similar effects are undesirable,
then the differences between remote attestation and strong authentication
are probably going to be important to you.

So it seems to me that the difference between authenticating software
configurations vs. authenticating identity is substantial; it affects the
potential impact of the technology.  Do you agree?  Did I miss something?
Did I mis-interpret your remarks?



P.S. As a second-order effect, there seems to be an additional difference
between remote attestation (authentication of configurations) and
strong authentication (authentication of identity).  Remote attestation
provides the ability for negative attestation of a configuration:
for instance, imagine a server which verifies not only that I do have
RealAudio software installed, but also that I do not have any Microsoft
Audio software installed.  In contrast, strong authentication does
not allow negative attestation of identity: nothing prevents me from
sharing my crypto keys with my best friend, for instance.
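
A minimal sketch of the structural difference being argued here - toy policy
checks with invented names, not the TCPA/Palladium protocol - attestation
gates on what software the peer runs (including negative checks), while
authentication gates only on who the peer is:

    import hashlib

    APPROVED = {hashlib.sha256(b"RealAudio-10.0").hexdigest()}    # hypothetical
    BANNED   = {hashlib.sha256(b"MS-Audio-9.0").hexdigest()}      # hypothetical
    KNOWN_IDENTITIES = {"alice-key-fingerprint"}                  # hypothetical

    def accept_by_attestation(reported_measurements):
        """Decide based on the peer's software configuration, including the
        negative requirement that a banned component be absent."""
        m = set(reported_measurements)
        return APPROVED <= m and not (BANNED & m)

    def accept_by_authentication(peer_fingerprint):
        """Decide based only on the peer's identity."""
        return peer_fingerprint in KNOWN_IDENTITIES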



Re: Non-repudiation (was RE: The PAIN mnemonic)

2003-12-23 Thread Amir Herzberg
Ben, Carl and others,

At 18:23 21/12/2003, Carl Ellison wrote:

 and it included non-repudiation which is an unachievable,
 nonsense concept.
Any alternative definition or concept to cover what protocol designers 
usually refer to as non-repudiation specifications? For example, 
non-repudiation of origin, i.e. the ability of the recipient to convince a 
third party that a message was sent (to him) by a particular sender (at a 
certain time)?

Or - do you think this is not an important requirement?
Or what?
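
A rough sketch of the mechanical property such specifications ask for -
illustration only, assuming the pyca/cryptography package and an invented
message - the recipient can hand the message and signature to any third party,
who verifies them against the sender's public key without further help from
the recipient:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    sender_key = Ed25519PrivateKey.generate()
    message = b"ship 100 widgets"             # hypothetical disputed message
    signature = sender_key.sign(message)      # produced by the sender

    # Any third party holding only the sender's public key can check it later;
    # verify() raises InvalidSignature if the signature does not match.
    sender_public = sender_key.public_key()
    sender_public.verify(signature, message)

Whether that mechanical property proves anything about the person behind the
key is exactly the dispute in this thread.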
Best regards,

Amir Herzberg
Computer Science Department, Bar Ilan University
Lectures: http://www.cs.biu.ac.il/~herzbea/book.html
Homepage: http://amir.herzberg.name


Re: Non-repudiation (was RE: The PAIN mnemonic)

2003-12-23 Thread Stefan Kelm
 Let's just leave the term non-repudiation to be used by people who don't
 understand security, but rather mouth things they've read in books that
 others claim are authoritative.  There are lots of those books listing
 non-repudiation as a feature of public key cryptography, for example,
 and many listing it as an essential security characteristic.  All of that
 is wrong, of course, but it's a test for the reader to see through it.

Ah. That's why they're trying to rename the corresponding keyUsage bit
to contentCommitment then:

  http://www.pki-page.info/download/N12599.doc

:-)
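
As an illustration of the newer name in a present-day API (assuming the
pyca/cryptography Python package is installed), constructing the keyUsage
extension with the bit formerly called nonRepudiation:

    from cryptography import x509

    ku = x509.KeyUsage(
        digital_signature=True,
        content_commitment=True,   # the bit formerly named nonRepudiation
        key_encipherment=False,
        data_encipherment=False,
        key_agreement=False,
        key_cert_sign=False,
        crl_sign=False,
        encipher_only=False,
        decipher_only=False,
    )
    print(ku)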

Cheers,

Stefan.
---
Dipl.-Inform. Stefan Kelm
Security Consultant

Secorvo Security Consulting GmbH
Albert-Nestler-Strasse 9, D-76131 Karlsruhe

Tel. +49 721 6105-461, Fax +49 721 6105-455
E-Mail [EMAIL PROTECTED], http://www.secorvo.de
---
PGP Fingerprint 87AE E858 CCBC C3A2 E633 D139 B0D9 212B



RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-23 Thread Antonomasia
From: Carl Ellison [EMAIL PROTECTED]

   Some TPM-machines will be owned by people who decide to do what I
 suggested: install a personal firewall that prevents remote attestation.

How confident are you this will be possible?  Why do you think the
remote attestation traffic won't be passed in a widespread service
like HTTP - or even be steganographic?

-- 
##
# Antonomasia   ant notatla.org.uk   #
# See http://www.notatla.org.uk/ #
##



Re: Outsourced Trust (was Re: Difference between TCPA-Hardware and a smart card and something else before

2003-12-23 Thread Anne Lynn Wheeler
At 07:34 PM 12/22/2003 -0700, Ed Reed wrote:
Of course they do.  Examples:

D&B and other credit reporting agencies.
SEC for fair reporting of financial results.
International Banking Letters of Credit when no shared root of trust
exists.
Errors and Omissions Professional Liability insurance for consultants
you don't know.
Workman's Compensation insurance for independent contractors you don't
know.
I don't think that trust checking was so much of the question ... a not 
uncommon scenario was:

1) institution set up an account possibly that included checking with 3rd 
party trust agencies
2) did various kinds of online transactions where the actual transaction 
included account-only information
3) got an offer from a certification authority to move into the modern world
a) send the CA a copy of the institution's account database
b) the CA would convert the information in each account record into a 
certificate
c) each certificate would be digitally signed by the CA
d) the CA would return each digitally signed, transformed account 
record back to the
institution and charge only $100/certificate
4) the institution was to convert from modern online transactions to 
archaic offline transactions based on information in the certificate
5) the certificate would be a x.509 identity certificate that contain all 
of the account entity's identification information which would flow around 
attached to every transaction

fundamentally

1) x.509 certificates broadcast all over the world attached to every 
transaction were in serious violation of all sorts of privacy issues
2) certificates were fundamentally designed to address a trust issue in 
offline environments where a modicum of static, stale data was better than 
nothing
3) offline, certificate oriented static stale processing was a major step 
backward compared to online, timely, dynamic processing.
4) the traditional outsourced trust has the relying-party contracted with 
the trust agency so that there is some form of legal obligation, the 
traditional CA model has no such legal obligation existing between the 
relying-party and the trust/certifying agency (the contract is frequently 
between the trust agency and the key owner, not the relying-party).

In the mid to late 90s ... some financial institutions attempted to salvage 
some of the paradigm (because of the severe privacy and liability issues) 
by going to relying-party-only certificates for online transactions. 
However, it is trivial to show that the static, stale information in the 
relying-party-only certificate was a trivial subset of the information that 
would be accessed in the real account record for the online transactions 
... and therefore it was trivial to show that static, stale certificates 
were redundant and superfluous. misc. past posts regarding the 
relying-party-only scenario:
http://www.garlic.com/~lynn/subpubkey.html#rpo
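
A tiny sketch of the redundancy argument (invented field names, not any real
bank interface): everything a relying-party-only certificate could carry is
already in the account record the relying party consults online for every
transaction, while the dynamic fields the authorization actually needs are
exactly the ones a static certificate cannot carry:

    account_record = {
        "account": "12345678",
        "public_key": "registered-at-account-setup",
        "balance": 25000,
        "status": "open",          # timely, dynamic data
    }

    rpo_certificate = {            # static, stale subset of the same record
        "account": account_record["account"],
        "public_key": account_record["public_key"],
    }

    # every field the certificate contributes is redundant with the record
    assert set(rpo_certificate) < set(account_record)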

I think that the current federal gov. PKI tries to address the legal 
obligation issue ... by creating a legal situation where essentially all 
the authorized CA operators are effectively agents of the federal PKI ... 
and all the relying parties have contracts with the federal PKI ... which 
simulates a legal obligation between the issuer of the certificate and the 
relying-parties.

In something like the D&B scenario ... the relying party contracts for some 
information with D&B about the entity that the relying party is interested 
in. In many of the traditional 3rd party CA-PKIs, there may be absolutely 
no legal relationship between the CA issuing the certificate (trust 
information) and any of the relying parties that are relying on the trust 
information i.e. the contract is between the CA issuing the certificate ... 
and the entity that the certificate is about. Since the entity (that the 
trust information is about) may be the party paying for the trust 
information ... they may have some motivation to shop around and get the 
most favorable report. Let's say I was applying for a loan and the loan 
institution needed a credit report. Rather than the loan institution 
contracting for the credit report, they rely on one supplied by the loan 
applicant. The loan applicant is free to choose from all the credit 
reporting agencies which credit report that they will buy for supplying to 
the loan institution.

random past threads on trust propagation:
http://www.garlic.com/~lynn/aadsm14.htm#42 An attack on paypal
http://www.garlic.com/~lynn/aadsm14.htm#45 Keyservers and Spam
http://www.garlic.com/~lynn/aadsm14.htm#46 An attack on paypal
http://www.garlic.com/~lynn/aadsm15.htm#26 SSL, client certs, and MITM (was 
WYTM?)
http://www.garlic.com/~lynn/aadsm15.htm#32 VS: On-line signature standards
http://www.garlic.com/~lynn/aadsm15.htm#33 VS: On-line signature standards
http://www.garlic.com/~lynn/aadsm15.htm#36 VS: On-line signature standards
http://www.garlic.com/~lynn/aadsm2.htm#pkikrb PKI/KRB
http://www.garlic.com/~lynn/2001g.html#40 

Re: example: secure computing kernel needed

2003-12-23 Thread Jerrold Leichter
|  We've met the enemy, and he is us.  *Any* secure computing kernel
|  that can do
|  the kinds of things we want out of secure computing kernels, can also
|  do the
|  kinds of things we *don't* want out of secure computing kernels.
| 
|  I don't understand why you say that.  You can build perfectly good
|  secure computing kernels that don't contain any support for remote
|  attribution.  It's all about who has control, isn't it?
| 
| There is no control of your system with remote attestation. Remote
| attestation simply allows the distant end of a communication to
| determine if your configuration is acceptable for them to communicate
| with you.
|
| But you missed my main point.  Leichter claims that any secure kernel is
| inevitably going to come with all the alleged harms (DRM, lock-in, etc.).
| My main point is that this is simply not so.
|
| There are two very different pieces here: that of a secure kernel, and
| that of remote attestation.  They are separable.  TCPA and Palladium
| contain both pieces, but that's just an accident; one can easily imagine
| a Palladium-- that doesn't contain any support for remote attestation
| whatsoever.  Whatever you think of remote attestation, it is separable
| from the goal of a secure kernel.
|
| This means that we can have a secure kernel without all the harms.
| It's not hard to build a secure kernel that doesn't provide any form of
| remote attestation, and almost all of the alleged harms would go away if
| you remove remote attestation.  In short, you *can* have a secure kernel
| without having all the kinds of things we don't want.  Leichter's claim
| is wrong
The question is not whether you *could* build such a thing - I agree, it's
quite possible.  The question is whether it would make enough sense that it
would gain wide usage.  I claim not.

The issues have been discussed by others in this stream of messages, but
let's pull them together.  Suppose I wished to put together a secure system.
I choose my open-source software, perhaps relying on the word of others,
perhaps also checking it myself.  I choose a suitable hardware base.  I put
my system together, install my software - voila, a secure system.  At least,
it's secure at that moment in time.  How do I know, the next time I come to
use it, that it is *still* secure - that no one has slipped in and modified
the hardware, or found a bug and modified the software?

I can go for physical security.  I can keep the device with me all the time,
or lock it in a secure safe.  I can build it using tamper-resistant and
tamper-evident mechanisms.  If I go with the latter - *much* easier - I have
to actually check the thing before using it, or the tamper evidence does me
no good ... which acts as a lead-in to the more general issue.

Hardware protections are fine, and essential - but they can only go so far.
I really want a software self-check.  This is an idea that goes way back:
Just as the hardware needs to be both tamper-resistant and tamper-evident,
so for the software.  Secure design and implementation gives me tamper-
resistance.  The self-check gives me tamper evidence.  The system must be able
to prove to me that it is operating as it's supposed to.

OK, so how do I check the tamper-evidence?  For hardware, either I have to be
physically present - I can hold the box in my hand and see that no one has
broken the seals - or I need some kind of remote sensor.  The remote sensor
is a hazard:  Someone can attack *it*, at which point I lose my tamper-
evidence.

There's no way to directly check the software self-check features - I can't
directly see the contents of memory! - but I can arrange for a special highly-
secure path to the self-check code.  For a device I carry with me, this could
be as simple as a self-check passed LED controlled by dedicated hardware
accessible only to the self-check code.  But how about a device I may need
to access remotely?  It needs a kind of remote attestation - though a
strictly limited one, since it need only be able to attest proper operation
*to me*.  Still, you can see the slope we are on.

The slope gets steeper.  *Some* machines are going to be shared.  Somewhere
out there is the CVS repository containing the secure kernel's code.  That
machine is updated by multiple developers - and I certainly want *it* to be
running my security kernel!  The developers should check that the machine is
configured properly before trusting it, so it should be able to give a
trustworthy indication of its own trustworthiness to multiple developers.
This *could* be based on a single secret shared among the machine and all
the developers - but would you really want it to be?  Wouldn't it be better
if each developer shared a unique secret with the machine?
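
A minimal sketch of that limited "attest to me" step (invented protocol;
per-developer secrets assumed to have been exchanged out of band): the machine
proves its measured state to one developer at a time, and because each
developer holds a distinct secret, no developer can impersonate the machine to
another:

    import hashlib, hmac, os

    def measure(kernel_image: bytes) -> bytes:
        return hashlib.sha256(kernel_image).digest()

    def machine_attest(kernel_image: bytes, dev_secret: bytes, nonce: bytes) -> bytes:
        # bind the measurement to the developer's fresh nonce to prevent replay
        return hmac.new(dev_secret, nonce + measure(kernel_image), hashlib.sha256).digest()

    def developer_verify(expected: bytes, dev_secret: bytes,
                         nonce: bytes, response: bytes) -> bool:
        want = hmac.new(dev_secret, nonce + expected, hashlib.sha256).digest()
        return hmac.compare_digest(want, response)

    alice_secret = os.urandom(32)
    nonce = os.urandom(16)
    kernel = b"...kernel image bytes..."
    resp = machine_attest(kernel, alice_secret, nonce)
    assert developer_verify(measure(kernel), alice_secret, nonce, resp)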

You can, indeed, stop anywhere along this slope.  You can decide you really
don't need remote attestation, even for yourself - you'll carry the machine
with you, or only use it when you are physically in front of it.  Or you
can 

Re: Outsourced Trust (was Re: Difference between TCPA-Hardware and a smart card and something else before

2003-12-23 Thread Ian Grigg
Ed Reed wrote:
 
  Ian Grigg [EMAIL PROTECTED] 12/20/2003 12:15:51 PM 
 
 One of the (many) reasons that PKI failed is
 that businesses simply don't outsource trust.
 
 Of course they do.  Examples:
 
 D&B and other credit reporting agencies.
 SEC for fair reporting of financial results.
 International Banking Letters of Credit when no shared root of trust
 exists.
 Errors and Omissions Professional Liability insurance for consultants
 you don't know.
 Workman's Compensation insurance for independent contractors you don't
 know.


Of course they don't.  What they do is they
outsource the collection of certain bases of
information, from which to make trust decisions.
The trust is still in house.  The reports are
acquired from elsewhere.

That's the case for D&B and credit reporting.
For the SEC, I don't understand why it's on
that list.  All they do is offer to store the
filings, they don't analyse them or promise
that they are true.  They are like a library.

International Banking Letters of Credit - that's
money, not trust.  What happens there is that
the receiver gets a letter, and then takes it
to his bank.  If his bank accepts it, it is
acceptable.  The only difference between using
that and a credit card, at a grand level, is
that you are relying on a single custom piece
of paper, with manual checks at every point,
rather than a big automated system that mechanises
the letter of credit into a piece of plastic.
(Actually, I'm totally unsure on these points,
as I've never examined in detail how they work :-)

Insurance - is not the outsourcing of trust,
but the sharing of risks.



Unfortunately, most of the suppliers of these
small factors in the overall trust process of
a company, PKI included, like to tell the
companies that they can, and are, outsourcing
trust.  That works well, because, if the victim
believes it (regardless of whether he is doing
it) then it is easier to sell some other part
of the services.  It's basically a technique
to lull the customer into handing over more
cash without thinking.

But, make no mistake!  Trust itself - the way
it marshals its information and makes its
decisions - is part of the company's core
business.  Any business that outsources its
core specialties goes broke eventually.

And, bringing this back to PKI, the people
who pushed PKI fell for the notion that
trust could be outsourced.  They thus didn't
understand what trust was, and consequently
confused the labelling of PKI as trust with
the efficacy of PKI as a useful component
in any trust model (see Lynn's post).


 The point is that the real world has monetized risk.  But the
 crypto-elite have concentrated too hard on eliminating environmental
 factors from proofs of correctness of algorithms, protocols, and most
 importantly, business processes.


I agree with this, and all the rest.  The no-
risk computing school is fascinated with the
possibility of eliminating entire classes of
risk, so much so that they often introduce
excessive business costs, which results in
general failures of the whole crypto process.

In theory, it's a really good thing to
eliminate classes of attack.  But it can
carry a heavy cost, in any practical
implementation.

We are seeing a lot more attention to
opportunistic cryptography, which is a good
thing.  The 90s was the decade of the no-risk
school, and the result was pathetically low
levels of adoption.  In the future, we'll see
a lot more bad designs, and a lot more corners
cut.  This is partly because serious crypto
people - those you call the crypto-elite - have
burnt out their credibility and are rarely
consulted, and partly because it simply costs
too much for projects to put in a complete
and full crypto infrastructure in the early
stages.


 Crypto is not business-critical.  It's the processes it's supposed to be
 protecting that are, and those are the ones that are insured.
 
 Legal and regulatory frameworks define how and where liability can be
 assigned, and that allows insurance companies to factor in stop-loss
 estimates for their exposure.  Without that, everything is a crap
 shoot.
 
 Watching how regulation is evolving right now, we may not see explicit
 liability assignments to software vendors for their vulnerabilities,
 whether for operating systems or for S/MIME email clients.  Those are
 all far too limited in what they could offer, anyway.
 
 What's happening, instead, is that consumers of those products are
 themselves facing regulatory pressure to assure their customers and
 regulators that they're providing adequate systematic security through
 technology as well as business policies, procedures and (ultimately)
 controls (ie, auditable tests for control failures and adequacy).  When
 customers can no longer say "gee, we collected all this information, and
 who knew our web server wouldn't keep it from being published on the
 NYTimes classified pages?", then vendors will be compelled to deliver
 pieces of the solution that allow THE CUSTOMER (product 

Re: IP2Location.com Releases Database to Identify IP's Geography

2003-12-23 Thread Ian Grigg
Rich Salz wrote:
 
  The IP2Location(TM) database contains more than 2.5 million records for all
  IP addresses. It has over 95 percent matching accuracy at the country
  level. Available at only US$499 per year, the database is available via
  download with free twelve monthly updates.
 
 And since the charge is per-server, not per-query, you could easily
 set up an international free service on a big piece of iron.


These have existed for some time.  Google knows
where they are, although they were a little tough
to find.

iang



Re: Non-repudiation (was RE: The PAIN mnemonic)

2003-12-23 Thread Anne Lynn Wheeler
At 08:23 AM 12/21/2003 -0800, Carl Ellison wrote:
That's an interesting definition, but you're describing a constraint on the
behavior of a human being.  This has nothing to do with cryptosystem choice
or network protocol design.  What mechanisms do you suggest for enforcing
even the constraint you cite?  Of course, that constraint isn't enough.  In
order to achieve non-repudiation, the way it is defined, you need to prove
to a third party (the judge) that a particular human being knowingly caused
a digital signature to be made.  A signature can be made without the
conscious action of the person to whom that key has been assigned in a
number of ways, none of which includes negligence by that person.
total aside ... i just did jury duty in a criminal case last week

a mammal taxonomy can have
* humans
* horses
* mice
which doesn't mean that all mammals have hooves; correspondingly, all 
security doesn't have to have non-repudiation.

if the authorizations and/or permissions require somebody to be an 
employee ... it is possible to authenticate somebody as being an employee 
w/o having to authenticate who they are ... it is sufficient to authenticate 
whether or not they are allowed to do what they are trying to do.
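
A toy sketch of that point (invented token format, illustration only): the
credential carries only the permission claim, the verifier checks the issuer's
MAC over it, and nothing in the exchange says which employee is acting:

    import hashlib, hmac, json

    ISSUER_KEY = b"hypothetical-hr-issuing-key"

    def issue_credential(role):
        claim = json.dumps({"role": role}).encode()
        return {"claim": claim,
                "tag": hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()}

    def is_authorized(cred, required_role):
        want = hmac.new(ISSUER_KEY, cred["claim"], hashlib.sha256).hexdigest()
        return (hmac.compare_digest(want, cred["tag"])
                and json.loads(cred["claim"])["role"] == required_role)

    cred = issue_credential("employee")      # says nothing about who holds it
    assert is_authorized(cred, "employee")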

now, if you have 10,000 people that are authorized to do something ... and 
you have no tracking about what any specific person does ... then if some 
fraud takes place ... you may have no grounds to suspect any of 
the 10,000 over any of the others.  However, if you have a policy that 
employees are strictly not supposed to share passwords and can get fired if 
they do ... and some fraud takes place ... done by an entity 
entering a specific password ... there would possibly be at least 
sufficient grounds to at least get a search warrant. The password by itself 
might not be sufficient to convict beyond a reasonable doubt ... but the 
audit trail might at least help point the investigation in the correct 
direction and also be admitted as circumstantial evidence. The defense 
attorneys in their opening statements said something about the prosecution 
showing means, motive, opportunity and misc. other things.

in any case, I would claim that both human and non-repudiation issues are 
part of security.

I wouldn't go so far as to say that just because a certification authority 
turned on a non-repudiation bit in a certificate ... while having no means at 
all of influencing human behavior ... the mere fact that the bit was turned on 
had anything, in any way, to do with non-repudiation.

there is a recent thread on the PKIX mailing list about the name of the 
non-repudiation bit in a certificate being deprecated. There seem to be 
two separate issues ... 1) calling the bit non-repudiation isn't 
consistent with the meaning of the bit and 2) the semantics of what the bit 
supposedly controls.
--
Anne & Lynn Wheeler    http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
 
