Re: PKI root signing ceremony, etc.

2003-12-15 Thread Rich Salz
 *shrug* it doesn't retroactively enforce the safety net - but that's ok,
 most MS products don't either :)

The whole point is to enhance common practice, not stay at the lowest
common denominator.

 Key management and auditing is pretty much external to the actual software
 regardless of which solution you use I would have thought.

You'd be wrong. :)  I did just download and use XCA for a little bit.
It's practically impossible to audit.  Every key in the database is
protected with the same password, and the system asks for that password
as soon as it starts up.  If I leave the program running while I step
away from my computer, I'm screwed.  The key-holder isn't asked to
confirm each signing -- there's no *ceremony* -- and never has to enter
the password again after the program starts.  For any kind of root,
these are all very bad.

XCA is pretty nice for a Level-2 or small Level-1 CA.  The template
management, etc., is pretty good.  (Having them tied to the key database,
and having the keys be unlocked while making cert requests, are both
real bad ideas, however.)

/r$
--
Rich Salz  Chief Security Architect
DataPower Technology   http://www.datapower.com
XS40 XML Security Gateway  http://www.datapower.com/products/xs40.html
XML Security Overview  http://www.datapower.com/xmldev/xmlsecurity.html

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: PKI root signing ceremony, etc.

2003-12-15 Thread Peter Gutmann
Dave Howe [EMAIL PROTECTED] writes:

Key management and auditing is pretty much external to the actual software
regardless of which solution you use I would have thought.

Not necessarily.  I looked at this in an ACSAC'2000 paper (available from
http://www.acsac.org/2000/abstracts/18.html).  This uses a TP-capable database
as its underlying engine, providing the necessary auditing capabilities for
all CA operations.  This was designed to meet the security/auditing
requirements in a number of PKI standards (see the paper for full details;
I've still got about 30cm of paper stacked up somewhere from this).  The paper
is based on implementation experience with cryptlib: provided you have proper
security on the TP system (that is, a user can't inject arbitrary
transactions into the system or directly access the database files), you
can't do anything without generating an audit trail.  I tested the setup by
running it inside a debugger and resetting/halting the program at every point
in a transaction, and it recovered from each one.  It can be done, it's just
a lot of work to get right.
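The transactional property Peter describes can be sketched in a few lines. This is a hypothetical illustration (a toy schema, not cryptlib's actual design) using SQLite: the CA operation and its audit record are written inside one transaction, so halting the program at any point leaves either both rows or neither -- never a cert without a log entry.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE certs(serial INTEGER PRIMARY KEY, subject TEXT)")
db.execute("CREATE TABLE audit(id INTEGER PRIMARY KEY, op TEXT, detail TEXT)")

def issue_cert(db, serial, subject):
    # One atomic transaction: cert row and audit row commit together.
    with db:
        db.execute("INSERT INTO certs(serial, subject) VALUES (?, ?)",
                   (serial, subject))
        db.execute("INSERT INTO audit(op, detail) VALUES ('issue', ?)",
                   (f"serial={serial} subject={subject}",))

issue_cert(db, 1, "CN=example")

# Simulate a crash between the cert write and the audit write: the
# exception rolls back the whole transaction, including the cert insert.
try:
    with db:
        db.execute("INSERT INTO certs(serial, subject) VALUES (2, 'CN=bad')")
        raise RuntimeError("halted mid-transaction")
except RuntimeError:
    pass

print(db.execute("SELECT COUNT(*) FROM certs").fetchone()[0])  # 1
print(db.execute("SELECT COUNT(*) FROM audit").fetchone()[0])  # 1
```

The same idea scales to a real TP monitor: so long as users can only reach the data through transactions like `issue_cert`, every operation leaves a trail.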

I should mention after having done all that work that most CAs rely on
physical and personnel security more than any automatic logging/auditing.
Take a PC and an HSM, lock it in a back room somewhere, and declare it a
secure CA.

Peter.



Re: Super-Encryption

2003-12-15 Thread Ben Laurie
[EMAIL PROTECTED] wrote:
Sender's Algorithm

SymmetricKey1 = 3DES_IV1, 3DES_Key1
Cipher1 = 3DES_Encrypt(message)
Digest = SHA1(message)
RSA_Key1 = RSA_Private_Encrypt(Digest || 3DES_Key1)
SymmetricKey2 = 3DES_IV2, 3DES_Key2
Cipher2 = 3DES_Encrypt(Cipher1)
RSA_Key2 = RSA_Public_Encrypt(3DES_Key2)
Receiver's Algorithm

3DES_Key2 = RSA_Private_Decrypt(RSA_Key2)
Cipher1 = 3DES_Decrypt(Cipher2)
Digest || 3DES_Key1 = RSA_Public_Decrypt(RSA_Key1)
message = 3DES_Decrypt(Cipher1)
Compare Digest with SHA1(message)
I don't see any value added by cipher1 - what's the point?
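The layering being questioned can be traced with toy stand-ins. In this sketch, a SHA-1-derived XOR keystream stands in for 3DES and the RSA steps are omitted entirely -- it shows only the data flow of the double encryption, and is not a secure construction.

```python
import hashlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR each byte with a keystream byte derived from SHA1(key || counter).
    # A throwaway stand-in for 3DES, purely to show the layering.
    out = bytearray()
    for i, b in enumerate(data):
        ks = hashlib.sha1(key + i.to_bytes(4, "big")).digest()[0]
        out.append(b ^ ks)
    return bytes(out)

toy_decrypt = toy_encrypt  # XOR is its own inverse

message = b"attack at dawn"
key1, key2 = b"inner-key", b"outer-key"

# Sender: inner layer (Cipher1), digest, outer layer (Cipher2)
cipher1 = toy_encrypt(key1, message)
digest = hashlib.sha1(message).digest()
cipher2 = toy_encrypt(key2, cipher1)

# Receiver: peel the outer layer, then the inner, then verify the digest
recovered = toy_decrypt(key1, toy_decrypt(key2, cipher2))
assert recovered == message
assert hashlib.sha1(recovered).digest() == digest
```

Written out this way, Ben's question is easy to see: the outer layer peels off immediately for anyone holding key2, so it is unclear what the extra pass buys.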

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff


Re: PKI root signing ceremony, etc.

2003-12-15 Thread Dave Howe
Peter Gutmann wrote:
 Dave Howe [EMAIL PROTECTED] writes:
 Key management and auditing is pretty much external to the actual
 software regardless of which solution you use I would have thought.

 Not necessarily.  I looked at this in an ACSAC'2000 paper (available
 from http://www.acsac.org/2000/abstracts/18.html).  This uses a
 TP-capable database as its underlying engine, providing the necessary
 auditing capabilities for all CA operations.  This was designed to
 meet the security/auditing requirements in a number of PKI standards
 (see the paper for full details, I've still got about 30cm of paper
 stacked up somewhere from this).  The paper is based on
 implementation experience with cryptlib, you can't do anything
 without generating an audit trail provided you have proper security
 on the TP system (that is, a user can't inject arbitrary transactions
 into the system or directly access the database files).  I tested the
 setup by running it inside a debugger and resetting/halting the
 program at every point in a transaction, and it recovered from each
 one.  It can be done, it's just a lot of work to get right.
*nods*
I meant in this context - certainly, a well designed CA package would
enforce security and audit trailing (I can easily visualise one that uses
a composite (split) access key n of m, and could probably code up such a
tool in a day or so) but Rich's original design had no audit or key
management other than that imposed externally on the (essentially
flatfile) stucture of Openssl command line tools.
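The composite n-of-m access key Dave mentions can be sketched with Shamir secret sharing over a prime field. This is a hypothetical illustration, not a vetted implementation: any n of the m shares reconstruct the key that unlocks the CA, while fewer shares reveal nothing about it.

```python
import random

P = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte key

def split(secret: int, n: int, m: int):
    # Random polynomial of degree n-1 with the secret as constant term;
    # each share is a point (x, poly(x)) on it.
    coeffs = [secret] + [random.randrange(P) for _ in range(n - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, m + 1)]

def combine(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = random.randrange(P)
shares = split(key, 3, 5)  # any 3 of 5 key-holders can unlock the CA
assert combine(shares[:3]) == key
assert combine([shares[0], shares[2], shares[4]]) == key
```

A CA wrapper would decrypt its signing key only after `combine` succeeds, giving the multi-person ceremony that a single shared password cannot.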

 I should mention after having done all that work that most CAs rely on
 physical and personnel security more than any automatic
 logging/auditing. Take a PC and an HSM, lock it in a back room
 somewhere, and declare it a secure CA.
*nods* and that is probably as secure as any other method, and a *lot*
more secure than a safe exe running on insecure hardware.



Re: NEMA rotor machine offered again on ebay

2003-12-15 Thread Matt Crawford
On Dec 14, 2003, at 8:26 AM, Steve Bellovin wrote:
http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=2210624662&ssPageName=ADME:B:SS:US:1

Last time such a machine appeared, some people reported that ebay
blocked their access to the listing.  That included one person in the
U.S.
Curious.  I can access that page from my US IP address on a government  
netblock, with bidirectional DNS resolution to a .gov domain name, IF I  
use Internet Explorer, but not if I use Opera or Safari on the very  
same host.  Cookies are not the issue.



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-15 Thread Jerrold Leichter
|  Which brings up the interesting question:  Just why are the reactions to
|  TCPA so strong?  Is it because MS - who no one wants to trust - is
|  involved?  Is it just the pervasiveness:  Not everyone has a smart card,
|  but if TCPA wins out, everyone will have this lump inside of their
|  machine.
|
| There are two differences between TCPA-hardware and a smart card.
|
| The first difference is obvious. You can plug in and later remove a smart
| card at your will, at the point of your choice. Thus, for homebanking with
| bank X, you may use a smart card, for homebanking with bank Y you
| disconnect the smart card for X and use another one, and before online
| gambling you make sure that none of your banking smart cards is connected
| to your PC. With TCPA, you have much less control over the kind of stuff
| you are using.
|
| This is quite an advantage of smart cards.
However, this advantage is there only because there are so few smart cards,
and so few smart card enabled applications, around.

Really secure mail *should* use its own smart card.  When I do banking, do
I have to remove my mail smart card?  Encryption of files on my PC should
be based on a smart card.  Do I have to pull that one out?  Does that mean
I can't look at my own records while I'm talking to my bank?  If I can only
have one smart card in my PC at a time, does that mean I can *never* cut and
paste between my own records and my on-line bank statement?  To access my
files and my employer's email system, do I have to trust a single
smart card to hold both sets of secrets?

I just don't see this whole direction of evolution as being viable.  Oh,
we'll pass through that stage - and we'll see products that let you connect
multiple smart cards at once, each guaranteed secure from the others.  But
that kind of add-on is unlikely to really *be* secure.

Ultimately, to be useful a trusted kernel has to be multi-purpose, for exactly
the same reason we want a general-purpose PC, not a whole bunch of fixed-
function appliances.  Whether this multi-purpose kernel will be inside the PC,
or a separate unit I can unplug and take with me, is a separate issue.  Given
the current model for PCs, a separate key is probably a better approach.
However, there are already experiments with PC in my pocket designs:  A
small box with the CPU, memory, and disk, which can be connected to a small
screen to replace a palmtop, or into a unit with a big screen, a keyboard,
etc., to become my desktop.  Since that small box would have all my data, it
might make sense for it to have the trusted kernel.  (Of course, I probably
want *some* part to be separate to render the box useless if stolen.)

| The second point is perhaps less obvious, but may be more important.
| Usually, *your* PC hard- and software is supposed to protect *your*
| assets and satisfy *your* security requirements. The trusted hardware
| add-on in TCPA is supposed to protect an *outsider's* assets and satisfy
| the *outsider's* security needs -- from you.
|
| A TCPA-enhanced PC is thus the servant of two masters -- your servant
| and the outsider's. Since your hardware connects to the outsider directly,
| you can never be sure whether it works *against* you by giving the
| outsider more information about you than it should (from your point of
| view).
|
| There is nothing wrong with the idea of a trusted kernel, but trusted
| means that some entity is supposed to trust the kernel (what else?). If
| two entities, who do not completely trust each other, are supposed to both
| trust such a kernel, something very very fishy is going on.
Why?  If I'm going to use a time-shared machine, I have to trust that the
OS will keep me protected from other users of the machine.  All the other
users have the same demands.  The owner of the machine has similar demands.

The same goes for any shared resource.  A trusted kernel should provide some
isolation guarantees among contexts.  These guarantees should be independent
of the detailed nature of the contexts.  I think we understand pretty well
what the *form* of these guarantees should be.  We do have problems actually
implementing such guarantees in a trustworthy fashion, however.

Part of the issue with TCPA is that the providers of the kernel that we are
all supposed to trust blindly are also going to be among those who will use it
heavily.  Given who those producers are, that level of trust is unjustifiable.

However, suppose that TCPA (or something like it) were implemented entirely by
independent third parties, using open techniques, and that they managed to
produce both a set of definitions of isolation, and an implementation, that
were widely seen to correctly specify, embody, and enforce strict protection.
How many of the criticisms of TCPA would that mute?  Some:  Given open
standards, a Linux TCPA-based computing platform could be produced.
Microsoft's access to the trusted kernel would be exactly the same as
everyone else's; there would be no