About that Mighty Fortress... What's it look like?

2010-07-31 Thread Ray Dillinger

Assume, contra facto, that in some future iteration of PKI, it 
works, and works very well.  

What the heck does it look like?

At a guess...  Anybody can create a key (or key pair).  They 
get one clearly marked "private", which they're supposed to keep, 
and one clearly marked "public", which they can give out to anybody
they want to correspond with. 

Guarantors and certifying authorities can endorse the public key
for specific purposes relating to their particular application.
Your landlord can endorse your keycard to allow you to get into 
the apartment you rent, the state government can endorse your 
key when you get a contractor's license or private investigator's 
license or register a business to sell to consumers and pay taxes,
etc.  

There are no certifying agencies other than interested parties 
and people who issue licenses/guarantees for specific reasons. 

You can use your private key to endorse somebody else's key 
to allow them to do some particular thing (you have to write a 
short note that says what) that involves you, or check someone 
else's key to see if it's one that you've endorsed.  If you've 
endorsed it, you get back the short note that you wrote, telling 
you what purpose you've endorsed it for. 

Anybody who's endorsed a key can prove that they've endorsed it 
by publishing their endorsement.  You can read and verify public 
endorsements using the public keys of the involved parties.

And you can revoke your endorsement of any particular key, at any
time, for any reason.  The action won't affect other endorsements
of the same key, nor other endorsements you've made.

Finally, you can use your private key to prepare a revocation, 
which can be held indefinitely in some backup storage, insurance
database, or safe-deposit box. If you ever lose your private key, 
you send the revocation and everybody who has endorsed your 
public key gets notified that it's no good anymore.
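The endorse/check/revoke semantics described above can be sketched in a few lines. This is only an illustration of the model's logic, with real public-key signatures abstracted away to a simple table; the class and method names are invented for this sketch, not drawn from any deployed system:

```python
# Sketch of Ray's endorse/check/revoke model.  Real signature
# operations are abstracted to a dictionary lookup; all names here
# are illustrative.

class Endorser:
    def __init__(self):
        self.endorsements = {}  # public key -> short note on purpose

    def endorse(self, pubkey, note):
        """Endorse a key for a specific purpose, recorded in a short note."""
        self.endorsements[pubkey] = note

    def check(self, pubkey):
        """Return the note if we endorsed this key, else None."""
        return self.endorsements.get(pubkey)

    def revoke(self, pubkey):
        """Revoke one endorsement without touching any other."""
        self.endorsements.pop(pubkey, None)

landlord = Endorser()
landlord.endorse("tenant-pubkey", "may open apartment 4B")
assert landlord.check("tenant-pubkey") == "may open apartment 4B"
landlord.revoke("tenant-pubkey")          # affects only this endorsement
assert landlord.check("tenant-pubkey") is None
```

Note that revocation here is purely local to the endorser, which is exactly the property the model claims: revoking one endorsement touches no other endorsement of the same key.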

I think this model is simple enough to be understood by 
ordinary people.  It's also clear enough in its semantics to 
be implemented in a straightforward way.  Is it applicable 
to the things we want to use a PKI for?

Bear


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Is this the first ever practically-deployed use of a threshold scheme?

2010-07-31 Thread Peter Gutmann
Apparently the DNS root key is protected by what sounds like a five-of-seven
threshold scheme, but the description is a bit unclear.  Does anyone know
more?

(Oh, and for people who want to quibble over "practically-deployed": I'm not
 aware of any real usage of threshold schemes for anything; at best you have
 combine-two-key-components (usually via XOR), but no serious use of real n-
 of-m that I've heard of.  Mind you, one single use doesn't necessarily count
 as "practically deployed" either).

Peter (who has two more Perry-DoS-ing conversation-starter posts to make, but
   will leave them for awhile now :-).



Re: About that Mighty Fortress... What's it look like?

2010-07-31 Thread Perry E. Metzger
On Fri, 30 Jul 2010 19:40:49 -0700 Ray Dillinger b...@sonic.net
wrote:
 Assume, contra facto, that in some future iteration of PKI, it
 works, and works very well.

 What the heck does it look like?

 At a guess  Anybody can create a key (or key pair).  They 
 get one clearly marked private, which they're supposed to keep, 
 and one clearly marked public, which they can give out to anybody
 they want to correspond with.

 Guarantors and certifying authorities can endorse the public key
 for specific purposes relating to their particular application.
 Your landlord can endorse your keycard to allow you to get into 
 the apartment you rent, the state government can endorse your 
 key when you get a contractor's license or private investigator's 
 license or register a business to sell to consumers and pay taxes,
 etc.

You are still following the same model that has failed over and over
and over again. Endorsing keys is the same "we have no internet, so
we rely on having big books to tell us whether a person's credit card
was stolen" model.

There is no rational reason at all that someone should endorse a key
when it is possible to simply do a real time check for
authorization. There is no reason to sign a key when you can just
check if the key is in a database.

 And you can revoke your endorsement of any particular key, at any
 time, for any reason.

How?

If you have to do a real time check for every use anyway, the
signature on the key is unnecessary, as you can just ask "is this user
authorized?" If you can't do a real time check, then the system fails
anyway. Either way, there is no logical or architectural reason for
signatures on keys.

 I think this model is simple enough to be understood by ordinary
 people.

I challenge you to explain any such model to my mother
successfully. Indeed, I think any model that needs to be explained to
anyone has already failed.

A good model is one in which if you screw up, nothing bad can
happen. For example, if you go to the phisherman's web site instead of
your bank's, nothing you can possibly do will endanger your
security. The worst that can happen is you end up frustrated and
puzzled, but you never can leak information to the phisherman. It may
be impossible to achieve this with complete perfection, but if, for
example, it would be necessary for someone trying to steal your
credentials to social engineer you into granting them actual physical
access to a smart token or some such for a while to get at your bank
account, things are now good enough for most purposes.

Perry
-- 
Perry E. Metzger  pe...@piermont.com



Re: init.d/urandom : saving random-seed

2010-07-31 Thread John Denker
Hi Henrique --

This is to answer the excellent questions you asked at
  http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=587665#81

Since that bug is now closed (as it should be), and since these
questions are only tangentially related to that bug anyway, I am 
emailing you directly.  Feel free to forward this as appropriate.

 1. How much data of unknown quality can we feed the random pool at boot,
before it causes damage (i.e. what is the threshold where we violate the
"you are not going to be any worse than you were before" rule) ?

There is no possibility of making things worse.  It is like shuffling
a deck of cards:  If it is already shuffled, shuffling it some more
is provably harmless.

This property is a core design requirement of the PRNG, and has been
for ages.

Note that writing to /dev/random requires no privileges, which makes
sense in light of this property.
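A simplified model of why this holds: think of the pool state as evolving by hashing the old state together with the new input. Even attacker-chosen input cannot reduce the unpredictability contributed by the prior state, because recovering that state would require inverting the hash. (This is a toy model of the mixing property, not the actual mixing function in random.c:)

```python
import hashlib

def mix(state: bytes, new_input: bytes) -> bytes:
    # Toy pool-mixing model: next state = H(old state || input).
    # Even a fully attacker-known input cannot make the pool worse,
    # since the old state still enters the hash unpredictably.
    return hashlib.sha256(state + new_input).digest()

state = hashlib.sha256(b"prior entropy").digest()
attacker_input = b"\x00" * 32       # worst case: input fully known
mixed = mix(state, attacker_input)
assert mixed != state               # the state still evolves
assert len(mixed) == 32
```

This is the card-shuffling argument in code: shuffling an already-shuffled deck, even with a rigged shuffle, leaves it no less shuffled.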

 2. How dangerous it is to feed the pool with stale seed data in the next
boot (i.e. in a failure mode where we do not regenerate the seed file) ?

As is so often the case in the security / crypto business, the answer
depends on your threat model.  The demands placed on a PRNG vary wildly
from one application to another.  Interesting use cases include:
 a) low-grade randomness: For non-adversarial applications such as Monte
  Carlo integration of a physics problem, almost any PRNG will do.  Even
  a LFSR would do, even though a LFSR can easily be cryptanalyzed.  The
  point is that nobody is going to bother attempting the cryptanalysis.
 b) current /dev/urandom: The consensus among experts is that /dev/urandom
  is routinely used in ways for which it is not suited.  See my previous
  email, or refer to
http://www.pinkas.net/PAPERS/gpr06.pdf
 c) high-grade randomness: For high-stakes adversarial applications, 
  including crypto and gaming, you really ought to use a TRNG not a
  PRNG.  In this case, no state is required and no seed is required, 
  so the question of how to preserve the state across reboots does not
  arise.  Constructive suggestion:  for high-grade applications, use
  Turbid:  http://www.av8n.com/turbid/

To repeat:  For serious applications, I wouldn't trust /dev/urandom at all, 
and details of how it gets seeded are mostly just re-arranging the deck 
chairs on the Titanic.  The question is not whether this-or-that seed
preservation policy is safe.  The most you can ask of a seed preservation
policy is that the PRNG after reboot will be _not worse_ than it was before.

Now, to answer the question:  A random-seed file should never be reused.
Never ever.

Reusing the random-seed file makes the PRNG very much worse than it would
otherwise be.  By way of illustration, suppose you are using the computer
to help you play "battleship" or "go fish" against a ten-year-old opponent.
If you use the same 'random' numbers after every reboot, the opponent is
going to notice.  You are going to lose.  In more-demanding situations,
against an opponent with more skill and more motivation, you are going to
lose even more miserably.

 3. What is the optimal size of the seed data based on the pool size ?

While we are on the subject, let me point out a bug in all recent versions
of init.d/urandom (including the current sid version as included in
initscripts_2.88dsf-11_amd64.deb) : 
  The poolsize as reported by /proc/sys/kernel/random/poolsize has units 
  of _bits_ whereas the random-seed filesize has units of _bytes_.  It is 
  a bug to directly compare these numbers, or to set one of them based on 
  the other.  There needs to be a conversion factor, perhaps something like
  this:
  (( DD_BYTES = ($POOLSIZE + 7)/8 ))
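In runnable form, with the poolsize hard-coded for illustration (on a real Linux system it would be read from /proc/sys/kernel/random/poolsize, which reports bits; the variable names follow the snippet above, not any actual init script):

```shell
#!/bin/sh
# Bits-to-bytes conversion for the random-seed file, rounding up.
# POOLSIZE is hard-coded here; on a real system it would be
#   POOLSIZE=$(cat /proc/sys/kernel/random/poolsize)   # units: bits
POOLSIZE=4096
DD_BYTES=$(( (POOLSIZE + 7) / 8 ))   # units: bytes, rounded up
echo "$DD_BYTES"                      # 4096-bit pool -> 512-byte seed file
# The seed file could then be written with something like:
#   dd if=/dev/urandom of=/var/lib/urandom/random-seed bs="$DD_BYTES" count=1
```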

Now, to answer the question:  It suffices to make the random-seed file 
contain the same number of bits as the PRNG's internal state vector 
(poolsize).  Call this the BBJR size (baby-bear-just-right).

On the other hand, it is harmless to make the random-seed file larger than
it needs to be.

In contrast, using the size of the random-seed file to reset the PRNG's 
poolsize is a bad idea, especially if the random-seed file is (intentionally 
or otherwise) bigger or smaller than the BBJR size.

Semi-constructive pseudo-suggestion:  *IF* we want to keep track of the 
poolsize, it might make more sense to store it separately and explicitly, 
in its own file.  This would make the code simpler and more rational.

On the other hand, I'm not sure why there is *any* code in init.d/urandom
for saving or setting the poolsize.  Chez moi /proc/sys/kernel/random/poolsize
is read-only.  Indeed I would expect it to be read-only, since changing 
it would have drastic consequences for the internal operation of the PRNG, 
and looking at random.c I don't see any code to handle such a change.

So the real suggestion is to eliminate from the Linux init.d/urandom 
all of the code that tries to ascertain the size of the random-seed 
file and/or tries to set the poolsize.  (For non-Linux systems, the
situation may or may not be 

Five Theses on Security Protocols

2010-07-31 Thread Perry E. Metzger
Inspired by recent discussion, these are my theses, which I hereby
nail upon the virtual church door:

1 If you can do an online check for the validity of a key, there is no
  need for a long-lived signed certificate, since you could simply ask
  a database in real time whether the holder of the key is authorized
  to perform some action. The signed certificate is completely
  superfluous.

  If you can't do an online check, you have no practical form of
  revocation, so a long-lived signed certificate is unacceptable
  anyway.
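The online-check alternative in thesis 1 can be sketched concretely. The table and key names below are invented for illustration; the point is only that authorization becomes a lookup at time of use, and revocation becomes row deletion:

```python
# Sketch of thesis 1: authorization as a real-time lookup rather than
# a long-lived signed certificate.  All names are illustrative.

AUTHORIZED_KEYS = {
    "alice-pubkey": "may transfer funds",
    "bob-pubkey": "may read statements",
}

def authorize(pubkey: str, action: str) -> bool:
    """Authorised/declined decision, checked at time of use.
    Revocation is just deleting the row: no CRLs, no expiry dates."""
    return AUTHORIZED_KEYS.get(pubkey) == action

assert authorize("alice-pubkey", "may transfer funds")
assert not authorize("alice-pubkey", "may read statements")
del AUTHORIZED_KEYS["alice-pubkey"]       # instant, unilateral revocation
assert not authorize("alice-pubkey", "may transfer funds")
```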

2 A third party attestation, e.g. any certificate issued by any modern
  CA, is worth exactly as much as the maximum liability of the third
  party for mistakes. If the third party has no liability for
  mistakes, the certification is worth exactly nothing. All commercial
  CAs disclaim all liability.

  An organization needs to authenticate and authorize its own users;
  it cannot ask some other organization with no actual liability to
  perform this function on its behalf. A bank has to know its own
  customers, the customers have to know their own bank. A company
  needs to know on its own that someone is allowed to reboot a machine
  or access a database.

3 Any security system that demands that users be educated,
  i.e. which requires that users make complicated security decisions
  during the course of routine work, is doomed to fail.

  For example, any system which requires that users actively make sure
  throughout a transaction that they are giving their credentials to
  the correct counterparty and not to a thief who could reuse them
  cannot be relied on.

  A perfect system is one in which no user can perform an action that
  gives away their own credentials, and in which no action can be
  authorized without the user's participation and knowledge. No
  system can be perfect, but that is the ideal to be sought after.

4 As a partial corollary to 3, but which requires saying on its own:
  If false alarms are routine, all alarms, including real ones, will
  be ignored. Any security system that produces warnings that need to
  be routinely ignored during the course of everyday work, and which
  can then be ignored by simple user action, has trained its users to be
  victims.

  For example, the failure of a cryptographic authentication check
  should be rare, and should nearly always actually mean that
  something bad has happened, like an attempt to compromise security,
  and should never, ever, ever result in a user being told "oh, ignore
  that warning," and should not even provide a simple UI that permits
  the warning to be ignored should someone advise the user to do so.

  If a system produces too many false alarms to permit routine work to
  happen without an "ignore warning" button, the system is worthless
  anyway.

5 Also related to 3, but important in its own right: to quote Ian
  Grigg:

*** There should be one mode, and it should be secure. ***

  There must not be a confusing combination of secure and insecure
  modes, requiring the user to actively pay attention to whether the
  system is secure, and to make constant active configuration choices
  to enforce security. There should be only one, secure mode.

  The more knobs a system has, the less secure it is. It is trivial to
  design a system sufficiently complicated that even experts, let
  alone naive users, cannot figure out what the configuration
  means. The best systems should have virtually no knobs at all.

  In the real world, bugs will be discovered in protocols, hash
  functions and crypto algorithms will be broken, etc., and it will be
  necessary to design protocols so that, subject to avoiding downgrade
  attacks, newer and more secure modes can and will be used as they
  are deployed to fix such problems. Even then, however, the user
  should not have to make a decision to use the newer more secure mode,
  it should simply happen.


Perry
-- 
Perry E. Metzger  pe...@piermont.com



Re: A mighty fortress is our PKI, Part II

2010-07-31 Thread Bill Stewart

At 07:16 AM 7/28/2010, Ben Laurie wrote:

SSH does appear to have got away without revocation, though the nature
of the system is s.t. if I really wanted to revoke I could almost
always contact the users and tell them in person. This doesn't scale
very well to SSL-style systems.


Unfortunately, there _are_ ways that it can scale adequately.
Bank of America has ~50 million customers,
so J. Random Spammer sends out 500 million emails saying
"Bank of America is updating our security procedures,
please click on the following link to update your browser."
It's more efficient for BofA to send out the message themselves,
only to actual subscribers, with the actual keys,
helping to train them to accept phishing mail in the process,
but apparently even doing it the hard way scales well enough for some 
people to make money.




Re: Five Theses on Security Protocols

2010-07-31 Thread Anne Lynn Wheeler

corollary to security proportional to risk is parameterized risk management 
... where variety of technologies with varying integrity levels can co-exist within the same 
infrastructure/framework. transactions exceeding particularly technology risk/integrity threshold 
may still be approved given various compensating processes are invoked (allows for multi-decade 
infrastructure operation w/o traumatic dislocation moving from technology to technology as well as 
multi-technology co-existence).

in the past I had brought this up to the people defining V3 extensions ... 
early in their process ... and they offered to let me do the work defining a V3 
integrity level field. My response was: why bother with stale, static 
information when real valued operations would use a much more capable dynamic, 
realtime, online process?

--
virtualization experience starting Jan1968, online at home since Mar1970



Re: Five Theses on Security Protocols

2010-07-31 Thread John Levine
Nice theses.  I'm looking forward to the other 94.  The first one is a
nice summary of why DKIM might succeed in e-mail security where S/MIME
failed.  (Succeed as in, people actually use it.)

2 A third party attestation, e.g. any certificate issued by any modern
  CA, is worth exactly as much as the maximum liability of the third
  party for mistakes. If the third party has no liability for
  mistakes, the certification is worth exactly nothing. All commercial
  CAs disclaim all liability.

Geotrust, to pick the one I use, has a warranty of $10K on their cheap
certs and $150K on their green bar certs.  Scroll down to the bottom
of this page where it says Protection Plan:

http://www.geotrust.com/resources/repository/legal/

It's not clear to me how much this is worth, since it seems to warrant
mostly that they won't screw up, e.g., leak your private key, and
they'll only pay to the party that bought the certificate, not third
parties that might have relied on it.

R's,
John



Re: Five Theses on Security Protocols

2010-07-31 Thread Peter Gutmann
Perry E. Metzger pe...@piermont.com writes:

Inspired by recent discussion, these are my theses, which I hereby nail upon
the virtual church door:

Are we allowed to play peanut gallery for this?

1 If you can do an online check for the validity of a key, there is no
  need for a long-lived signed certificate, since you could simply ask
  a database in real time whether the holder of the key is authorized
  to perform some action.

Based on the ongoing discussion I've now had, both on-list and off, about
blacklist-based key validity checking [0], I would like to propose an
addition:

  The checking should follow the credit-card authorised/declined model, and
  not be based on blacklists (a.k.a. "the second dumbest idea in computer
  security"; see
  http://www.ranum.com/security/computer_security/editorials/dumb/).

(Oh yes, for a laugh, have a look at the X.509 approach to doing this.  It's
eighty-seven pages long, and that's not including the large number of other
RFCs that it includes by reference: http://tools.ietf.org/html/rfc5055).

 The signed certificate is completely superfluous.

This is, I suspect, the reason for the vehement opposition to any kind of
credit-card style validity checking of keys, if you were to introduce it, it
would make both certificates and the entities that issue them superfluous.

Peter.

[0] It's kinda scary that it's taking this much debate to try and convince
people that blacklists are not a valid means of dealing with arbitrarily
delegatable capabilities.



Re: Is this the first ever practically-deployed use of a threshold scheme?

2010-07-31 Thread Jakob Schlyter
On 31 jul 2010, at 08.44, Peter Gutmann wrote:

 Apparently the DNS root key is protected by what sounds like a five-of-seven
 threshold scheme, but the description is a bit unclear.  Does anyone know
 more?

The DNS root key is stored in HSMs. The key backups (maintained by ICANN) are 
encrypted with a storage master key (SMK), created inside the HSM and then 
split among 7 people (aka Recovery Key Share Holders). To recover the SMK in 
case of all 4 HSMs going bad, 5 of 7 key shares are required. 
(https://www.iana.org/dnssec/icann-dps.txt section 5.2.4)

According to the FIPS 140-2 Security Policy of the HSM, an AEP Keyper, the 
M-of-N key split is done using a Lagrange interpolating polynomial.
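For readers unfamiliar with the construction: an m-of-n split hides the secret as the constant term of a random degree-(m-1) polynomial over a prime field, and any m shares recover it by Lagrange interpolation at x = 0. The toy sketch below illustrates the mathematics only; the modulus and parameters are chosen for the example and reflect nothing about the actual AEP Keyper implementation:

```python
# Toy Shamir-style m-of-n secret sharing via Lagrange interpolation.
# Illustrative only; not the HSM's actual implementation.
import random

P = 2**521 - 1  # a Mersenne prime, comfortably larger than the secret

def split(secret, m, n):
    """Split secret into n shares, any m of which suffice to recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for k, (xk, _) in enumerate(shares):
            if k != j:
                num = num * (-xk) % P
                den = den * (xj - xk) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, 5, 7)        # 5-of-7, as for the root key SMK
assert recover(shares[:5]) == 123456789
assert recover(shares[2:7]) == 123456789
```

Fewer than m shares reveal nothing about the secret, since any candidate constant term is consistent with some degree-(m-1) polynomial through them.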


I'd be happy to answer any additional questions,

jakob (part of the team who designed and implemented this)



Re: Five Theses on Security Protocols

2010-07-31 Thread Chris Palmer
Usability engineering requires empathy. Isn't it interesting that nerds
built themselves a system, SSH, that mostly adheres to Perry's theses? We
nerds have empathy for ourselves. But when it comes to a system for other
people, we suddenly lose all empathy and design a system that ignores
Perry's theses.

(In an alternative scenario, given the history of X.509, we can imagine that
PKI's woes are due not to nerd un-empathy, but to
government/military/hierarchy-lover un-empathy. Even in that scenario, nerd
cooperation is necessary.)

The irony is, normal people and nerds need systems with the same properties,
for the same reasons.
