example: secure computing kernel needed

2003-12-11 Thread John S. Denker
Previous discussions of secure computing technology have
been in some cases sidetracked and obscured by extraneous
notions such as
 -- Microsoft is involved, therefore it must be evil.
 -- The purpose of secure computing is DRM, which is
intrinsically evil ... computers must be able to
copy anything anytime.
Now, in contrast, here is an application that begs for
a secure computing kernel, but has nothing to do with
Microsoft and nothing to do with copyrights.
Scenario:  You are teaching chemistry in a non-anglophone
country.  You are giving an exam to see how well the
students know the periodic table.
 -- You want to allow students to use their TI-83 calculators
for *calculating* things.
 -- You want to allow the language-localization package.
 -- You want to disallow the app that stores the entire
periodic table, and all other apps not explicitly
approved.
The hardware manufacturer (TI) offers a little program
that purports to address this problem
  http://education.ti.com/us/product/apps/83p/testguard.html
but it appears to be entirely non-cryptologic and therefore
easily spoofed.
I leave it as an exercise for the reader to design a
calculator with a secure kernel that is capable of
certifying something to the effect that no apps and
no data tables (except for ones with the following
hashes) have been accessible during the last N hours.
Note that I am *not* proposing reducing the functionality
of the calculator in any way.  Rather I am proposing a
purely additional capability, namely the just-mentioned
certification capability.
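For concreteness, here is a toy sketch (in Python, with made-up
app images standing in for real TI-83 software) of the allowlist
check such a certification would rest on.  The hard part is not
this check but the secure kernel that makes its result trustworthy:

```python
import hashlib

def app_hash(image: bytes) -> str:
    """Hash an app image the way the secure kernel might."""
    return hashlib.sha256(image).hexdigest()

# Hypothetical allowlist: the bare math firmware and the
# language-localization package, and nothing else.
math_firmware = b"math firmware image"
localization = b"localization package image"
APPROVED = {app_hash(math_firmware), app_hash(localization)}

def certify(accessible_apps: list[bytes]) -> bool:
    """True iff every app accessible during the exam window
    is on the teacher's allowlist."""
    return all(app_hash(img) in APPROVED for img in accessible_apps)
```

The kernel's job would be to attest that `certify` ran over
*everything* accessible in the last N hours, which no ordinary
application-level check can do.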
I hope this example will advance the discussion of secure
computing.  Like almost any powerful technology, we need
to discuss
 -- the technology *and*
 -- the uses to which it will be put
... but we should not confuse the two.
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Gresham's Law?

2003-11-21 Thread John S. Denker
On 11/19/2003 07:51 PM, Jon Callas wrote:
This is indeed the only case I know of where government has given 
protection and preference to inferior systems over superior ones.
It's not hard to discover other cases.

At the philosophical level, one could argue that
protecting the weak is one of the most fundamental
raisons d'être for a government.
If you don't like the effects of Gresham's law,
replacing it with the law of the jungle isn't
really an improvement.  Practicality lies in
the vast gray area in the middle.
As a specific example, consider the legal status
of the lock on my door.  Any burglar with even
rudimentary skills could pick the lock.  One with
even less skill could break the fancy glass
beside the door.  More-secure locks and more-secure
doors are readily available.  Yet the law takes
notice of the lock.  If I don't have a door, if
you waltz in it might be trespass or it might be
no offence at all.  But if you pick or smash your
way past a locked door, without permission or
some very special reason, it's likely to be
felony breaking and entering.
Maybe you think that the BE laws should only
apply to state-of-the-art high security vaults.
I don't.  I think the existing lock performs a
useful symbolic role:  it puts you on notice
that you don't belong there.  You can't get
past it by accident.  The law takes over from
there.
 ... in my talks and testimony about the DMCA.
 I referred to Gresham's Law as it applies to security. I also have
 called the DMCA The Snake-Oil Protection Act.
A friend of mine once told me:  Never support a
strong argument with a weak one.
There exist strong arguments why DMCA is a bad
law.  Boldly asserting that the government has
never heretofore built laws around imperfect
technology is not going to impress any lawmakers.
Also, if you're going to argue against something,
it pays to know where the other side is coming
from.  In the areas where crypto works well, it
works so extraordinarily well that bad systems
can, over time, be drowned in their own snake-oil
and forgotten.  If protecting substandard crypto
were the only issue, I doubt anybody would have
gone to the trouble of passing a law.
The point of the law is elsewhere:  the proponents
are worried about what happens in the thousand and
one cases where strong crypto doesn't solve the
problem.
To repeat:  If you want to make an argument against
the other side, it's a bad strategy to start by
misjudging what they're arguing for.


Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread John S. Denker
On 10/22/2003 04:33 PM, Ian Grigg wrote:

 The frequency of MITM attacks is very low, in the sense that there
 are few or no reported occurrences.
We have a disagreement about the facts on this point.
See below for details.
 This makes it a challenge to
 respond to in any measured way.
We have a disagreement about the philosophy of how to
measure things.  One should not design a bridge according
to a simple measurement of the amount of cross-river
traffic in the absence of a bridge.  One should not approve
a launch based on the observed fact that previous instances
of O-ring failures were non-fatal.
Designers in general, and cryptographers in particular,
ought to be proactive.
But this philosophy discussion is a digression, because
we have immediate practical issues to deal with.
 Nobody doubts that it can occur, and that it *can* occur in practice.
 It is whether it *does* occur that is where the problem lies.
According to the definitions I find useful, MITM is
basically a double impersonation.  For example,
Mallory impersonates PayPal so as to get me to
divulge my credit-card details, and then impersonates
me so as to induce my bank to give him my money.
This threat is entirely within my threat model.  There
is nothing hypothetical about this threat.  I get 211,000
hits from
  http://www.google.com/search?q=ID-theft
SSL is distinctly less than 100% effective at defending
against this threat.  It is one finger in a dike with
multiple leaks.  Client certs arguably provide one
additional finger ... but still multiple leaks remain.
==

The expert reader may have noticed that there are
other elements to the threat scenario I outlined.
For instance, I interact with Mallory for one seemingly
trivial transaction, and then he turns around and
engages in numerous and/or large-scale transactions.
But this just means we have more than one problem.
A good system would be robust against all forms
of impersonation (including MITM) *and* would be
robust against replays *and* would ensure that
trivial things and large-scale things could not
easily be confused.  Et cetera.


Re: WYTM?

2003-10-17 Thread John S. Denker
On 10/16/2003 07:19 PM, David Honig wrote:

 it would make sense for the original vendor website (eg Palm)
 to have signed the MITM site's cert (palmorder.modusmedia.com),
 not for Verisign to do so.  Even better, for Mastercard to have signed
 both Palm and palmorder.modusmedia.com as well.  And Mastercard to
 have printed its key's signature in my monthly paper bill.
Bravo.  Those are golden words.

Let me add my few coppers:

1) This makes contact with a previous thread wherein
the point was made that people often unwisely talk
about identities when they should be talking about
credentials aka capabilities.
I really don't care about the identity of the
order-taking agent (e.g. palmorder.modusmedia.com).
What I want to do is establish the *credentials*
of this *session*.  I want a session with the
certified capability to bind palm.com to a
contract, and the certified capability to handle
my credit-card details properly.
2) We see that threat models (as mentioned
in the Subject: line of this thread), while
an absolutely vital part of the story, are
not the whole story.  One always needs a
push-pull approach, documenting the good
things that are supposed to happen *and* the
bad things that are supposed to not happen
(i.e. threats).
3) To the extent that SSL focuses on IDs rather
than capabilities, IMHO the underlying model has
room for improvement.
4a) This raises some user-interface issues.  The
typical user is not a world-class cryptographer
and may not have a clear idea just what ensemble
of credentials a given session ought to have.
This is not a criticism of credentials;  the user
doesn't know what ID the session ought to have
under the current system, as illustrated by the
Palm example.  The point is that if we want
something better than what we have now, we have
a lot of work to do.
4b) As a half-baked thought:  One informal intuitive
notion that users have is that if a session displays
the MasterCard *logo* it must be authorized by
MasterCard.  This notion is enforceable by law
in the long run.  Can we make it enforceable
cryptographically in real time?  Perhaps the CAs
should pay attention not so much to signing domain
names (with some supposed responsibility to refrain
from signing abusively misspelled names e.g.
pa1m.com) but rather more to signing logos (with
some responsibility to not sign bogus ones).
Then the browser (or other user interface) should
verify -- automatically -- that a session that
wishes to display certain logos can prove that
it is authorized to do so.  If the logos check
out, they should be displayed in some distinctive
way so that a cheap facsimile of a logo won't be
mistaken for a cryptologically verified logo.
Even if you don't like my half-baked proposal (4b)
I hope we can all agree that the current ID-based
system has room for improvement.
=

Tangentially-related point about credentials:

In a previous thread the point was made that
anonymous or pseudonymous credentials can only
say positive things.  That is, I cannot discredit
you by giving you a discredential.  You'll just
throw it away.  If I somehow discredit your
pseudonym, you'll just choose another and start
over.
This problem can be alleviated to some extent
if you can post a fiduciary bond.  Then if you
do something bad, I can demand compensation from
the agency that issued your bond.  If this
happens a lot, they may revoke your bond.  That
is, you can be discredited by losing a credential.
This means I can do business with you without
knowing your name or how to find you.  I just
need to trust the agency that issued your bond.
The agency presumably needs to know a lot about
you, but I don't.


Re: cryptographic ergodic sequence generators?

2003-10-15 Thread John S. Denker
Perry E. Metzger wrote:

I've noted to others on this before that for an application like
the IP fragmentation id, it might be even better if no repeats
occurred in any block of 2^31 (n being 32) but the sequence did not
repeat itself (or at least could be harmlessly reseeded at very very
long intervals).
I assume the point of the reseeding is to make
the ID-values more unpredictable.
On 09/07/2003 11:18 AM, David Wagner wrote:

 Let E_k(.) be a secure block cipher on 31 bits with key k.
 Pick an unending sequence of keys k0, k1, k2, ... for E.

 Then your desired sequence can be constructed by
   E_k0(0), E_k0(1), E_k0(2), ..., E_k0(2^31 - 1),
   2^31 + E_k1(0), 2^31 + E_k1(1), ..., 2^31 + E_k1(2^31 - 1),
   E_k2(0), E_k2(1), E_k2(2), ..., E_k2(2^31 - 1),
   2^31 + E_k3(0), 2^31 + E_k3(1), ..., 2^31 + E_k3(2^31 - 1),
Again if we assume the point is to make the values
unpredictable (not just ergodic), then there is
room for improvement.
To see what I mean, divide the values into generations
G=0,1,2,3... where each row in the tableau above is
one generation.
The problem is that at the end of each generation,
the values become highly predictable, à la Blackjack.
David's proposal can be improved by the method used
by Blackjack dealers:  shuffle early.  In each
generation, let the argument of E_kG(.) max out at
some fraction (f) of 2^(n-1).  A limit of f=1/2 is
the obvious choice, although other f values e.g. f=2/3
work nicely too.  The domain and range of E_kG(.) are
still unrestricted (n-1)-bit numbers.
This gives us the following properties
 -- Guaranteed no repeats within the last f*2^(n-1) IDs.
 -- Probably no repeats in an even longer time.
 -- Even if the opponent is a hard-working Blackjack
player, he has only one chance in (1-f)*2^(n-1)
of guessing the next value.  To put this number in
context, note that the opposition has one chance
in 2^(n-1) of guessing the next value without any
work at all, just by random guessing.
Setting f too near zero degrades the no-repeat guarantee.
Setting f too near unity leads to the Blackjack problem.
Setting f somewhere in the middle should be just fine.
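As a toy illustration of the shuffle-early scheme (a key-seeded
random permutation stands in for the secure block cipher E_k,
and n is shrunk to 8 bits so the no-repeat guarantee is easy to
check exhaustively):

```python
import secrets
import random

N = 8                  # total ID width in bits (toy size)
HALF = 1 << (N - 1)    # 2^(n-1)
F = 0.5                # the shuffle-early fraction f

def toy_cipher(key: int) -> list[int]:
    """Toy stand-in for a secure (n-1)-bit block cipher E_k:
    a key-seeded random permutation of 0 .. 2^(n-1)-1."""
    perm = list(range(HALF))
    random.Random(key).shuffle(perm)
    return perm

def id_stream():
    """Yield IDs generation by generation.  Each generation G
    uses a fresh key and emits only the first f*2^(n-1) outputs
    of E_kG (shuffle early); the phase bit (G mod 2) becomes
    the top bit of the ID."""
    g = 0
    while True:
        perm = toy_cipher(secrets.randbits(64))
        limit = int(F * HALF)
        phase = (g % 2) * HALF
        for counter in range(limit):
            yield phase + perm[counter]
        g += 1
```

Any window of f*2^(n-1) consecutive IDs spans at most two
adjacent generations; within a generation the values are a
permutation prefix, and across adjacent generations the phase
bit differs, so no repeats occur within the window.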
=

Discussion of conceivable refinements:

A proposal that keeps coming up is to take the values
generated above and run them through an additional
encryption stage, with a key that is randomly chosen
at start-up time (then held fixed for all generations).
The domain and range of this post-processing stage
are n-bit numbers.
This makes the output seem more elegant, in that we
have unpredictability spread over the whole n-bit word,
rather than having n-1 hard-to-predict bits plus one
almost-constant bit.
Define the phase to be P := (G mod 2).

The opponent will have to collect roughly 2^n data
points before being able to figure out which values
belong to which phase, so initially his guess rate
will be closer to one in 2^n, which is a twofold
improvement ... temporarily.
This temporary improvement is not permanent, if we
allow the opponent to have on the order of 2^n
memory.  He will in the long run learn which values
belong to which phase.  I see no way to prevent this.
So as far as I can tell, the proposed post-processing
is more in the nature of a temporary annoyance to the
opposition, and should not be considered industrial-strength
cryptography.
Perhaps more to the point, if we are going to allow
the opposition to have 2^n memory, it would be only
fair to allow the good guys to have 2^n memory.  In
that case, all the schemes discussed above pale in
comparison to something I suggested previously, namely
generating an ID absolutely randomly, but using a
look-up table to check if it has been used recently,
in which case we veto it and generate another.  If
you can afford the look-up table, these randomly
generated IDs have the maximum possible unpredictability.
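A minimal sketch of that veto-and-retry generator, assuming we
can afford a set plus a FIFO remembering the last `window` IDs:

```python
import secrets
from collections import deque

class RandomIDGenerator:
    """Draw IDs uniformly at random, vetoing any candidate that
    appeared among the last `window` IDs issued."""

    def __init__(self, nbits: int, window: int):
        self.nbits = nbits
        self.window = window
        self.recent = set()     # fast membership test
        self.order = deque()    # eviction order

    def next_id(self) -> int:
        while True:
            candidate = secrets.randbits(self.nbits)
            if candidate not in self.recent:
                break           # veto failed candidates, retry
        self.recent.add(candidate)
        self.order.append(candidate)
        if len(self.order) > self.window:
            self.recent.discard(self.order.popleft())
        return candidate
```

As long as `window` is small compared to 2^nbits, the retry loop
almost always succeeds on the first draw, and each emitted ID is
uniform over everything not recently used -- the maximum possible
unpredictability subject to the no-recent-repeat constraint.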


anonymity +- credentials

2003-10-03 Thread John S. Denker
On 10/03/2003 01:26 PM, R. A. Hettinga wrote:

 It seems to me that perfect pseudonymity *is* anonymity.
They're not quite the same thing; see below.

 Frankly, without the ability to monitor reputation, you don't have
 ways of controlling things like transactions, for instance. It's just
 that people are still mystified by the concept of biometric
 is-a-person identity, which strong cryptography can completely
 divorce from reputation.
We agree that identification is *not* the issue, and
that lots of people are confused about this.
I'm not sure reputation is exactly the right concept
either;  the notion of credentials is sometimes better,
and the operating-systems folks speak of capabilities.
There are three main possibilities:
 -- named (unique static handle)
 -- pseudonymous (dynamic handles)
 -- anonymous (no handle at all)
Sometimes pseudonyms are more convenient than having no
handle at all.  It saves you the trouble of having to
re-validate your credentials at every micro-step of the
process (whatever the process may be).
Oftentimes pseudonyms are vastly preferable to a static
name, because you can cobble up a new one whenever you
like, subject to the cost of (re)establishing your
credentials from scratch.
The idea of linking (bidirectionally) all credentials
with the static is-a-person identity is a truly terrible
idea.  It dramatically *reduces* security.  Suppose Jane
Doe happens to have the following credentials
 -- Old enough to buy cigarettes.
 -- Has credit-card limit >= $300.00
 -- Has credit-card limit >= $3000.00
 -- Has car-driving privileges.
 -- Has commercial pilot privileges.
 -- Holds US citizenship.
 -- Holds 'secret' clearance.
When Jane walks into a seedy bar, someone can reasonably
ask to verify her old-enough credential.  She might
not want this query to reveal her exact age, and she
might *really* not want it to reveal her home address (as
many forms of ID do), and she might *really* *really*
not want it to reveal all her other credentials and
capabilities.
*) There is an exploding epidemic of ID theft.
That is a sure sign that people keep confusing
capability with identity, and identity with capabilities.
*) There are those who want us to have a national ID-checking
infrastructure as soon as possible.  They think this will
increase security.  I think it is a giant step in the wrong
direction.
*) Reputation (based on a string of past interactions) is
one way, but not the only way, to create a credential that
has some level of trust.
=

We need a practical system for anonymous/pseudonymous
credentials.  Can somebody tell us, what's the state of
the art?  What's currently deployed?  What's on the
drawing boards?


Re: Monoculture

2003-10-01 Thread John S. Denker
On 10/01/2003 11:22 AM, Don Davis wrote:

 there's another rationale my clients often give for
 wanting a new security system, instead of the off-
 the-shelf standbys:  IPSec, SSL, Kerberos, and the
 XML security specs are seen as too heavyweight for
 some applications.  the developer doesn't want to
 shoehorn these systems' bulk and extra flexibility
 into their applications, because most applications
 don't need most of the flexibility offered by these
 systems.
Is that a rationale, or an irrationale?

According to 'ps', an all-up ssh system is less
than 3 megabytes (sshd, ssh-agent, and the ssh
client).  At current memory prices, your clients
would save less than $1.50 per system even if
their custom software could reduce this bulk
to zero.
With the cost of writing custom software being
what it is, they would need to sell quite a
large number of systems before de-bulking began
to pay off.  And that's before accounting for
the cost of security risks.
 some shops experiment with the idea of using only
 part of OpenSSL, but stripping unused stuff out of
 each new release of OpenSSL is a maintenance hassle.
1) Well, they could just ignore the new release
and stick with the old version.  Or, if they think
the new features are desirable, then they ought
to compare the cost of re-stripping against the
cost of implementing the new desirable features
in the custom code.
I'm just trying to inject some balance into the
balance sheet.
2) If you do a good job stripping the code, you
could ask the maintainers to put your #ifdefs into
the mainline version.  Then you have no maintenance
hassle at all.
 they want their crypto clothing
 to fit well, but what's available off-the-rack is
 a choice between frumpy
Aha.  They want to make a fashion statement.

That at least is semi-understandable.  People do
expensive and risky things all the time in the name
of fashion.


lopsided Feistel (was: cryptographic ergodic sequence generators)

2003-09-06 Thread John S. Denker
On 09/06/2003 02:33 PM, Tim Dierks wrote:
 I'm sure that it would be possible to design a Feistel-based block
 cipher with variable block size, supporting some range of even values
 of n.
There's no need to exclude odd n.

I know the typical superficial textbook describes
the Feistel trick in terms of splitting each block
exactly in half, but if you understand the trick
you see that it works just fine for other splits.
It doesn't need to be anywhere near half.  It
doesn't even need to be a two-way split.
You could process a 21-bit word as:
 -- three groups of seven, or
 -- seven groups of three, or
 -- one group of twelve and one group of nine, or
 -- whatever.
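A sketch of such a lopsided two-way split, with a hash-based toy
round function standing in for a vetted one.  The word is split
into `a` high bits and `b` low bits, and the split sizes simply
swap from round to round:

```python
import hashlib

def round_fn(half: int, key: bytes, width: int) -> int:
    """Toy round function: keyed hash truncated to `width` bits."""
    digest = hashlib.sha256(key + half.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big") & ((1 << width) - 1)

def lopsided_feistel(word: int, keys: list[bytes], a: int, b: int) -> int:
    """Unbalanced Feistel network on an (a+b)-bit word.  Each
    round maps (L, R) to (R, L xor F(R)); since L has a bits and
    R has b bits, the split sizes a and b swap every round.  The
    round is invertible, so the whole thing is a permutation."""
    for k in keys:
        left = word >> b                # a high bits
        right = word & ((1 << b) - 1)   # b low bits
        mixed = left ^ round_fn(right, k, a)
        word = (right << a) | mixed     # new split is (b, a)
        a, b = b, a
    return word
```

For a 21-bit word this handles the twelve-and-nine split directly
(`a=12, b=9`); nothing in the construction cares that a != b.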


Re: traffic analysis

2003-08-29 Thread John S. Denker
On 08/28/2003 04:26 PM, David Wagner wrote:

 Are you sure you understood the attack?
Are you sure you read my original note?

 The attack assumes that communications links are insecure.

I explicitly hypothesized that the links were
encrypted. The cryptotext may be observed and
its timing may be tampered with, but I assumed
the attackers could not cut through the
encryption to get at the plaintext.
 The *transmission* from Alice may adhere to a fixed schedule, but
 that doesn't prevent the attacker from introducing delays into the
 packets after transmission.
Fine. So far the timing doesn't tell us anything
about the behavior of Alice, just the behavior
of the attacker.
 For instance, suppose I want to find out who is viewing my web site.
 I have a hunch that Alice is visiting my web site right this instant,
  and I want to test that hunch.  I delay Alice's outgoing packets,
 and I check whether the incoming traffic to my web contains matching
 delays.
I explicitly said that if some endpoints are not
secure, Alice suffers some loss of privacy when
communicating with such an endpoint.  Here DAW is
playing the role of attacker, and is mounting an
attack that combined traffic analysis with much
more powerful techniques; he is assuming he owns
the endpoint or otherwise can see through the
crypto into the plaintext.
Let us not confuse traffic analysis issues with
anonymity issues.
I explicitly said that traffic analysis was not the
only threat to be considered.
To say it another way:  The US ambassador in Moscow
is not trying to remain anonymous from the US
ambassador in Riyadh;  they just don't want the
opposition to know if/when/how-often they talk.
=

I described a certain model based on certain hypotheses.

Many people have responded with attacks on different
models, based on different hypotheses.  Some have
frankly admitted contradicting me without having
bothered to read what I wrote.  I'm not going to
respond to any more of these ... except to say that
they do not, as far as I can see, detract in any
way from the points I was making.


Re: traffic analysis

2003-08-28 Thread John S. Denker
A couple of people wrote in to say that my remarks
about defending against traffic analysis are not
true.
As 'proof' they cite
   http://www.cypherspace.org/adam/pubs/traffic.pdf
which proves nothing of the sort.
The conclusion of that paper correctly summarizes
the body of the paper;  it says they examined and
compared a few designs, and that they pose the
question as to whether other interesting protocols
exist, with better trade-offs, that would be practical
to implement and deploy.
Posing the question is not the same as proving that
the answer is negative.
I am also reminded of the proverb:
 Persons saying it cannot be done should
 not interfere with persons doing it.
The solution I outlined is modelled after
procedures that governments have used for decades
to defend against traffic analysis threats to
their embassies and overseas military bases.
More specifically, anybody who thinks the scheme
I described is vulnerable to a timing attack isn't
paying attention.  I addressed this point several
times in my original note.  All transmissions
adhere to a schedule -- independent of the amount,
timing, meaning, and other characteristics of the
payload.
And this does not require wide-area synchronization.
If incoming packets are delayed or lost, outgoing
packets may have to include nulls (i.e. cover traffic).
This needn't make inefficient use of communication
resources.  The case of point-to-point links to a
single hub is particularly easy to analyze:  cover
traffic is sent when and only when the link would
otherwise be idle.
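A skeletal model of such a link (frame size and null encoding are
arbitrary choices here): exactly one fixed-size frame goes out per
scheduled tick, carrying payload if any is queued and cover
traffic otherwise, so the observable transmission pattern is
independent of the offered load:

```python
from collections import deque

FRAME_SIZE = 64  # arbitrary fixed frame size for the sketch

class FixedScheduleLink:
    """One fixed-size frame per tick, payload or null alike."""

    def __init__(self):
        self.queue = deque()

    def send(self, payload: bytes):
        """Queue a payload for the next scheduled slot."""
        self.queue.append(payload)

    def tick(self) -> bytes:
        """Emit the frame for this slot: a queued payload frame
        if one exists, else a null (cover) frame.  In a real
        system the frame would be encrypted at this point, so
        the two cases are indistinguishable on the wire."""
        frame = self.queue.popleft() if self.queue else b"\x00" * FRAME_SIZE
        return frame.ljust(FRAME_SIZE, b"\x00")[:FRAME_SIZE]
```

An eavesdropper sees one identical-looking frame per tick whether
the link is busy or idle, which is exactly the property claimed.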
Similarly it needn't make inefficient use of
encryption/decryption resources.  This list is
devoted to cryptography, so I assume people can
afford 1 E and 1 D per message; the scheme I
outlined requires 2 E and 2 D per message, which
seems like a cheap price to pay if you need
protection against traffic analysis.  On top of
that, the processor doing the crypto will run
hotter because typical traffic will be identical
to peak traffic, but this also seems pretty cheap.


Re: authentication and ESP

2003-06-22 Thread John S. Denker
On 06/19/2003 01:49 PM, martin f krafft wrote:
 As far as I can tell, IPsec's ESP has the functionality of
 authentication and integrity built in:
It depends on what you mean by built in.
 1) The RFC provides for ESP+authentication but
does not require ESP to use authentication.
 2) Although the RFC allows ESP without
authentication, typical implementations are
less flexible.  In FreeS/WAN for instance, if
you ask for ESP you will get ESP+AH.
ESP without authentication may be vulnerable to
replay attacks and/or active attacks that tamper
with the bits in transit.  The degree of vulnerability
depends on details (type of chaining, higher-level
properties of payload, ...).
Remember that encryption and authentication perform
complementary roles:  Suppose Alice is sending to
Bob.  They are being attacked by Eve.  Encryption
limits the amount of information _Eve_ receives.
Authentication prevents tampering, so _Bob_ can
trust what he receives.
It is possible to construct situations where you
could omit the AH from ESP+AH without losing
anything, but you would need to analyze the
situation pretty carefully.  If you have a good
reason for using something other than ESP+AH,
please clarify what you want to do and why.
Otherwise just go with the normal ESP+AH.