Re: Choosing an implementation language

2003-10-03 Thread Eric Rescorla
Tyler Close <[EMAIL PROTECTED]> writes:

> On Thursday 02 October 2003 09:21, Jill Ramonsky wrote:
> > I was thinking of doing a C++ implementation with classes and
> > templates and stuff.  (By contrast OpenSSL is a C
> > implementation). Anyone got any thoughts on that?
> 
> Given the nature of recent, and past, bugs discovered in the
> OpenSSL implementation, it makes more sense to implement in a
> memory-safe language, such as python, java or squeak. Using a VM
> hosted language will limit the pool of possible users, but might
> create a more loyal user base.

There's already a Java SSL with a simple API:
http://www.rtfm.com/puretls/

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


how to defeat MITM using plain DH, Re: anonymous DH & MITM

2003-10-03 Thread Ed Gerck
Anton Stiglic wrote:

> That's false.  Alice and Bob can follow the basic DH protocol, exactly, but
> Mallory is in the middle, and what you end up with is a shared key between
> Alice and Bob and Mallory.

No. What you get is a shared key between Bob and Mallory and *another* shared
key between Alice and Mallory. This is important for many reasons.

First, it provides a way to detect that a MITM attack has occurred. For example,
if the MITM drops out at any time after key agreement, the DH-based
encryption/decryption will not work, since Alice and Bob did NOT share a
secret key while under the MITM attack. As another example, if Alice and Bob can
communicate using another channel, even an ongoing MITM attack can likewise be
discovered.
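The two-key structure can be seen directly in a toy DH run. This is an illustrative sketch, not a secure implementation: the prime, generator, and all variable names are assumptions made for the example.

```python
# Toy Diffie-Hellman with Mallory in the middle.  Mallory substitutes her
# own public value in both directions, so she ends up with one key shared
# with Alice and a *different* key shared with Bob -- Alice and Bob never
# share a key with each other.  Parameters are illustrative only.
import secrets

p = 2**127 - 1          # a Mersenne prime; fine for illustration, not for use
g = 3

a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent
m = secrets.randbelow(p - 2) + 1    # Mallory's secret exponent

A = pow(g, a, p)        # Alice sends g^a ... Mallory intercepts it
B = pow(g, b, p)        # Bob sends g^b ... Mallory intercepts it
M = pow(g, m, p)        # Mallory forwards g^m to both sides instead

key_alice_side = pow(M, a, p)       # what Alice computes: g^(am)
key_bob_side   = pow(M, b, p)       # what Bob computes:   g^(bm)

assert key_alice_side == pow(A, m, p)   # Mallory shares Alice's key
assert key_bob_side   == pow(B, m, p)   # Mallory shares Bob's key
assert key_alice_side != key_bob_side   # two distinct keys (w.h.p.)
```

If Mallory later drops out, Alice decrypts under g^(am) while Bob encrypted under g^(bm), and communication fails -- exactly the detection opportunity described above.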

Second, and most importantly, this provides a provable way to defeat MITM using
plain DH. For a set of communication channels, not necessarily 100% independent
from each other, if the probability of successfully mounting a MITM attack is
a(i) < 1 for each channel i, then by using N channels of communication we can
make the probability of a successful MITM attack as small as we desire and, thus,
defeat a MITM attack even using plain DH [1]. Moreover, this method can present
an increasing challenge to Mallory's computing resources and timing, such that
the probability a(i) itself should further decrease with more channels. In other
words, Mallory can only juggle so many balls. I pointed this out some years ago at
the MCG list. It's possible to have at least one open and anonymous protocol
immune to MITM -- which I called multi-channel DH.
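Numerically, the argument can be sketched as follows; the independence of channels is an assumption made for the sketch (the text explicitly allows weaker conditions), and the function name is illustrative:

```python
# If Mallory must subvert every one of N channels to stay undetected, and
# her success probability on channel i is a(i), then (assuming independent
# channels) her overall success probability is the product of the a(i),
# which shrinks toward zero as channels are added.
from math import prod

def mitm_success(probs):
    """Probability the MITM wins on *every* channel (independence assumed)."""
    return prod(probs)

print(mitm_success([0.9]))        # 1 channel:   0.9
print(mitm_success([0.9] * 5))    # 5 channels:  ~0.59
print(mitm_success([0.9] * 50))   # 50 channels: ~0.005
```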

Cheers,
Ed Gerck


[1] In a stronger form, we can allow the probability of successfully mounting a
MITM attack to be a(i) = 1 for all but one channel in the set and still
make the probability of a successful MITM attack as small as we desire, so that
we can still defeat a MITM attack using plain DH.




-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: anonymous DH & MITM

2003-10-03 Thread Jerrold Leichter
| Date: Fri, 03 Oct 2003 17:27:36 -0400
| From: Tim Dierks <[EMAIL PROTECTED]>
| To: Jerrold Leichter <[EMAIL PROTECTED]>
| Cc: Cryptography list <[EMAIL PROTECTED]>
| Subject: Re: anonymous DH & MITM
|
| At 03:28 PM 10/3/2003, Jerrold Leichter wrote:
| >From: Tim Dierks <[EMAIL PROTECTED]>
| >| >No; it's false.  If Alice and Bob can create a secure channel between
| >| >themselves, it's reasonable to say that they are protected from MITM
| >| >attacks if they can be sure that no third party can read their messages.
| >| >That is: If Alice and Bob are anonymous, they can't say *who* can read
| >| >the messages they are sending, but they might be able to say that,
| >| >assuming that their peer is following the protocol exactly (and in
| >| >particular is not releasing the shared secret) *exactly one other party*
| >| >can read the message.
| >|
| >| They've got exactly that same assurance in a MITM situation: unfortunately,
| >| Mallet is the one other party who can read the message.
| >But Mallet is violating a requirement:  He is himself passing along the
| >information Alice and Bob send him to Bob and Alice.  No notion of secrecy
| >can make any sense if one of the parties who legitimately *has* the secret
| >chooses to pass it along to someone else!
|
| In an authenticated protocol, you can have a risk model which includes the
| idea that an authorized person will not choose to share a secret with an
| unauthorized person. However, in an anonymous protocol, you cannot have
| such a risk model, because there's no such distinction.
Why not?

| Are you saying that you're defining a protocol with rules of behavior which
| cannot be enforced (namely that Mallet is breaking the protocol by
| forwarding the data)?
They can't be *enforced*, but violations can (perhaps) be detected.

|   Previously, you said that you were defining the thing
| to be controlled as the shared secret, but now you're extending it to any
| and all content transmitted over the link.
The shared secret is the session key.  Assuming the encryption is sufficient,
the security of this shared secret implies the security of all data exchanged
on the link.

|Describing the format of
| communications between parties is in a "protocol"; what they do with those
| communications is in a "risk model" or "trust model".

| >As long as Mallet continues to interpose himself in *all* subsequent
| >sessions between Alice and Bob, he can't be detected.  But suppose each of
| >them keeps a hash value that reflects all the session keys they think they
| >ever used in talking to each other.  Every time they start a session, they
| >exchange hashes.  Whenever Mallet is present, he modifies the messages to
| >show the hash values for the individual sessions that he held with each
| >party separately.  Should they ever happen to form a session *without*
| >Mallet, however, the hashes will not agree, and Mallet will have been
| >detected.  So the difference isn't just notional - it's something the
| >participants can eventually find out about.
|
| No disagreement with this: if you can ever communicate over an
| unintermediated channel, you can detect previous or future intermediations.
"Ever being able to communicate over an unintermediated channel" can mean
two things:

1.  We can communicate over such a channel *and know we are doing so*.
In that case, we can safely exchange keys, check our previous
communications, etc.

2.  We sometimes communicate over such a channel, but we can't
recognize when it happens.

Case 2 is much weaker than case 1, but is sufficient to detect that Mallet
has been playing games.  In fact, even case 2 is stronger than needed:
Suppose that there are multiple Mallet_i playing MITM.  Any given connection
may go through any subset of the Mallets.  Any time a connection happens
not to go through the particular Mallet that "usually" talks directly
to Alice or Bob, the game is up.

| There are easier ways to do it than maintaining a hash of all session keys:
| you can just insist that the other party have the same public key they had
| the first time you spoke, and investigate if that becomes untrue (for
| example, ssh's authentication model).
Sure.

| >In fact, if we assume there is a well-known "bulletin board" somewhere, to
| >which anyone can post but on which no one can modify or remove messages, we
| >can use it to force a conversation without Mallet.  Alice and Bob can:
| >
| >  [...elided...]
| >
| >If not, Mallet was at work.  (For this to work, the bulletin board must have a
| >verifiable identity - but it's not necessary for anyone to identify himself
| >to the bulletin board.)
|
| This can be defeated by Mallet if he makes changes in his forwarding of
| communications (that either have no semantic effect or have whatever
| semantic effect he chooses to impose), but which causes the hashes to

Re: anonymity +- credentials

2003-10-03 Thread bear


On Fri, 3 Oct 2003, John S. Denker wrote:

>We need a practical system for anonymous/pseudonymous
>credentials.  Can somebody tell us, what's the state of
>the art?  What's currently deployed?  What's on the
>drawing boards?

The state of the art, AFAIK, is Chaum's credential system.

One important thing to remember about pseudonymous credentials is
that they can't say anything bad about the holder that outweighs what
they say that's good.  If it isn't better to have
them than not have them, the holder will just abandon them.

This applies most strongly to pseudonymous credentials, because it is
typically much easier to create a new credential in a pseudonymous
system, and the cost of abandoning a credential is lower.  But
this doesn't just apply to pseudonymous credentials.  People treat
even the "absolute identity" credentials exactly the same way, when
"is-a-citizen" and "is-a-person" and other fundamentals are no longer
more important than "is subject to involuntary military service" or
"is wanted by the FBI" or "is a convicted abortion clinic bomber" or
"testified against the Mafia" or "was one of the protesters at
Tiananmen Square."

Basically, when your credential gives people (enemies of the state or
servants of the state, makes no difference) a reason to want to kill
you, or otherwise do you harm, you have to analyze keeping that
credential in terms of risks and benefits. Pseudonymity brings this
aspect of identity credentials to the fore, but it doesn't begin and
end with pseudonymity.

Bear


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Choosing an implementation language

2003-10-03 Thread Thor Lancelot Simon
On Fri, Oct 03, 2003 at 04:31:26PM -0400, Tyler Close wrote:
> On Thursday 02 October 2003 09:21, Jill Ramonsky wrote:
> > I was thinking of doing a C++ implementation with classes and
> > templates and stuff.  (By contrast OpenSSL is a C
> > implementation). Anyone got any thoughts on that?
> 
> Given the nature of recent, and past, bugs discovered in the
> OpenSSL implementation, it makes more sense to implement in a
> memory-safe language, such as python, java or squeak. Using a VM

I strongly disagree.  While an implementation in a typesafe language
would be nice, such implementations are already available -- one's
packaged with Java, for instance.

From my point of view, the starting point of this discussion could be
restated as "The world needs a simple, portable SSL/TLS implementation 
that's not OpenSSL, because the size and complexity of OpenSSL has been 
responsible for slowing the pace of SSL/TLS deployment and for a large 
number of security holes."

For practical purposes, if such an implementation is to be useful to
the majority of the people who would use it to build products in the
real world, it needs to be in C or _possibly_ C++; those are the only
languages for which compilers *and* runtime environments exist
essentially everywhere.  Coming from a background building routers and
things like routers, I can also tell you that if you're going to
require carrying a C++ runtime around, a lot of people building embedded
devices will simply not give you the time of day.

An implementation in a safe language would be _nice_, but religion
aside (please!) it's a cold hard fact that very few products that
people actually use are written in such languages -- if you leave Java
(which already has an SSL implementation) out, "very few" becomes
"essentially zero".  And if we're interested in improving the security
of not only our pet projects, but of the interconnected world in
general, it seems to me that producing a good, simple, comprehensible,
small implementation *and getting it into as many products as possible*
would be one of the better possible goals to work towards.

Thor

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Choosing an implementation language

2003-10-03 Thread Scott Guthery
Ah, the joys of diversity.  Implementations
of all your favorite protocols in all your
favorite programming languages by all your
favorite programmers in all your favorite
countries on all your favorite operating
systems for all your favorite chips.  

Continuous debugging certainly is the path 
to secure computing.

Cheers, Scott

-Original Message-
From: Tyler Close [mailto:[EMAIL PROTECTED]
Sent: Friday, October 03, 2003 4:31 PM
To: [EMAIL PROTECTED]
Subject: Choosing an implementation language


On Thursday 02 October 2003 09:21, Jill Ramonsky wrote:
> I was thinking of doing a C++ implentation with classes and
> templates and stuff.  (By contrast OpenSSL is a C
> implementation). Anyone got any thoughts on that?

Given the nature of recent, and past, bugs discovered in the
OpenSSL implementation, it makes more sense to implement in a
memory-safe language, such as python, java or squeak. Using a VM
hosted language will limit the pool of possible users, but might
create a more loyal user base.

I know the squeak community does not have
SSL and would very much like to have it. An implementation of SSL
in squeak would also be of interest to the Squeak-E project,
related to the E project.

Tyler

-- 
The union of REST and capability-based security:
http://www.waterken.com/dev/Web/

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: anonymous DH & MITM

2003-10-03 Thread Tim Dierks
At 03:28 PM 10/3/2003, Jerrold Leichter wrote:
From: Tim Dierks <[EMAIL PROTECTED]>
| >No; it's false.  If Alice and Bob can create a secure channel between
| >themselves, it's reasonable to say that they are protected from MITM attacks if
| >they can be sure that no third party can read their messages.  That is:
| >If Alice and Bob are anonymous, they can't say *who* can read the messages
| >they are sending, but they might be able to say that, assuming that their
| >peer is following the protocol exactly (and in particular is not releasing
| >the shared secret) *exactly one other party* can read the message.
|
| They've got exactly that same assurance in a MITM situation: unfortunately,
| Mallet is the one other party who can read the message.
But Mallet is violating a requirement:  He is himself passing along the
information Alice and Bob send him to Bob and Alice.  No notion of secrecy
can make any sense if one of the parties who legitimately *has* the secret
chooses to pass it along to someone else!
In an authenticated protocol, you can have a risk model which includes the 
idea that an authorized person will not choose to share a secret with an 
unauthorized person. However, in an anonymous protocol, you cannot have 
such a risk model, because there's no such distinction.

Are you saying that you're defining a protocol with rules of behavior which 
cannot be enforced (namely that Mallet is breaking the protocol by 
forwarding the data)? Previously, you said that you were defining the thing 
to be controlled as the shared secret, but now you're extending it to any 
and all content transmitted over the link. Describing the format of 
communications between parties is in a "protocol"; what they do with those 
communications is in a "risk model" or "trust model".

As long as Mallet continues to interpose himself in *all* subsequent sessions
between Alice and Bob, he can't be detected.  But suppose each of them keeps
a hash value that reflects all the session keys they think they ever used in
talking to each other.  Every time they start a session, they exchange hashes.
Whenever Mallet is present, he modifies the messages to show the hash values
for the individual sessions that he held with each party separately.  Should
they ever happen to form a session *without* Mallet, however, the hashes
will not agree, and Mallet will have been detected.  So the difference isn't
just notional - it's something the participants can eventually find out about.
No disagreement with this: if you can ever communicate over an 
unintermediated channel, you can detect previous or future intermediations. 
There are easier ways to do it than maintaining a hash of all session keys: 
you can just insist that the other party have the same public key they had 
the first time you spoke, and investigate if that becomes untrue (for 
example, ssh's authentication model).

In fact, if we assume there is a well-known "bulletin board" somewhere, to
which anyone can post but on which no one can modify or remove messages, we
can use it to force a conversation without Mallet.  Alice and Bob can:
 [...elided...]

If not, Mallet was at work.  (For this to work, the bulletin board must have a
verifiable identity - but it's not necessary for anyone to identify himself to
the bulletin board.)
This can be defeated by Mallet if he makes changes in his forwarding of 
communications (that either have no semantic effect or have whatever 
semantic effect he chooses to impose), but which causes the hashes to vary. 
He then posts statements re: his communications with each of Alice & Bob, 
so they'll see a match.

Or, alternately, he interposes himself between Alice & Bob and the bulletin 
board, which is possible within most understandings of the MITM threat. 
(Again, if Mallet can't do that, it implies that Alice & Bob have an 
unintermediated channel available: the bulletin board).

 - Tim

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Monoculture / Guild

2003-10-03 Thread lrk
On Thu, Oct 02, 2003 at 03:34:35PM -0700, John Gilmore wrote:
> > ... it does look very much from the outside that there is an
> > informal "Cryptographers Guild" in place...
> 
> The Guild, such as it is, is a meritocracy; many previously unknown
> people have joined it since I started watching it in about 1990.
> 
> The way to tell who's in the Guild is that they can break your protocols
> or algorithms, but you can't break theirs.

The problem with guilds is that they become set in their ways. Ask here
how the fact that "not all large numbers are hard to factor" affects RSA
and you will be ignored or dismissed. Ask whether cubic meters of special
hardware could brute-force keys better than the same cubic meters of super
computers and you get the same.

As a perennial outsider, I notice this in several fields. I'm not in the
guild for measuring the Specific Gravity of Gases. Which is precisely why
my name is on the patent for the smallest machine (4,677,841).


-- 
-
| Lyn KennedyE-mail   | [EMAIL PROTECTED]   |
| K5QWB  ICBM | 32.5 North 96.9 West|
---Livin' on an information dirt road a few miles off the superhighway---

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: anonymous DH & MITM

2003-10-03 Thread Jerrold Leichter
| From: Anton Stiglic <[EMAIL PROTECTED]>
| From: "Jerrold Leichter" <[EMAIL PROTECTED]>
| > No; it's false.  If Alice and Bob can create a secure channel between
| > themselves, it's reasonable to say that they are protected from MITM
| > attacks if they can be sure that no third party can read their messages.
|
| How do they create the secure channel in the first place?  We are talking
| about MITM that takes place during the key agreement protocol.
I didn't say I had a protocol that would accomplish this - I said that the
notion of such a protocol was not inherently self-contradictory.

| > That is: If Alice and Bob are anonymous, they can't say *who* can read the
| > messages they are sending, but they might be able to say that, assuming
| > that their peer is following the protocol exactly (and in particular is
| > not releasing the shared secret) *exactly one other party* can read the
| > message.
|
| That's false.  Alice and Bob can follow the basic DH protocol, exactly, but
| Mallory is in the middle, and what you end up with is a shared key between
| Alice and Bob and Mallory.
There's nothing to be true or false:  It's a definition!  (And yes, DH does
not provide a system that meets the definition.)

| The property you are talking about, concerning the *exactly one other party*
| can read the message is related to the *key authentication* property,
| discussed in [1] (among other places), which enables you to construct
| authenticated key agreements.
The reference was missing; I'd be interested in seeing it.

| >
| > Note that if you have this, you can readily bootstrap pseudonymity:  Alice
| > and Bob simply use their secure channel to agree on a shared secret, or on
| > pseudonyms they will henceforth use between themselves.  If there were a
| > MITM, he could of course impersonate each to the other ever afterward.
|
| But how do they share the initial secret?
I have no idea!

|And with true anonymity you don't
| want linkability.  Pseudonymity is a different thing, with pseudonymity you
| have linkability.
If Alice and Bob wish to establish pseudonyms for future use, they can.  No
one says they have to.  On the other hand, "linkability" is a funny property.
If Alice and Bob each keep their secrets, and they each believe the other
party keeps their secrets, then if there is *anything* unique in their
conversations with each other that they keep around - like the sessions keys,
or the entire text of the conversation - they can use *that* to link future
conversations to past ones.  (No one without access to the secrets can do
that, of course.)  If you define anonymity as complete lack of linkability,
even to the participants, you're going to end up requiring all participants to
forget, not just their session keys, but everything they learned in their
conversations.  Perhaps there are situations where that's useful, but they
strike me as pretty rare.
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Strong-Enough Pseudonymity as Functional Anonymity

2003-10-03 Thread R. A. Hettinga
At 2:32 PM -0400 10/3/03, John S. Denker wrote:

>  -- anonymous (no handle at all)

If they don't know who I am, I'm anonymous, whether I use a pseudonym or not. 

However, the more "perfect" the pseudonym is, the more "secure" it is, the more 
anonymous I am.

All of the "anonymous" payment protocols I know of involve using a public/private key
"signature", persistent or not. That's a pseudonym, by most definitions of the term.
Blinding gives you anonymity, of a sort, unless, in some protocols, you double-"spend"
and your key is revealed. Even then it's only your key that is blackballed, not you.

Sure, you can "front" keys, "mix" keys, whatever, but you're still relying on a 
pseudonym, and people even call *those* methods "anonymous".

As to real-life definitions of "anonymous" or not, it seems to me that technical 
professions (guilds :-)) use more precise language than laymen do all the time.

Again, the more perfect a pseudonym is, the more anonymous it is.

We get, at the very least, functional anonymity for most things we're interested in.  

Cheers,
RAH

-- 
-
R. A. Hettinga 
The Internet Bearer Underwriting Corporation 
44 Farquhar Street, Boston, MA 02131 USA
"... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience." -- Edward Gibbon, 'Decline and Fall of the Roman Empire'

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


threat modelling strategies

2003-10-03 Thread Ian Grigg
"Arnold G. Reinhold" wrote:
> 
> At 11:50 PM -0400 10/1/03, Ian Grigg wrote:
> >...
> >A threat must occur sufficiently in real use, and incur
> >sufficient costs in excess of protecting against it, in
> >order to be included in the threat model on its merits.
> >
> 
> I think that is an excellent summation of the history-based approach
> to threat modeling. There is another approach, however,
> capability-based threat modeling. What attacks will adversaries whom
> I reasonably expect to encounter mount once the system I am
> developing is deployed? Military planners call this the "responsive
> threat."  There are many famous failures of history-based threat
> modeling: tanks vs. cavalry, bombers vs. battleships, vacuum tubes
> vs. electromechanical cipher machines, box cutters vs skyscrapers,
> etc.


A very nice distinction.  The problem with this approach
is that it depends heavily on the notion of "reasonably
expect," which is obvious only after the fact.

In each of those cases, it was possible to trace the
development of the attack through history, again,
after the fact [1], [2], [3].

In each case, the history was mostly readable.  Just
like security today.  In each case, it was very difficult
to predict the future.  And, for those lucky few who
did, they were ignored.  And, for those lucky few
who did predict correctly, there were many score more
who predicted the wrong thing.

Military affairs are fairly typecast.  You are stuck
with the weapons of the past, chasing an infinite
number of possibilities in the future.  In all that,
you have to fight the current war.  Prepare for some
unlikely future at your peril.  If you pick the wrong
one, you'll be accused of being a dreamer, or of
fighting the last war.  Pick a future that actually
happens, and you'll be called a genius.

Crypto systems get pretty much deployed like that
as well.  Reasonable threat models are built up,
a point in the future is aimed for, and the system
gets deployed.  Then, you hope that attacks like
that of Adi Shamir's student don't happen until
the very end of life.  You watch, and you hope.


> In the world of the Internet, the time available to put countermeasures
> in place once new threats are publicized appears to be
> shrinking rapidly. And we are only seeing one class of adversaries:
> the informal network of hackers. For the most part, they have not
> tried to maximize the damage they cause. There is another class,
> hostile governments and terrorists, who have so far not shown their
> hands but are presumably following developments closely.  I don't
> think we can restrict ourselves to threats already proven in the wild.


The alternate is to prepare for every possible
threat.  That's hard.  It may be that you can
justify this level of expenditure, but for most
ordinary missions, this is simply too expensive.

Mind you, I'm not sure of your first claim there,
can you explain why the security field has not
moved quickly to counter the threat of web site
spoofing?  It's been around for yonks, and it's
resulting in losses.

> Then there is the matter of costs and who pays them. Industry is
> often willing to absorb small costs, or, better, fob them off onto
> consumers. Moderate costs can be insured against or written off as
> "extraordinary expenses." Stockholders are shielded from the full
> impact of catastrophic costs by the bankruptcy laws and can sometimes
> even get governments to subsidize such losses.
> 
> Perhaps guilds are the right model for cryptography. At their best,
> guilds preserve knowledge and uphold standards that would otherwise
> be ignored by market forces. Anyone out there willing to have open
> heart surgery performed by someone other than a member of the
> surgeon's guild?

Anyone out there willing to send a chat message
that is protected by ROT13?

As we have defined our mission, we can set our
requirements, and build our threat model.

I don't see that the presence of huge costs in
some exotic industries means the rest of us have
to pay for heart surgery every time we want to
send a chat message.  Or face death threats every
time we pay for flowers with a credit card.

But, I grant you that FUD will play a part in
the ongoing evolution of the Cryptologists'
Guild, just as it has in the past.  It's too
powerful a card to ignore, just because it is
unscientific.

YMMV :-)

iang

[1] Although Guderian's development of Blitzkrieg was
kept a secret, as was all German war planning, it wasn't
totally unemulated by the Allies, just not played up
as well as it might have been &.  C.f., Patton, who
famously "read Rommel's book," and de Gaulle, who
parlayed a presidency out of his success at holding
back the Guderian advances, albeit briefly.

In fact, the French tanks outnumbered, outgunned, and
outarmoured the Germans.  The Versailles Treaty
banned Germany from having *any* armoured vehicles.

That's preparation!

& _Panzer Leader_, General Heinz Guderian, 1952.


[2] box cutters v. skyscrapers -

Choosing an implementation language

2003-10-03 Thread Tyler Close
On Thursday 02 October 2003 09:21, Jill Ramonsky wrote:
> I was thinking of doing a C++ implementation with classes and
> templates and stuff.  (By contrast OpenSSL is a C
> implementation). Anyone got any thoughts on that?

Given the nature of recent, and past, bugs discovered in the
OpenSSL implementation, it makes more sense to implement in a
memory-safe language, such as python, java or squeak. Using a VM
hosted language will limit the pool of possible users, but might
create a more loyal user base.

I know the squeak community does not have
SSL and would very much like to have it. An implementation of SSL
in squeak would also be of interest to the Squeak-E project,
related to the E project.

Tyler

-- 
The union of REST and capability-based security:
http://www.waterken.com/dev/Web/

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: anonymous DH & MITM

2003-10-03 Thread Jerrold Leichter
| From: Tim Dierks <[EMAIL PROTECTED]>
| >No; it's false.  If Alice and Bob can create a secure channel between
| >themselves, it's reasonable to say that they are protected from MITM attacks if
| >they can be sure that no third party can read their messages.  That is:
| >If Alice and Bob are anonymous, they can't say *who* can read the messages
| >they are sending, but they might be able to say that, assuming that their
| >peer is following the protocol exactly (and in particular is not releasing
| >the shared secret) *exactly one other party* can read the message.
|
| They've got exactly that same assurance in a MITM situation: unfortunately,
| Mallet is the one other party who can read the message.
But Mallet is violating a requirement:  He is himself passing along the
information Alice and Bob send him to Bob and Alice.  No notion of secrecy
can make any sense if one of the parties who legitimately *has* the secret
chooses to pass it along to someone else!

| If you extend the
| concept to say "but I want Bob to be the one who can read the message",
| you've discarded anonymity. And saying that "I want only one party to have
| access to my message" is digital rights management.
Yes - but an interactive form of it.

| >Note that if you have this, you can readily bootstrap pseudonymity:  Alice
| >and Bob simply use their secure channel to agree on a shared secret, or on
| >pseudonyms they will henceforth use between themselves.  If there were a
| >MITM, he could of course impersonate each to the other ever afterward.
|
| Even if you could make this assertion, how would you avoid something that
| I'll call the "Cyrano attack": that the person you're communicating with is
| not, in fact, the source of the witticisms you associate with his
| pseudonym? And how is that attack distinct from MITM?
As long as Mallet continues to interpose himself in *all* subsequent sessions
between Alice and Bob, he can't be detected.  But suppose each of them keeps
a hash value that reflects all the session keys they think they ever used in
talking to each other.  Every time they start a session, they exchange hashes.
Whenever Mallet is present, he modifies the messages to show the hash values
for the individual sessions that he held with each party separately.  Should
they ever happen to form a session *without* Mallet, however, the hashes
will not agree, and Mallet will have been detected.  So the difference isn't
just notional - it's something the participants can eventually find out about.
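The running-hash idea above can be sketched in a few lines (an illustrative sketch, not a full protocol; key values are placeholders):

```python
import hashlib

def fold(running_hash: bytes, session_key: bytes) -> bytes:
    # Running hash over all session keys a party believes it has used.
    return hashlib.sha256(running_hash + session_key).digest()

alice = bob = b"\x00" * 32   # both start from the same initial value

# Session 1 goes through Mallet: each side agreed a different key with
# him, and he rewrote the exchanged hash values so neither side noticed.
alice = fold(alice, b"key(Alice,Mallet)")
bob = fold(bob, b"key(Mallet,Bob)")

# Session 2 happens without Mallet: one genuinely shared key...
shared = b"key(Alice,Bob)"
alice = fold(alice, shared)
bob = fold(bob, shared)

# ...but the running hashes they now exchange disagree, exposing him.
print(alice == bob)  # -> False
```

Mallet can keep the hashes consistent only while he relays every session; the first session he misses leaves a discrepancy he can no longer rewrite away.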

In fact, if we assume there is a well-known "bulletin board" somewhere, to
which anyone can post but on which no one can modify or remove messages, we
can use it to force a conversation without Mallet.  Alice and Bob can:

- Compute a hash code H over the entire conversation, concatenated
with the session key.

- Post to the bulletin board "I just had a conversation with hash code
H"

- Check that, within a short time, there are exactly two postings with
the same H.

If not, Mallet was at work.  (For this to work, the bulletin board must have
a verifiable identity - but it's not necessary for anyone to identify himself
to the bulletin board.)
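The posting-and-counting check can be sketched directly (a toy sketch; the transcript and key values are placeholders, and a real system would need the append-only board itself):

```python
import hashlib

def posting(conversation: bytes, session_key: bytes) -> str:
    # H over the entire conversation concatenated with the session key.
    return hashlib.sha256(conversation + session_key).hexdigest()

def mitm_detected(board: list[str], my_posting: str) -> bool:
    # Mallet was at work unless exactly two postings carry the same H.
    return board.count(my_posting) != 2

# No Mallet: Alice and Bob saw the same conversation and session key.
h = posting(b"conversation", b"session-key")
assert mitm_detected([h, h], h) is False

# With Mallet: each party held a *different* session with him, so the
# two postings never match and the check fires on both sides.
h_alice = posting(b"conversation-with-mallet-1", b"key1")
h_bob = posting(b"conversation-with-mallet-2", b"key2")
assert mitm_detected([h_alice, h_bob], h_alice) is True
```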
-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: anonymous DH & MITM

2003-10-03 Thread Anton Stiglic

- Original Message - 
From: "Jerrold Leichter" <[EMAIL PROTECTED]>

> [...]
> | > I think it's a tautology: there's no such thing as MITM if there's no
> | > such thing as identity. You're talking to the person you're talking to,
> | > and that's all you know.
> |
> | That seems to make sense
> No; it's false.  If Alice and Bob can create a secure channel between
> themselves, it's reasonable to say that they are protected from MITM
> attacks if they can be sure that no third party can read their messages.

How do they create the secure channel in the first place?  We are talking
about
MITM that takes place during the key agreement protocol.

> That is:
> If Alice and Bob are anonymous, they can't say *who* can read the messages
> they are sending, but they might be able to say that, assuming that their
> peer is following the protocol exactly (and in particular is not releasing
> the shared secret) *exactly one other party* can read the message.

That's false.  Alice and Bob can follow the basic DH protocol, exactly, but
Mallory is in the middle, and what you end up with is a shared key between
Alice and Bob and Mallory.
The property you are talking about, that *exactly one other party* can read
the message, is related to the *key authentication* property discussed in [1]
(among other places), which enables you to construct authenticated key
agreements.
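Anton's point shows up concretely in a toy run of unauthenticated DH (small, insecure parameters chosen for illustration only): the outcome is one key shared by Alice and Mallory and a *different* key shared by Bob and Mallory, not a single three-way key.

```python
import secrets

p, g = 2147483647, 5   # toy group; real DH needs a large, carefully chosen one

a = secrets.randbelow(p - 2) + 1   # Alice's exponent
b = secrets.randbelow(p - 2) + 1   # Bob's exponent
m = secrets.randbelow(p - 2) + 1   # Mallory's exponent

A, B, M = pow(g, a, p), pow(g, b, p), pow(g, m, p)
# Mallory intercepts A and B and sends his own M onward in both directions.

key_alice = pow(M, a, p)          # Alice's view of "our" shared key
key_bob = pow(M, b, p)            # Bob's view of "our" shared key
key_mallory_alice = pow(A, m, p)  # Mallory's key with Alice
key_mallory_bob = pow(B, m, p)    # Mallory's key with Bob

assert key_alice == key_mallory_alice   # Alice really shares with Mallory
assert key_bob == key_mallory_bob       # Bob really shares with Mallory
# Alice and Bob themselves share no key -- which is exactly what makes a
# later Mallory-free session detectable.
```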

>
> Note that if you have this, you can readily bootstrap pseudonymity:  Alice
> and Bob simply use their secure channel to agree on a shared secret, or on
> pseudonyms they will henceforth use between themselves.  If there were a
> MITM, he could of course impersonate each to the other ever afterward.

But how do they share the initial secret?  And with true anonymity you don't
want linkability.  Pseudonymity is a different thing: with pseudonymity you
have linkability.

--Anton



Re: DH with shared secret

2003-10-03 Thread Trevor Perrin
At 05:13 AM 10/3/2003 -0400, Jack Lloyd wrote:

This was just something that popped into my head a while back, and I was
wondering if this works like I think it does. And who came up with it
before me, because it was too obvious. It's just that I've never heard of
something along these lines before.
Basically, you share some secret with someone else (call it S).  Then you
do a standard issue DH exchange, but instead of the shared key being
g^(xy), it's g^(xyS)
But a bad guy MITM can try and verify guesses for S, so this is vulnerable 
to an offline dictionary attack.

[A bad guy server will choose y, and will receive g^x.  Now he can try 
guesses for S and see if the resulting g^(xyS) properly decrypts/verifies 
the client's confirmation message.]
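The offline attack Trevor describes can be sketched as follows (toy parameters, and the HMAC "confirmation message" is an assumed stand-in for whatever key-confirmation step the protocol uses — not taken from the thread):

```python
import hashlib
import hmac
import secrets

p, g = 2147483647, 5   # toy group; real DH needs a much larger prime
S = 4242               # the low-entropy shared secret (a "password")

# Client runs the protocol with a malicious server, who chose y:
x = secrets.randbelow(p - 2) + 1
y = secrets.randbelow(p - 2) + 1
gx = pow(g, x, p)                   # sent in the clear to the server
K = pow(pow(g, y, p), x * S, p)     # client's key, g^(xyS)
confirmation = hmac.new(str(K).encode(), b"client-finished",
                        hashlib.sha256).digest()

# Offline, the server tries candidate passwords against the captured g^x
# and the confirmation message -- no further interaction is needed.
def crack(gx, y, confirmation, candidates):
    for guess in candidates:
        K_guess = pow(gx, y * guess, p)   # g^(xy*guess)
        tag = hmac.new(str(K_guess).encode(), b"client-finished",
                       hashlib.sha256).digest()
        if hmac.compare_digest(tag, confirmation):
            return guess
    return None

print(crack(gx, y, confirmation, range(5000)))  # recovers the secret S
```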

The better approach is "DH-EKE": use S as a symmetric key, and exchange 
S(g^x), S(g^y).  No offline attacks, a bad guy only gets a single guess 
during the protocol run.

An even better approach is SRP, where the server doesn't need to know the 
password but only a function of it.  There's even an I-D for doing it with 
TLS -
http://www.ietf.org/internet-drafts/draft-ietf-tls-srp-05.txt

This would be a great way of doing password auth in protocols like 
POP/IMAP, HTTP, and elsewhere, since it mutually authenticates both parties 
based only on the password.

Only one implementation right now (gnuTLS in the CVS version), but 
hopefully that will change soon.

Trevor 



Re: anonymous DH & MITM

2003-10-03 Thread Taral
On Fri, Oct 03, 2003 at 02:16:22PM -0400, Jerrold Leichter wrote:
> The Interlock Protocol doesn't provide this - it prevents the MITM from
> modifying the exchanged messages, but can't prevent him from reading them.
> It's not clear if it can be achieved at all.  But it does make sense as a
> security spec.

Hardly. Just perform DH exchange over the interlock protocol. By your
own admission, the MITM could not change the factors being exchanged,
and by DH, the MITM cannot then know what the resulting key data is.

-- 
Taral <[EMAIL PROTECTED]>
This message is digitally signed. Please PGP encrypt mail to me.
"Be who you are and say what you feel, because those who mind don't
matter and those who matter don't mind." -- Dr. Seuss




Re: anonymous DH & MITM

2003-10-03 Thread Ian Grigg
"R. A. Hettinga" wrote:
> 
> At 2:16 PM -0700 10/2/03, bear wrote:
> >That's not anonymity, that's pseudonymity.
> 
> It seems to me that perfect pseudonymity *is* anonymity.

Conventionally, I think, anonymity is when one
publishes a pamphlet of political criticism, and
there is no name on the pamphlet.

When the same person publishes a second pamphlet,
there is nothing to connect the two.  (Other than
style, of course.)

Pseudonymity would result if "Whielacronx" were
to appear on both pamphlets, so the readers could
establish a reputational link between the pamphlets.

Anonymity doesn't support a connection.  Now, I
think there is value in trying to use these terms
as much as possible in alignment with their old
world roots.  But that might not always be possible.

So, in this sense, on the net, it is impossible
to open a connection anonymously.  The TCP/IP
connection system requires a source IP number,
and then allocates a port.  So a pseudonym of
IP/port gets allocated for the length of the
connection.

> Frankly, without the ability to monitor reputation, you don't have ways of 
> controlling things like transactions, for instance.


Bearer tokens normally achieve untraceability, in
the pure technical sense.  As most bearer systems
include a conventional identity based account of
some form, the notion of anonymity is confusing,
as certain actions can reverse the untraceability
and reveal the identities of the accounts.  E.g.,
double spending.

Of course, in the media and literature, anonymity
is widely used to refer to untraceable bearer
tokens.  As anonymity isn't so useful in its own
right, there appear to be few real problems with
this usage, until one starts bandying around more
than one concept.

Pseudonymous systems for transactions are normally
the reverse:  traceable, but one can only see the
chosen pseudonym, and that is not directly related
to any other info that might be useful.


But you knew all that :-)


> Who's also curious about exactly what "Whielacronx" means... ;-).

My guess is that it is what Zooko's son says when he
is learning his name.

iang



Re: Simple SSL/TLS - Some Questions

2003-10-03 Thread Roy M. Silvernail
iang wrote:
> 
> Jill Ramonsky wrote:

> > It's worth summing up the design goals here, so nobody gets confused.
> > Trouble is, I haven't figured out what they should all be. The main
> > point of confusion/contention right now seem to be (1) should it be in C
> > or C++?,
> 
> C.  And write C++ wrappers or let someone else do it.

Yes!  Speaking from experience, it's far easier to write a C++ wrapper 
for a C lib than the other way around.  And as Ian said, it's probably
easier to get the implementation correct in C, at least as a first pass.
-- 
Roy M. Silvernail is [EMAIL PROTECTED], and you're not
http://www.rant-central.com is the new scytale
Never Forget:  It's Only 1's and 0's!
SpamAssassin->procmail->/dev/null->bliss



Re: anonymous DH & MITM

2003-10-03 Thread Steven M. Bellovin
In message <[EMAIL PROTECTED]>, Benja Fallenstein writes:
>
>Hi,
>
>bear wrote:
>starting with Rivest & Shamir's Interlock Protocol from 1984.

Hmmm.  I'll go read, and thanks for the pointer.
>> 
>> Perhaps I spoke too soon?  It's not in Eurocrypt or Crypto 84 or 85,
>> which are on my shelf.  Where was it published?
>
>Communications of the ACM: Rivest and
>Shamir, "How to expose an eavesdropper", CACM vol 27 issue 4, 1984. If 
>you have an ACM Digital Library account, it's at
>
>http://portal.acm.org/ft_gateway.cfm?id=358053&type=pdf&coll=ACM&dl=ACM&CFID=12683735&CFTOKEN=40809148
>
>I've started writing a short summary earlier today, after reading, but 
>then I got distracted and didn't have time... sorry :) Hope this helps 
>anyway.
>
>The basic idea is that Alice sends *half* of her ciphertext, then Bob 
>*half* of his, then Alice sends the other half and Bob sends the other 
>half (each step is started only after the previous one was completed). 
>The point is that, having only half of the first ciphertext, Mitch can't 
>decrypt it, and thus can't pass on the correct thing to Bob in the first 
>step and to Alice in the second; so both can actually be sure they have 
>the public key of the person that made the other move.
>

You have to be careful how you apply it; sometimes, there are attacks.  
See Steven M. Bellovin and Michael Merritt, "An Attack on the Interlock
Protocol When Used for Authentication," in IEEE Transactions on
Information Theory 40:1, pp. 273-275, January 1994,
http://www.research.att.com/~smb/papers/interlock.ps for an example of 
how it's a bad protocol to use to send passwords.  

--Steve Bellovin, http://www.research.att.com/~smb




anonymity +- credentials

2003-10-03 Thread John S. Denker
On 10/03/2003 01:26 PM, R. A. Hettinga wrote:
>
> It seems to me that perfect pseudonymity *is* anonymity.
They're not quite the same thing; see below.

> Frankly, without the ability to monitor reputation, you don't have
> ways of controlling things like transactions, for instance. It's just
> that people are still mystified by the concept of biometric
> is-a-person identity, which strong cryptography can completely
> divorce from reputation.
We agree that identification is *not* the issue, and
that lots of people are confused about this.
I'm not sure "reputation" is exactly the right concept
either;  the notion of "credentials" is sometimes better,
and the operating-systems folks speak of "capabilities".
There are three main possibilities:
 -- named (unique static handle)
 -- pseudonymous (dynamic handles)
 -- anonymous (no handle at all)
Sometimes pseudonyms are more convenient than having no
handle at all.  It saves you the trouble of having to
re-validate your credentials at every micro-step of the
process (whatever the process may be).
Oftentimes pseudonyms are vastly preferable to a static
name, because you can cobble up a new one whenever you
like, subject to the cost of (re)establishing your
credentials from scratch.
The idea of linking (bidirectionally) all credentials
with the static is-a-person identity is a truly terrible
idea.  It dramatically *reduces* security.  Suppose Jane
Doe happens to have the following credentials
 -- Old enough to buy cigarettes.
 -- Has credit-card limit > $300.00
 -- Has credit-card limit > $3000.00
 -- Has car-driving privileges.
 -- Has commercial pilot privileges.
 -- Holds US citizenship.
 -- Holds 'secret' clearance.
When Jane walks into a seedy bar, someone can reasonably
ask to verify her "old-enough" credential.  She might
not want this query to reveal her exact age, and she
might *really* not want it to reveal her home address (as
many forms of "ID" do), and she might *really* *really*
not want it to reveal all her other credentials and
capabilities.
*) There is an exploding epidemic of "ID" theft.
That is a sure sign that people keep confusing
capability --> identity and identity --> capabilities.
*) There are those who want us to have a national ID-checking
infrastructure as soon as possible.  They think this will
increase security.  I think it is a giant step in the wrong
direction.
*) Reputation (based on a string of past interactions) is
one way, but not the only way, to create a credential that
has some level of trust.
=

We need a practical system for anonymous/pseudonymous
credentials.  Can somebody tell us, what's the state of
the art?  What's currently deployed?  What's on the
drawing boards?


Re: anonymous DH & MITM

2003-10-03 Thread Tim Dierks
At 02:16 PM 10/3/2003, Jerrold Leichter wrote:
From: Anton Stiglic <[EMAIL PROTECTED]>
| From: "Tim Dierks" <[EMAIL PROTECTED]>
| > I think it's a tautology: there's no such thing as MITM if there's no such
| > thing as identity. You're talking to the person you're talking to, and
| > that's all you know.
|
| That seems to make sense
No; it's false.  If Alice and Bob can create a secure channel between
themselves, it's reasonable to say that they are protected from MITM attacks
if they can be sure that no third party can read their messages.  That is:
If Alice and Bob are anonymous, they can't say *who* can read the messages
they are sending, but they might be able to say that, assuming that their
peer is following the protocol exactly (and in particular is not releasing the
shared secret) *exactly one other party* can read the message.
They've got exactly that same assurance in a MITM situation: unfortunately, 
Mallet is the one other party who can read the message. If you extend the 
concept to say "but I want Bob to be the one who can read the message", 
you've discarded anonymity. And saying that "I want only one party to have 
access to my message" is digital rights management.

Note that if you have this, you can readily bootstrap pseudonymity:  Alice
and Bob simply use their secure channel to agree on a shared secret, or on
pseudonyms they will henceforth use between themselves.  If there were a
MITM, he could of course impersonate each to the other ever afterward.
Even if you could make this assertion, how would you avoid something that 
I'll call the "Cyrano attack": that the person you're communicating with is 
not, in fact, the source of the witticisms you associate with his 
pseudonym? And how is that attack distinct from MITM?

 - Tim



Re: anonymous DH & MITM

2003-10-03 Thread Arnold G. Reinhold
At 11:50 PM -0400 10/1/03, Ian Grigg wrote:
...
A threat must occur sufficiently in real use, and incur
sufficient costs in excess of protecting against it, in
order to be included in the threat model on its merits.
I think that is an excellent summation of the history-based approach 
to threat modeling. There is another approach, however, 
capability-based threat modeling. What attacks will adversaries whom 
I reasonably expect to encounter mount once the system I am 
developing is deployed? Military planners call this the "responsive 
threat."  There are many famous failures of history-based threat 
modeling: tanks vs. cavalry, bombers vs. battleships, vacuum tubes 
vs. electromechanical cipher machines, box cutters vs skyscrapers, 
etc.

In the world of the Internet, the time available to put countermeasures 
in place once new threats are publicized appears to be 
shrinking rapidly. And we are only seeing one class of adversaries: 
the informal network of hackers. For the most part, they have not 
tried to maximize the damage they cause. There is another class, 
hostile governments and terrorists, who have so far not shown their 
hands but are presumably following developments closely.  I don't 
think we can restrict ourselves to threats already proven in the wild.

Then there is the matter of costs and who pays them. Industry is 
often willing to absorb small costs, or, better, fob them off onto 
consumers. Moderate costs can be insured against or written off as 
"extraordinary expenses." Stockholders are shielded from the full 
impact of catastrophic costs by the bankruptcy laws and can sometimes 
even get governments to subsidize such losses.

Perhaps guilds are the right model for cryptography. At their best, 
guilds preserve knowledge and uphold standards that would otherwise 
be ignored by market forces. Anyone out there willing to have open 
heart surgery performed by someone other than a member of the 
surgeon's guild?

Arnold Reinhold



Re: anonymous DH & MITM

2003-10-03 Thread Jerrold Leichter
| Date: Fri, 3 Oct 2003 10:14:42 -0400
| From: Anton Stiglic <[EMAIL PROTECTED]>
| To: Cryptography list <[EMAIL PROTECTED]>,
|  Tim Dierks <[EMAIL PROTECTED]>
| Subject: Re: anonymous DH & MITM
|
|
| - Original Message -
| From: "Tim Dierks" <[EMAIL PROTECTED]>
|
| >
| > I think it's a tautology: there's no such thing as MITM if there's no such
| > thing as identity. You're talking to the person you're talking to, and
| > that's all you know.
|
| That seems to make sense
No; it's false.  If Alice and Bob can create a secure channel between
themselves, it's reasonable to say that they are protected from MITM attacks
if they can be sure that no third party can read their messages.  That is:
If Alice and Bob are anonymous, they can't say *who* can read the messages
they are sending, but they might be able to say that, assuming that their
peer is following the protocol exactly (and in particular is not releasing the
shared secret) *exactly one other party* can read the message.

Note that if you have this, you can readily bootstrap pseudonymity:  Alice
and Bob simply use their secure channel to agree on a shared secret, or on
pseudonyms they will henceforth use between themselves.  If there were a
MITM, he could of course impersonate each to the other ever afterward.

The Interlock Protocol doesn't provide this - it prevents the MITM from
modifying the exchanged messages, but can't prevent him from reading them.
It's not clear if it can be achieved at all.  But it does make sense as a
security spec.
-- Jerry



Re: DH with shared secret

2003-10-03 Thread Anton Stiglic

- Original Message - 
From: "Jack Lloyd" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, October 03, 2003 5:13 AM
Subject: DH with shared secret


> This was just something that popped into my head a while back, and I was
> wondering if this works like I think it does. And who came up with it
> before me, because it was too obvious. It's just that I've never heard of
> something along these lines before.
>
> Basically, you share some secret with someone else (call it S).  Then you
> do a standard issue DH exchange, but instead of the shared key being
> g^(xy), it's g^(xyS)

Not exactly the same thing, but you get the same properties:  SKEME.
See section 3.3.2, Pre-shared key and PFS, of
SKEME:  A Versatile Secure Key Exchange Mechanism for Internet,
Hugo Krawczyk.
http://citeseer.nj.nec.com/krawczyk96skeme.html


--Anton



Re: Simple SSL/TLS - Some Questions

2003-10-03 Thread Ian Grigg
Jill Ramonsky wrote:
> 
> Having been greatly encouraged by people on this list to go ahead with a
> new SSL implementation, it looks like I am going to go for it, but I'd
> kinda like to not make any enemies in the process so I'll try to keep
> this list up to date with progress and decisions and stuff ... and I
> will ask a lot of questions.


Don't worry about making enemies, they'll worry about
it for you :-)


> It's worth summing up the design goals here, so nobody gets confused.
> Trouble is, I haven't figured out what they should all be. The main
> point of confusion/contention right now seem to be (1) should it be in C
> or C++?,


C.  And write C++ wrappers or let someone else do it.

(IMHO.  I don't write much C++ but it seems to be
basically dangerous and difficult to get right.  Also,
you already have enough on your plate in securely
handling C, without having to worry about the built
in insecurities of C++ :-)


> (2) should it support SSL or TLS or both?


Just TLS, and only the compulsory parts.  Leave in
hooks for other parts right now.  Also, ditch the
Anon-DH mode, and stick to the cert model.

Again, IMHO.  Your market will be developers who want
a simple secure channel product.  Market in general
will not be people who already have to meet SSL as a spec,
those people will go with OpenSSL.  You want the green
field developers, and there is no reason to offer them
old stuff.


> Regarding the choice of language, I think I would want this library (or
> toolkit, or whatever) to be somehow different from OpenSSL - otherwise
> what's the point? I mean ... this may be a dumb question, but ... if
> people want C, can they not use the existing OpenSSL? Or is it simply
> that OpenSSL is too complicated to use, so a "simpler than OpenSSL" C
> version is required. What I mean is, I don't want to duplicate effort.
> That seems dumb.


OpenSSL is thought to be complex, less well documented,
hard to use.  I've not "used" it, but I've hacked it a
couple of times, and that's how I remember it.  You sort
of have to be a C programmer so you can muck in and figure
out what it is doing.

Also, OpenSSL provides everything.  But you have to know
how to configure everything up.  So you have to understand
the choices placed in front of you, or blindly follow the
lead of various examples.

So, an alternate approach is to set up one way of doing
everything, to give you one rather good, but not perfect,
connection product.

It may very well have improved an awful lot since I've
looked, who knows?  But, reports keep coming in that it
is too hard to get into...


>  My inclination is still to go with C++, and figure out
> a way of turning it into C later if necessary ... but if majority
> opinion says otherwise I'll reconsider.


Go with your gut feel.  Don't listen to the experts,
you'll never get a consistent viewpoint, and even the
ones that disagree will be wrong :-)


> Now - SSL or TLS -  If
> you want to implement only TLS (for example, in a closed private network
> where all parties are known to be using the same version of the same
> protocol), why should you have to lug around SSL as well? I suppose I
> /want/ the solution to be "allow the toolkit to generate either
> SSL-only, or TLS-only, or SSL+TLS" ... but what I'm not sure about is,
> is the "TLS-only" option forbidden by the standard?


My advice:  if the standard gets in your way, ignore it.

All standards are camels, and your job is to ride beasts
of burden, not the other way around.  Deliver product,
and don't let a bunch of horse designers push you around.

Create a single security product that talks just pure TLS.
The latest and greatest.  You will have enough to worry
about keeping track of future changes, let alone dealing
with ancient history.

Bear in mind that you are looking at a year-long project
here.  There is a reason why OpenSSL is the choice...
because it is already written.


> And now some questions about SSL/TLS itself

I'll think more on those, or quietly slink away without
appearing more dumb than normal.  All the above is IMHO,
so please try hard to ignore it!

iang



Re: anonymous DH & MITM

2003-10-03 Thread Benja Fallenstein
Hi --

bear wrote:
On Thu, 2 Oct 2003, Zooko O'Whielacronx wrote:
R. L. Rivest and A. Shamir. How to expose an
eavesdropper. Communications of the ACM, 27:393-395, April 1984.
Ah.  Interesting, I see. It's an interesting application of a
bit-commitment scheme.
Ok, so my other mail came far too late to be useful to you ;-)

Why should this not be applicable to chess?  There's nothing to
prevent the two contestants from making "nonce" transmissions twice a
move when it's not their turn.
Maybe you already have a more advanced thing in mind than I do, but if 
your protocol would then look just like this--

- Alice sends first half of cyphertext of her move
- Bob sends first half of cyphertext of random nonce
- Alice sends second half
- Bob sends second half
and vice versa, consider this:

- Alice sends first half of cyphertext of her move (to Mitch)
- Mitch sends first half of cyphertext of random nonce (to Alice)
- Alice sends second half
- Mitch sends second half
- Mitch sends first half of cyphertext of Alice's move (to Bob)
- Bob sends first half of cyphertext of random nonce (to Alice)
...
I.e., you would need a protocol extension to verify the nonces somehow-- 
if that's possible at all-- or are you just faster than me, and have 
thought about a way to do that already?

Thx,
- Benja


Re: Simple SSL/TLS - Some Questions

2003-10-03 Thread Guus Sliepen
On Fri, Oct 03, 2003 at 05:55:25PM +0100, Jill Ramonsky wrote:

> It's worth summing up the design goals here, so nobody gets confused. 
> Trouble is, I haven't figured out what they should all be. The main 
> point of confusion/contention right now seem to be (1) should it be in C 
> or C++?, (2) should it support SSL or TLS or both?

If the applications have to interact with legacy systems, then they'd
need SSL...

> Regarding the choice of language, I think I would want this library (or 
> toolkit, or whatever) to be somehow different from OpenSSL - otherwise 
> what's the point? I mean ... this may be a dumb question, but ... if 
> people want C, can they not use the existing OpenSSL? Or is it simply 
> that OpenSSL is too complicated to use, so a "simpler than OpenSSL" C 
> version is required. What I mean is, I don't want to duplicate effort. 
> That seems dumb.

OpenSSL is very large, and although the API is pretty consistent and
easy to work with, the SSL part of it looks complicated anyway. Another
thing that is very annoying about OpenSSL is its license (and this has
probably been an incentive to create GnuTLS).

> My inclination is still to go with C++, and figure out 
> a way of turning it into C later if necessary ... but if majority 
> opinion says otherwise I'll reconsider.

Well as long as your library has a decent C interface, I wouldn't mind
if it was written in C, C++, Haskell or something even stranger.

-- 
Met vriendelijke groet / with kind regards,
Guus Sliepen <[EMAIL PROTECTED]>




RE: DH with shared secret

2003-10-03 Thread Xunhua Wang
Your scheme might work for a long random secret. However, if the shared
secret is a short one (say a password), depending on how the key
confirmation is performed, it would still be vulnerable to off-line
dictionary attacks. More related information can be found at
http://grouper.ieee.org/groups/1363/passwdPK/index.html. Steve

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jack Lloyd
Sent: Friday, October 03, 2003 5:14 AM
To: [EMAIL PROTECTED]
Subject: DH with shared secret

This was just something that popped into my head a while back, and I was
wondering if this works like I think it does. And who came up with it
before me, because it was too obvious. It's just that I've never heard of
something along these lines before.

Basically, you share some secret with someone else (call it S).  Then
you
do a standard issue DH exchange, but instead of the shared key being
g^(xy), it's g^(xyS)

My impression is that, unless you know S, you can't do a successful MITM
attack on the exchange. Additionally, AFAICT, it provides PFS, since if 
someone later recovers S, there's still that nasty DH exchange to deal 
with. Of course after S is known MITM becomes possible.

Given the recent climate around here, I'll add that I'm not planning on
using this for anything (I only use TLS, I swear! :P), I just thought it
was an semi-interesting idea.

-Jack



Re: using SMS challenge/response to secure web sites

2003-10-03 Thread Rich Salz
Now a company called NetPay.TV - I have no idea about
them, really - have started a service that sends out
a 6 digit pin over the SMS messaging features of the
GSM network for the user to type in to the website [4].
Authentify (http://www.authentify.com), does the same kind of thing. 
They put a number on a web page, and then they call you and you key in 
the number.  They were founded in 1999; not sure if they're still active.
	/r$
--
Rich Salz, Chief Security Architect
DataPower Technology   http://www.datapower.com
XS40 XML Security Gateway   http://www.datapower.com/products/xs40.html
XML Security Overview  http://www.datapower.com/xmldev/xmlsecurity.html



Re: DH with shared secret

2003-10-03 Thread Eric Rescorla
Jack Lloyd <[EMAIL PROTECTED]> writes:

> This was just something that popped into my head a while back, and I was
> wondering if this works like I think it does. And who came up with it
> before me, because it was too obvious. It's just that I've never heard of
> something along these lines before.
> 
> Basically, you share some secret with someone else (call it S).  Then you
> do a standard issue DH exchange, but instead of the shared key being
> g^(xy), it's g^(xyS)
> 
> My impression is that, unless you know S, you can't do a successful MITM 
> attack on the exchange. Additionally, AFAICT, it provides PFS, since if 
> someone later recovers S, there's still that nasty DH exchange to deal 
> with. Of course after S is known MITM becomes possible.
The problem with this protocol is that a single MITM allows 
a dictionary attack. There are better ways to do this.

Keywords: EKE, SRP, SPEKE

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: anonymous DH & MITM

2003-10-03 Thread R. A. Hettinga
At 2:16 PM -0700 10/2/03, bear wrote:
>That's not anonymity, that's pseudonymity.

It seems to me that perfect pseudonymity *is* anonymity.

Frankly, without the ability to monitor reputation, you don't have ways of controlling 
things like transactions, for instance. It's just that people are still mystified by 
the concept of biometric is-a-person identity, which strong cryptography can 
completely divorce from reputation.

Cheers,
RAH
Who's also curious about exactly what "Whielacronx" means... ;-).
-- 
-
R. A. Hettinga 
The Internet Bearer Underwriting Corporation 
44 Farquhar Street, Boston, MA 02131 USA
"... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience." -- Edward Gibbon, 'Decline and Fall of the Roman Empire'

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: anonymous DH & MITM

2003-10-03 Thread Benja Fallenstein
Hi,

bear wrote:
starting with Rivest & Shamir's Interlock Protocol from 1984.
Hmmm.  I'll go read, and thanks for the pointer.
Perhaps I spoke too soon?  It's not in Eurocrypt or Crypto 84 or 85,
which are on my shelf.  Where was it published?
Communications of the ACM: Rivest and
Shamir, "How to expose an eavesdropper", CACM vol 27 issue 4, 1984. If 
you have an ACM Digital Library account, it's at

http://portal.acm.org/ft_gateway.cfm?id=358053&type=pdf&coll=ACM&dl=ACM&CFID=12683735&CFTOKEN=40809148

I've started writing a short summary earlier today, after reading, but 
then I got distracted and didn't have time... sorry :) Hope this helps 
anyway.

The basic idea is that Alice sends *half* of her ciphertext, then Bob 
*half* of his, then Alice sends the other half and Bob sends the other 
half (each step is started only after the previous one was completed). 
The point is that having only half of the first ciphertext, Mitch can't 
decrypt it, and thus not pass on the correct thing to Bob in the first 
step and to Alice in the second, so both can actually be sure to have 
the public key of the person that made the other move.
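The halving trick is easy to simulate. In the sketch below, the XOR "cipher" and the fixed keys are stand-ins for the real public-key encryption the protocol uses; the point is only the ordering of the four half-messages:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream via iterated hashing -- illustration only, not secure.
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def xor_encrypt(key: bytes, msg: bytes) -> bytes:
    # XOR is its own inverse, so the same call decrypts.
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

def halves(ct: bytes):
    mid = len(ct) // 2
    return ct[:mid], ct[mid:]

k_alice, k_bob = b"alice-key", b"bob-key"
ct_a = xor_encrypt(k_alice, b"e4")   # Alice's move, encrypted
ct_b = xor_encrypt(k_bob, b"e5")     # Bob's reply, encrypted

a1, a2 = halves(ct_a)
b1, b2 = halves(ct_b)

# Step 1: Alice -> Bob: a1  (undecryptable on its own)
# Step 2: Bob -> Alice: b1  (sent only after a1 arrives)
# Step 3: second halves are exchanged; only now can either side decrypt.
assert xor_encrypt(k_alice, a1 + a2) == b"e4"
assert xor_encrypt(k_bob, b1 + b2) == b"e5"
```

Because Mitch holds only half a ciphertext at each step, he cannot decrypt, re-encrypt, and substitute in time; his only options are to relay honestly or to invent moves of his own.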

- Benja

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Simple SSL/TLS - Some Questions

2003-10-03 Thread Eric Rescorla
Jill Ramonsky <[EMAIL PROTECTED]> writes:
> Now - SSL or TLS - this confuses me. From what I've read in Eric's
> book, SSL version 3.0 or below is called SSL, wheras SSL version 3.1
> or above is called TLS.
I wouldn't use quite that terminology. No one talks about SSL version
3.1, but rather TLS 1.0. However, if we're just speaking about what's
in the version numbers in the wire protocol you're right.

> Have I misunderstood that? In any case, I note
> the bit in Eric's book (p73 in my edition) where it says "In general,
> it is expected that an implementation speaks all lesser versions"
> ... even if lesser versions become known to be insecure. I'm not sure
> I like this -
>
> and in any case, it goes against the design goal of "lightweight". If
> you want to implement only TLS (for example, in a closed private
> network where all parties are known to be using the same version of
> the same protocol), why should you have to lug around SSL as well? I
> suppose I /want/ the solution to be "allow the toolkit to generate
> either SSL-only, or TLS-only, or SSL+TLS" ... but what I'm not sure
> about is, is the "TLS-only" option forbidden by the standard?
No, but it's like this: 

There's no way to advertise that you speak "only TLS 1.0".
So, if, for instance, a client says "I speak TLS 1.0" then
the server is well within its rights to say "thanks, let's do
SSL 3.0". The only thing the client can do is break the connection.
This isn't a disaster since the alternative would be for the
server to break the connection when it discovered that the client
only spoke TLS and it only spoke SSL, but it's a bit inefficient.
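The version arithmetic above is small enough to sketch. Versions follow the wire convention (SSL 3.0 = {3, 0}, TLS 1.0 = {3, 1}); this is an illustration of the negotiation rule only, not of any real implementation:

```python
SSL_3_0 = (3, 0)
TLS_1_0 = (3, 1)

def server_choose(client_max, server_max):
    # The server picks the highest version both sides claim to support.
    return min(client_max, server_max)

def client_accept(chosen, client_floor):
    # A "TLS-only" client has no way to advertise a floor; all it can
    # do is break the connection when the server's choice is too low.
    return chosen >= client_floor

chosen = server_choose(TLS_1_0, SSL_3_0)
assert chosen == SSL_3_0                   # server answers "let's do SSL 3.0"
assert not client_accept(chosen, TLS_1_0)  # TLS-only client must hang up
```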

> Now, this scenario is all very well for banks and big businesses, but
> I guess I want to do "SSL for the rest of us". You see, the above
> scenario contains a couple of assumptions. It assumes (1) that Bob
> does not already have Alice's key - otherwise why would she need to
> send it? It further assumes (2) that Bob /does/ have Carol's key, /and
> that he trusts Carol/. Okay, fine, but what if these assumptions
> aren't met? I mean, let's assume that Bob already has Alice's
> key. (Let's say for sake of argument that she gave it to him
> personally). Now this means we can save on bandwidth by not having to
> transmit Alice's cert ... but already there are two problems: (1)
> would it be a violation of the protocol to omit the cert?,
Yes.

> and (2)
> without the cert, we would need some /other/ kind of message with
> which to replace it - one which says, simply, "Hi, this is Alice, use
> the copy of my key which you already have". So already I have
> questions - how free am I to allow variations in the handshake?
Not at all. If you want to do this you will have to use the ADH
mode. The alternative is to write a new ciphersuite specification.


> THE CIPHER SUITE
> 
> The list on page 74 of Eric's book looks a little limiting to me - not
> merely because the list is too short, but also because it's very
> design is wrong (in lumping all of the encryption ciphers together
> into a single 16-bit value with no internal structure). What if Alice
> would like to use, say, some elliptic curve function as her asymmetric
> algorithm?, or CAST-5 as her symmetric algorithm?, or SHA-256 as her
> hash function?

This has been an enormously contentious issue. I advise you to tune
into the IPsec mailing list of about 6 months ago for all the
arguments for and against suites. There's no point in reprising
them here. That's just the way TLS is.


> We could maybe fix this up by adding more entries to
> the list, but it's a global list, so who has the authority to add
> entries to it?

You need to submit it to the TLS WG for approval. 

That said, I'm trying to figure out why you care about this. 
The defined algorithms are good enough for almost all purposes.


> I believe that Alice and Bob should be able to
> communicate with whatever ciphers they wish, and should not need the
> permission of any global authority to do this. Are there any values in
> the range (0x to 0x) which are reserved for private use
> between consenting parties?
0xff* is all private.

See A.5 of RFC 2246. You have read the RFC, right?

> It is even possible for Alice and Bob to use a proprietry cipher. For
> example, what if the chosen encryption algorithm is "one-time-pad",
> using a block of bits communicated out of band (e.g. via so-called
> quantum cryptography, or that hard-drive alternative discussed in
> another thread). How can this be communicated in the CipherSuite
> field? I would like to believe there is a way of doing this ... but if
> not, I'd like to know that too, so I can find a neat way of extending
> the protocol to /make/ it possible.
There is an Extensions RFC. Can't remember the RFC number offhand.

> THE COMPRESSION METHOD
> 
> Exactly the same question. Alice and Bob are consenting adults, and
> they want to use the BZIP compression algorithm. I'm not going to tell
> them they can't. (I suspect though that th

Simple SSL/TLS - Some Questions

2003-10-03 Thread Jill Ramonsky
Having been greatly encouraged by people on this list to go ahead with a 
new SSL implementation, it looks like I am going to go for it, but I'd 
kinda like to not make any enemies in the process so I'll try to keep 
this list up to date with progress and decisions and stuff ... and I 
will ask a lot of questions.

It's worth summing up the design goals here, so nobody gets confused. 
Trouble is, I haven't figured out what they should all be. The main 
points of confusion/contention right now seem to be (1) should it be in C 
or C++?, (2) should it support SSL or TLS or both?

There are plenty of things I am really sure about, however. The two main 
design goals people seem to want are (1) lightweight, and (2) easy to 
use. (Plus the "obvious" goals of (3) it actually /will/ implement 
SSL/TLS and not something else, and (4) it shouldn't be full of bugs. I 
figure those go without saying).

Regarding the choice of language, I think I would want this library (or 
toolkit, or whatever) to be somehow different from OpenSSL - otherwise 
what's the point? I mean ... this may be a dumb question, but ... if 
people want C, can they not use the existing OpenSSL? Or is it simply 
that OpenSSL is too complicated to use, so a "simpler than OpenSSL" C 
version is required. What I mean is, I don't want to duplicate effort. 
That seems dumb.

C++ has many advantages, which include SECURITY advantages. Proper use 
of constructors, destructors, exceptions, std::strings, std::vectors, 
smart pointers, and so on can eliminate memory leaks, dangling pointers, 
buffer overruns, and just about everything else that can bring a good 
toolkit down. I already have a very nice, working, C++ secure 
big-integer library [here's one I wrote earlier]. By "secure" in this 
context, I mean that all big-integers get zeroed on deletion, so no 
crypto keys are ever left lying around in memory. Sure, these sort of 
things are all also possible in C, but it's so much more work to be 
/sure/ that every error condition is dealt with. Is embedded C++ 
non-existent then? I'm pretty sure it's possible to compile C++ to C 
(instead of to assembler) so a C++ to C wrapper can't be that difficult. 
(By contrast, a C to C++ wrapper would be easier, but the toolkit would 
have more bugs!) My inclination is still to go with C++, and figure out 
a way of turning it into C later if necessary ... but if majority 
opinion says otherwise I'll reconsider.
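For what it's worth, the zero-on-deletion discipline can be sketched in any language. A minimal Python version follows; unlike the C++ destructor approach, a garbage-collected runtime may have made copies the wrapper never sees, so treat this as an illustration of the idea rather than a guarantee:

```python
class SecretKey:
    """Hold key material in a mutable buffer and zero it on release."""

    def __init__(self, material: bytes):
        self._buf = bytearray(material)   # mutable, so it can be wiped

    def __enter__(self):
        return self._buf

    def __exit__(self, *exc):
        self.wipe()                       # runs even if an exception occurred

    def wipe(self):
        for i in range(len(self._buf)):
            self._buf[i] = 0

key = SecretKey(b"\x13\x37\xca\xfe")
with key as buf:
    assert bytes(buf) == b"\x13\x37\xca\xfe"   # usable inside the block
assert bytes(key._buf) == b"\x00" * 4          # zeroed on exit
```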

Now - SSL or TLS - this confuses me. From what I've read in Eric's book, 
SSL version 3.0 or below is called SSL, whereas SSL version 3.1 or above 
is called TLS. Have I misunderstood that? In any case, I note the bit in 
Eric's book (p73 in my edition) where it says "In general, it is 
expected that an implementation speaks all lesser versions" ... even if 
lesser versions become known to be insecure. I'm not sure I like this - 
and in any case, it goes against the design goal of "lightweight". If 
you want to implement only TLS (for example, in a closed private network 
where all parties are known to be using the same version of the same 
protocol), why should you have to lug around SSL as well? I suppose I 
/want/ the solution to be "allow the toolkit to generate either 
SSL-only, or TLS-only, or SSL+TLS" ... but what I'm not sure about is, 
is the "TLS-only" option forbidden by the standard?

And now some questions about SSL/TLS itself

THE HANDSHAKE PHASE

The assumption in Eric's book, roughly translated into Alice and Bob 
scenarios, goes something like this: Bob (client) says hello to Alice 
(server). Alice sends Bob her certificate (which is basically a copy of 
her public key, signed by a third party, Carol). Bob validates Alice's 
key (which is only possible if he already has a copy of Carol's public 
key), and then uses Alice's (now validated) public key to start sending 
encrypted messages. (There's more, but that's the important part).

Now, this scenario is all very well for banks and big businesses, but I 
guess I want to do "SSL for the rest of us". You see, the above scenario 
contains a couple of assumptions. It assumes (1) that Bob does not 
already have Alice's key - otherwise why would she need to send it? It 
further assumes (2) that Bob /does/ have Carol's key, /and that he 
trusts Carol/. Okay, fine, but what if these assumptions aren't met? I 
mean, let's assume that Bob already has Alice's key. (Let's say for sake 
of argument that she gave it to him personally). Now this means we can 
save on bandwidth by not having to transmit Alice's cert ... but already 
there are two problems: (1) would it be a violation of the protocol to 
omit the cert?, and (2) without the cert, we would need some /other/ 
kind of message with which to replace it - one which says, simply, "Hi, 
this is Alice, use the copy of my key which you already have". So 
already I have questions - how free am I to allow variations in the 
handshake?

THE CIPHER SUITE

The list on page 74 of Eric's book looks a little 

using SMS challenge/response to secure web sites

2003-10-03 Thread Ian Grigg
Merchants who *really* rely on their web site being
secure are those that take instructions for the
delivery of value over them.  It's a given that they
have to work very hard to secure their websites, and
it is instructive to watch their efforts.

The cutting edge in making web sites secure is occurring
in the gold community and presumably the PayPal community (I
don't really follow the latter).  AFAIK, this has been
the case since the late 90's, before that, some of the
European banks were doing heavy duty stuff with expensive
tokens.

e-gold have a sort of graphical number that displays
and has to be entered in by hand [1].  This works against
bots, but of course, the bot writers have conquered
it somehow.  e-gold are of course the recurrent victim
of the spoofers, and it is not clear why they have not
taken serious steps to protect themselves against
attacks on their system.

eBullion sell an expensive hardware token that I have
heard stops attacks cold, but suffers from poor take
up because of its cost [2].

Goldmoney relies on client certs, which also seems
to be poor in takeup.  Probably more to do with the
clumsiness of them, due to the early uncertain support
in the browser and in the protocol.  Also, goldmoney
has structured themselves to be an unattractive target
for attackers, using governance and marketing techniques,
so I expect them to be the last to experience real tests
of their security.

Another small player called Pecunix allows you to integrate
your PGP key into your account, and confirm your nymity
using PGP signatures.  At least one other player had
decided to try smart cards.

Now a company called NetPay.TV - I have no idea about
them, really - have started a service that sends out
a 6 digit pin over the SMS messaging features of the
GSM network for the user to type in to the website [4].
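The server side of such a PIN scheme is tiny. A sketch follows; the function names here are made up, and delivering the PIN over the SMS gateway is assumed to happen out of band:

```python
import hmac
import secrets

def issue_pin() -> str:
    # A uniformly random 6-digit PIN, as in the scheme described above.
    return "%06d" % secrets.randbelow(1_000_000)

def check_pin(expected: str, submitted: str) -> bool:
    # Constant-time comparison, so a guesser learns nothing from timing.
    return hmac.compare_digest(expected, submitted)

pin = issue_pin()          # ...then hand off to the SMS gateway, not shown
assert len(pin) == 6 and pin.isdigit()
assert check_pin(pin, pin)
```

The security comes from the second channel, not from this code: an attacker who has captured the password still cannot log in without also controlling the phone.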

It's highly innovative and great security to use a
completely different network to communicate with the
user and confirm their nymity.  On the face of it,
it would seem to pretty much knock a hole into the
incessant, boring and mind-bogglingly simple attacks
against the recommended SSL web site approach.

What remains to be seen is if users are prepared to
pay 15c each time for the SMS message.  In Europe,
SMS messaging is the rage, so there won't be much
of a problem there, I suspect.

What's interesting here is that we are seeing the
market for security evolve and bypass the rather
broken model that was invented by Netscape back in
'94 or so.  In the absence of structured, institutional,
or mandated approaches, we now have half a dozen distinct
approaches to web site application security [4].

As each of the programmes is voluntary, we have a
fair and honest market test of the security results [5].

iang



[1]  here's one if it can be seen:
https://www.e-gold.com/acct/gen3.asp?x=3061&y=62744C0EB1324BD58D24CA4389877672
Hopefully that doesn't let you into my account!
It's curious, if you change the numbers in the above
URL, you get a similar drawing, but it is wrong...

[2] All companies are .com, unless otherwise noted.

[3] As well as the activity on the gold side, there
are the adventures of PayPal with its pairs of tiny
payments made to users' conventional bank accounts.


[4]  Below is their announcement, for the record.

[5]  I just thought of an attack against NetPay.TV,
but I'll keep quiet so as not to enjoy anyone else's
fun :-)

== 
N E T P A Y. T V N E W S L E T T E R 
October 3rd, 2003 
Sent to NetPay members only, removal instructions at the
end of the message 
==
1. SMS entry - Unique Patent pending entry system -
World first! 
==

http://www.netpay.tv/news.htm 

 

What is this new form of entry? 

 

Do you own a mobile phone? Can you receive SMS
messages? Would you like to have your own personal
NetPay security officer contact you when entry to your
account is required? Netpay would like to introduce a world
first in account security. This new feature is so simple, yet
so effective - we believe every member will utilize it. 

 

If you answered yes to the above, then your SMS capable
mobile is a powerful security device, which will stop any
unforced attempts of entry into your Netpay account. No
need to purchase expensive security token hardware, no
need to be utterly confused on how to use the security
device. If you know how to use your mobile, then you know
how to totally protect your Netpay account from any
possible unlawful entry. 

 

This new system sends you an automated 6 digit secure
random PIN direct to your phone whenever you try to
access your account. Without this PIN, it is impossible to
login. The PIN arrives direct to your mobile within seconds!
It is as good as having your own personal security officer
calling you whenever someone is trying to access your
account! 

 

SMS AUTHENTICATED SEC

hackers have broken into GPRS billing

2003-10-03 Thread Steve Schear
Some time today (October 2nd), the GPRS world will reveal that it has a 
security vulnerability which has seen an undisclosed number of its 
customers ripped off. They've been trapped into connecting to malicious 
content servers, by hackers penetrating the billing system. The first 
international phone company to admit that they have installed a solution - 
one offered by Check Point - will be the German phone provider, E-Plus.

The scam is called "the over-billing attack." It works quite simply because 
of a link from the Internet world - unregulated - to the normally tightly 
regulated GSM planet. "Network administrators face an exponential onslaught 
of attacks that to date have traditionally been confined to the world of 
wire line data," was the summary from Check Point.

There are lots of potential issues, but the one which has forced the phone 
networks to acknowledge that there is a problem, is a scam where a company 
obtains IP addresses that the GPRS operators own, in the "cellular pool" 
and start pinging those addresses. When one of them responds, the scam 
operator knows that a user has been assigned the address. And, 
unbelievably, there was nothing to stop them simply providing services 
direct to that IP address - and taking the money out of the GPRS billing 
system to pay for it. The network, typically, only found out about the 
attack weeks later, when the angry customer queried the service provided, 
and insisted that they had not signed up for it.

Getting the IP address list costs the crook no more than it takes to log 
onto the GPRS network with a data call, and getting assigned an address by 
a perfectly standard DHCP server inside the operator's network.

Check Point hasn't revealed specifics of how it blocks this attack, but the 
solution is based on its Firewall-1 software, which is already installed in 
most cellular networks. "The problem could be fixed by changing the 
hardware," said a spokesman for Check Point. "But that would take a year to 
implement, and would require hardware changes in virtually every network 
operator's equipment. The alternative is to use the knowledge in the GPRS 
firewall to implement an action in the IP firewall."

The solution does require the operator to run Firewall-1 on its Internet 
equipment as well as its GPRS servers. Once that is in place, Checkpoint 
has a single management architecture for all its firewalls. "Our preferred 
solution is to write a rule that says: 'I have now closed this session on 
my GPRS side, so tell the IP firewall to look for any IP sessions with this 
IP address, and close them'," said a Check Point executive. Check Point 
expects several other announcements from phone network operators in the 
coming weeks.

The problem isn't limited to GPRS. Any mobile network that is internally 
trusted - and that includes next-level technology like UMTS 3G networks - 
will face similar threats when linking its internal, trusting network to 
the free-for-all that is the Internet, and will have to adopt similar 
solutions, says Check Point. "The vulnerability also applies between data 
networks. The GPRS Tunnelling Protocol, GTP, provides no security to protect 
the communications between GPRS networks," says the company in its sales 
blurbs. "So the GPRS/UMTS network is at risk, both from its own 
subscribers, and from its partner networks." Details from Check Point itself



steve

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: anonymous DH & MITM

2003-10-03 Thread Anton Stiglic

- Original Message - 
From: "Tim Dierks" <[EMAIL PROTECTED]>

>
> I think it's a tautology: there's no such thing as MITM if there's no such
> thing as identity. You're talking to the person you're talking to, and
> that's all you know.

That seems to make sense.   In anonymity providing systems often you
want one side to be anonymous, and the other to identify itself (like in
anonymous web surfing).  In this case, if you are using DH to exchange
keys, what you want is something like half-certified DH (see for example
section 2.3 of [1]), where the web server authenticates itself.  With half
certified DH, Alice (the user that is browsing in my example) can be
assured that she is really talking to Bob (web server she wanted to
communicate with), and not a MITM.


[1] http://crypto.cs.mcgill.ca/~stiglic/Papers/dhfull.pdf
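Half-certified DH is straightforward to sketch: only the server's ephemeral value is authenticated. In the toy below the group is textbook-sized, and an HMAC under a long-term server key stands in for the public-key signature (over a certified key) that a real deployment would use:

```python
import hashlib
import hmac
import secrets

P, G = 23, 5                             # toy group -- illustration only
server_longterm = b"server-signing-key"  # stands in for a certified keypair

def sign(msg: bytes) -> bytes:
    # Placeholder for a real signature under Bob's certified key.
    return hmac.new(server_longterm, msg, hashlib.sha256).digest()

def verify(msg: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(msg), sig)

# Bob (server): ephemeral y, plus a signature binding g^y to his identity.
y = secrets.randbelow(P - 2) + 1
gy = pow(G, y, P)
sig = sign(str(gy).encode())

# Alice (anonymous client): checks the signature before proceeding,
# which is what shuts out a MITM impersonating Bob.
x = secrets.randbelow(P - 2) + 1
gx = pow(G, x, P)
assert verify(str(gy).encode(), sig)
k_client = pow(gy, x, P)
k_server = pow(gx, y, P)
assert k_client == k_server              # agreed key; Alice stays anonymous
```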

--Anton



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Monoculture

2003-10-03 Thread Don Davis
>>> Is it possible for Bob to instruct his browser to
>>> (b) to trust Alice's certificate (which she handed
>>> to him personally)? (And if so, how?)

>> how it's done depends on the browser:
>> in MSIE 5:  (there seems to be no way to tell MSIE 5 to
>>  trust Alice's server cert for SSL connections,
>>  except to tell MSIE 5 to trust Alice's CA.)

> This seems to me to be a /serious/ flaw in the design of MSIE. 

well, before dismissing MSIE's cert-mgt completely,
you should check whatever version of MSIE ships on
Win2k/XP.  MSIE 5 is all I have on my Mac, because
I very rarely use MSIE.

but, if you want to complain about MSIE's security
features, you'll have to take a number and wait in
line...  B^(

- don davis, boston

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


DH with shared secret

2003-10-03 Thread Jack Lloyd
This was just something that popped into my head a while back, and I was
wondering if this works like I think it does. And who came up with it
before me, because it was too obvious. It's just that I've never heard of
something along these lines before.

Basically, you share some secret with someone else (call it S).  Then you
do a standard issue DH exchange, but instead of the shared key being
g^(xy), it's g^(xyS)

My impression is that, unless you know S, you can't do a successful MITM
attack on the exchange. Additionally, AFAICT, it provides PFS, since if
someone later recovers S, there's still that nasty DH exchange to deal 
with. Of course after S is known MITM becomes possible.
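A sketch of the proposed exchange, with textbook-sized parameters (not for real use); note the caveat raised elsewhere in the thread that a low-entropy S is open to a dictionary attack by an active MITM:

```python
import secrets

P, G = 23, 5        # toy group parameters -- illustration only
S = 7               # the out-of-band shared secret

x = secrets.randbelow(P - 2) + 1    # Alice's ephemeral exponent
y = secrets.randbelow(P - 2) + 1    # Bob's ephemeral exponent

# On the wire, the exchange looks like ordinary DH: g^x and g^y.
gx, gy = pow(G, x, P), pow(G, y, P)

# Each side folds S into its exponentiation: key = g^(xyS) mod p.
k_alice = pow(gy, x * S, P)
k_bob = pow(gx, y * S, P)
assert k_alice == k_bob

# A MITM who substitutes g^m but doesn't know S ends up keyed on
# g^(xm), not g^(xmS), so his sessions with the endpoints don't match.
```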

Given the recent climate around here, I'll add that I'm not planning on
using this for anything (I only use TLS, I swear! :P), I just thought it
was a semi-interesting idea.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: anonymous DH & MITM

2003-10-03 Thread bear


On Thu, 2 Oct 2003, Zooko O'Whielacronx wrote:

>
>> Perhaps I spoke too soon?  It's not in Eurocrypt or Crypto 84 or 85,
>> which are on my shelf.  Where was it published?


> R. L. Rivest and A. Shamir. How to expose an
> eavesdropper. Communications of the ACM, 27:393-395, April 1984.

Ah.  Interesting, I see. It's an interesting application of a
bit-commitment scheme.

Hmmm.  The key to this is that synchronous communications have to
happen.  When it's your turn to move, you create a message that gives
the move, then pad it to some unsearchable length, encrypt, and send
half.  MITM can't tell what the move is without seeing the second
half, so either has to make something up and send half of that, or
just transmit unchanged.  The second half is sent by each player when
the first half has been received, and includes a checksum on the first
half that was actually received.

Mitch has the choice of playing his own game of bughouse against each
of the contestants, which just turns him into a third contestant.  Or
he has the choice of allowing the first two contestants to complete
their game without interference.

Why should this not be applicable to chess?  There's nothing to
prevent the two contestants from making "nonce" transmissions twice a
move when it's not their turn.

Bear


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: quantum hype

2003-10-03 Thread Peter Fairbrother
[EMAIL PROTECTED] wrote:

>> From: [EMAIL PROTECTED]
>> [mailto:[EMAIL PROTECTED] Behalf Of Dave Howe
>> 
>> Peter Fairbrother may well be in possession of a break for the QC hard
>> problem - his last post stated there was a way to "clone" photons with
>> high accuracy in retention of their polarization
>> [SNIP]
>> 
> Not a break at all. The physical limit for cloning is 5/6ths of the bits will
> clone true. Alice need only send 6 bits for every one bit desired to assure
> Eve has zero information. For a 256-bit key negotiation, Alice sends 1536 bits
> and hashes it down to 256 bits for the key.

I've just discovered that that won't work. Eve can get sufficient
information to make any classical error correction or entropy distillation
techniques unusable.

See:  http://www.gap-optique.unige.ch/Publications/Pdf/9611041.pdf


You have to use QPA instead, which has far too many theoretical assumptions
for my trust.

-- 
Peter Fairbrother

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Monoculture / Guild

2003-10-03 Thread John Gilmore
> ... it does look very much from the outside that there is an
> informal "Cryptographers Guild" in place...

The Guild, such as it is, is a meritocracy; many previously unknown
people have joined it since I started watching it in about 1990.

The way to tell who's in the Guild is that they can break your protocols
or algorithms, but you can't break theirs.

While there are only hundreds of serious members of the Guild -- a
comfortable number for holding conferences on college campuses -- I
think just about everyone in it would be happier if ten times as many
people were as involved as they are in cryptography and security.
Then ten times as many security systems that everybody (including the
Guild members) depends on would be designed properly.  They certainly
welcomed the Cypherpunks to learn (and to join if they were serious
enough).

I consider myself a Guild Groupie; I don't qualify but I think
they're great.  I follow in their footsteps and stand on their shoulders.

Clearly there are much larger numbers of Guild Groupies than Guild
members, or Bruce Schneier and Neal Stephenson wouldn't be able to
make a living selling books to 'em.  :-)

John

PS: Of course there's a whole set of Mystic Secret Guilds of
Cryptography.  We think our openness will defeat their closedness,
like the free world eventually beat the Soviet Union.  There are some
good examples of that, such as our Guild's realization of the
usefulness of public-key crypto (we reinvented it independently, but they
hadn't realized what a revolutionary concept they already had).  Then
again, they are better funded than we are, and have more exemptions
from legal constraints (e.g. it's hard for us to do production
cryptanalysis, which is really useful when learning to design good
cryptosystems).

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: anonymous DH & MITM

2003-10-03 Thread Zooko O'Whielacronx

> Perhaps I spoke too soon?  It's not in Eurocrypt or Crypto 84 or 85,
> which are on my shelf.  Where was it published?

R. L. Rivest and A. Shamir. How to expose an eavesdropper. Communications of the ACM, 
27:393-395, April 1984.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Protocol implementation errors

2003-10-03 Thread Bill Frantz
From:

> -- Security Alert Consensus --
>   Number 039 (03.39)
>  Thursday, October 2, 2003
>Network Computing and the SANS Institute
>  Powered by Neohapsis
>
>*** {03.39.004} Cross - OpenSSL ASN.1 parsing vulns
>
>OpenSSL versions 0.9.6j and 0.9.7b (as well as prior) contain multiple
>bugs in the parsing of ASN.1 data, leading to denials of services. The
>execution of arbitrary code is not yet confirmed, but it has not been
>ruled out.

This is the second significant problem I have seen in applications that use
ASN.1 data formats.  (The first was in a widely deployed implementation of
SNMP.)  Given that good, security-conscious programmers have difficulty
getting ASN.1 parsing right, we should favor protocols that use
easier-to-parse data formats.

I think this leaves us with SSH.  Are there others?
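SSH's framing is a case in point: nearly everything on the wire is a uint32 length followed by that many bytes, so the parser and its bounds checks fit in a few lines. A sketch:

```python
import struct

def parse_ssh_string(buf: bytes, offset: int = 0):
    """Parse one SSH-style string: 4-byte big-endian length, then payload.

    Two grammar rules total, so the bounds checks are easy to get
    right -- the contrast with ASN.1's many length forms is the point.
    """
    if offset + 4 > len(buf):
        raise ValueError("truncated length field")
    (n,) = struct.unpack_from(">I", buf, offset)
    if offset + 4 + n > len(buf):
        raise ValueError("declared length exceeds buffer")
    return buf[offset + 4 : offset + 4 + n], offset + 4 + n

data = struct.pack(">I", 5) + b"hello" + struct.pack(">I", 2) + b"ok"
s1, off = parse_ssh_string(data)
s2, off = parse_ssh_string(data, off)
assert (s1, s2) == (b"hello", b"ok")

try:  # malformed input is rejected, not mis-parsed
    parse_ssh_string(struct.pack(">I", 99) + b"short")
    assert False, "should have raised"
except ValueError:
    pass
```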

Cheers - Bill


-
Bill Frantz| "There's nothing so clear as   | Periwinkle
(408)356-8506  | vague idea you haven't written | 16345 Englewood Ave
www.pwpconsult.com | down yet." -- Dean Tribble | Los Gatos, CA 95032


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: anonymous DH & MITM

2003-10-03 Thread bear


On Thu, 2 Oct 2003, Zooko O'Whielacronx wrote:

> I understand the objection, which is why I made the notion concrete
> by saying that Mitch wins if he gets the first player to accept the
> second player's move.  (I actually think that you can have some
> notion of "credit" -- for example a persistent pseudonym linked to a
> longer-term public key, but that isn't necessary to appreciate the
> current challenge.)

Wait.  That's not anonymity, that's pseudonymity.  And yes, you can
have pseudonymous open protocols that are immune to MITM.  My
contention was that you can't have anonymous open protocols that are
immune to MITM.

> Right.  I proposed that the first player send a public key even
> though the second player has no way to authenticate it.  The effect
> of this is that Mitch can no longer act as a purely passive proxy
> (i.e., he can't act like an Eve), because if he does the second move
> will be encrypted so that he can't read it.  Oh -- whoops!  This
> doesn't suffice to deter Mitch from acting as a passive proxy, since
> we didn't specify that he had to actually see the second move in
> order to win.  Maybe we should add the requirement that for Mitch to
> win he has to know what the second player's move was.

Okay, so the keypair is fresh-made and we are talking about an
anonymous protocol.  In that case Alice can't tell Mitch's key from
Bob's key and Bob can't tell Mitch's key from Alice's.

>> > starting with Rivest & Shamir's Interlock Protocol from 1984.
>>
>> Hmmm.  I'll go read, and thanks for the pointer.

Perhaps I spoke too soon?  It's not in Eurocrypt or Crypto 84 or 85,
which are on my shelf.  Where was it published?

Bear

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Monoculture

2003-10-03 Thread Victor . Duchovni
On Thu, 2 Oct 2003, Thor Lancelot Simon wrote:

> 1) Creates a socket-like connection object
>
> 2) Allows configuration of the expected identity of the party at the other
>end, and, optionally, parameters like acceptable cipher suite
>
> 3) Connects, returning error if the identity doesn't match.  It's
>probably a good idea to require the application to explicitly
>do another function call validating the connection if it decides to
>continue despite an identity mismatch; this will avoid a common,
>    and dangerous, programmer error.
>
> 4) Provides select/read operations thereafter.
>

Speaking as a Postfix developer, it would be very useful to have a
non-blocking interface that maintained an event bitmask and
readable/writable callbacks for the communications channel, allowing a
single-threaded application to get other work done while a TLS negotiation
is in progress, or to gracefully time out the TLS negotiation if progress
is too slow. This means that the caller should be able to tear down the
state of a partially completed connection at any time without memory leaks
or other problems.
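An interface along those lines might look like the sketch below. Everything here is hypothetical (the class name, the `Want` event mask, and the three stub "flights" standing in for a real TLS engine); the point is only the shape of the API: a pollable event mask, incremental pumping, and tear-down at any time:

```python
import enum

class Want(enum.Enum):
    READ = "read"      # caller should wait for the fd to be readable
    WRITE = "write"    # caller should wait for the fd to be writable
    DONE = "done"      # negotiation finished (or state torn down)

class NonBlockingHandshake:
    def __init__(self):
        # Stub handshake flights; a real engine would drive these.
        self._script = [Want.WRITE, Want.READ, Want.WRITE]
        self._step = 0
        self.closed = False

    def events(self) -> Want:
        # What should the event loop watch for next?
        if self._step >= len(self._script):
            return Want.DONE
        return self._script[self._step]

    def pump(self) -> None:
        # Called when the watched fd is ready; advances one flight.
        if not self.closed and self._step < len(self._script):
            self._step += 1

    def close(self) -> None:
        # Safe at any point mid-handshake: all state released here.
        self.closed = True
        self._script = []

hs = NonBlockingHandshake()
seen = []
while hs.events() is not Want.DONE:
    seen.append(hs.events())
    hs.pump()                       # event loop reported readiness
assert seen == [Want.WRITE, Want.READ, Want.WRITE]

slow = NonBlockingHandshake()
slow.close()                        # e.g. negotiation timed out
assert slow.events() is Want.DONE   # no leaks, no dangling state
```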

-- 
Victor Duchovni
IT Security,
Morgan Stanley

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]