Re: Comments/summary on unicity discussion

2003-03-09 Thread Joseph Ashwood
- Original Message -
From: Joshua Hill [EMAIL PROTECTED]
Subject: Re: Comments/summary on unicity discussion


 It doesn't deal with plaintext, just ciphertext.  In fact, unicity
 distance is only valid for a ciphertext-only attack.  Once you get a
 known plaintext/ciphertext pair, a high unicity distance works against
 you (more on this later). In addition, it isn't certain that after
 observing the requisite unicity-distance number of ciphertext units
 you can uniquely determine the key; it is merely very likely.

There appears to be an error in there. The unicity distance has a very
strong correlation with the uncertainty of the plaintext (entropy per
message). Having access to a plaintext/ciphertext pair (often it takes
multiple pairs) removes all uncertainty as to the plaintext, which changes
the unicity distance calculation by making the unicity distance as short as
possible. That makes "Once you get a known plaintext/ciphertext pair, a high
unicity distance works against you" seem more than a little odd as a
statement.
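
To make the dependence concrete, here is a small illustrative sketch (in
Python; the numbers are mine, not from the thread) of the standard
unicity-distance formula U = H(K)/D, where D is the per-character redundancy
of the plaintext:

import math

key_entropy = 128                     # H(K): bits of key entropy
alphabet = math.log2(26)              # ~4.7 bits/letter if letters were uniform
rate_unknown = 1.5                    # ~1.5 bits/letter for English plaintext
rate_known = 0.0                      # a known plaintext has no uncertainty

for rate in (rate_unknown, rate_known):
    D = alphabet - rate               # redundancy per letter
    print(round(key_entropy / D, 1))  # ~40 letters vs. ~27 letters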

On K complexity: while K complexity offers a convenient, if somewhat
inaccurate, upper bound on the entropy, that is basically where the
relationship ends. Permit me to give the basic example. Which of these
strings has higher entropy:
kevsnblawtrlnbatkb
kevsnblawtrlnbatkb
One was created by slapping my hands on the keyboard, and so contains some
entropy; the other was created through copy and paste, and so contains none.
However, the K complexity of the two is identical. The portion of the
equation you are forgetting is that the key to the pRNG may itself be
compressible. This leads to somewhat of a logic loop, but at the end of it
lies the absolute smallest representation, as a compression in a given
language (the only sense in which K complexity makes sense).
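
A quick illustration of the disconnect, using compressed length as a crude,
admittedly imperfect stand-in for K complexity:

import zlib

banged = b"kevsnblawtrlnbatkb"  # typed by slapping the keyboard: some entropy
copied = b"kevsnblawtrlnbatkb"  # produced by copy and paste: no new entropy
# identical K complexity (proxied by compressed length), different entropy
print(len(zlib.compress(banged)) == len(zlib.compress(copied)))  # True
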
Joseph Ashwood


Trust Laboratories
http://www.trustlaboratories.com




Re: Key Pair Agreement?

2003-01-21 Thread Joseph Ashwood
- Original Message -
From: Jeroen C. van Gelderen [EMAIL PROTECTED]
 Here is a scenario: Scott wants Alice to generate a key pair after
 which he will receive Alice's public key. At the same time, Scott wants
 to make sure that this key pair is newly generated (has not been used
 before).
[snip]
 It would seem that the DSA key structure facilitates this:

 1. Scott sends SEED1 to Alice.
 2. Alice picks a random number SEED2.
 3. Alice sets SEED=SHA1(SEED1 || SEED2).
 4. Alice generates a set of DSA parameters P, Q, G using the
 algorithm in Appendix 2, FIPS 186-2.
 5. Alice generates a key pair (x,y) using the parameters from (4).
 6. Alice sends SEED2, counter, P, Q, G, y to Scott.
 7. Scott generates P', Q', G' based on SEED=SHA1(SEED1 || SEED2),
 counter, and compares them to P, Q, G.

 This is a very expensive key generation operation but it would
 seem to work.

 My questions are:
 0) who has invented this before?
 1) does it achieve what I think it achieves?
 2) does anybody know of more efficient algorithms?

I can think of a more efficient algorithm off the top of my head.
1. Scott creates P,Q,G
2. Scott sends P,Q,G
3. Alice verifies P,Q,G and creates (x,y)
4. Alice sends y (x, of course, stays private)

Scott can be certain that, within bounds, P,Q,G have never been found
before, and it vastly reduces the computation (since there was no stated
requirement that Alice believe the pair had never been used before).
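
For concreteness, a minimal sketch of Alice's side in steps 3-4, assuming
P, Q, G have already been verified (toy interface; a real implementation
would use a vetted DSA library):

import secrets

def make_keypair(P: int, Q: int, G: int):
    x = secrets.randbelow(Q - 1) + 1   # private key, uniform in [1, Q-1]
    y = pow(G, x, P)                   # public key; only y is sent in step 4
    return x, y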

Now on to Jack's question:
 Actually, that makes me wonder. Given:
   y_i = (g_i^x) mod p_i for i in 0...n

 Can you find x more easily than you would with just y = g^x mod p?
 Obviously it couldn't be any harder, but I wonder if there is any
 practical advantage for an attacker there.

I believe it is unfortunately so, depending on how strong P is. If (P-1)/2Q
is always prime, then I believe there is little erosion (but some remains).
The foundation of this is as follows:
X can be determined from Y if the attacker can find the discrete logs in all
the subgroups and combine them by the CRT.
I believe there is a way to extend this such that, provided you have enough
subgroups to uniquely identify a number > Y, X can be determined.
If this is the case, then having a large number of (Y,P,G,Q) around can erode
the security of the DLP.
I also believe it is the case that each of the factors must be unique,
otherwise it is simply a collision in the CRT context (although a change of
G may correct this).
If this is the case, then it would be important that Alice generate a new X
for each pair in existence, which is simply good policy to begin with.
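
The CRT combination being described is essentially the Pohlig-Hellman
observation. A toy sketch, with small made-up parameters rather than
anything from this thread, showing x recovered from y = g^x mod p when p-1
is smooth:

from math import prod

def subgroup_dlog(g, y, p, order):
    # brute-force discrete log in the subgroup of the given (small) order
    acc = 1
    for k in range(order):
        if acc == y:
            return k
        acc = (acc * g) % p
    raise ValueError("no solution")

def crt(residues, moduli):
    # combine x = r_i (mod m_i) into x mod prod(m_i)
    M = prod(moduli)
    return sum(r * (M // m) * pow(M // m, -1, m)
               for r, m in zip(residues, moduli)) % M

def pohlig_hellman(g, y, p, factors):
    # factors: the distinct primes dividing the group order p-1
    n = p - 1
    residues = [subgroup_dlog(pow(g, n // q, p), pow(y, n // q, p), p, q)
                for q in factors]
    return crt(residues, factors)

p, g, x = 211, 2, 137          # p-1 = 2*3*5*7; 2 generates the full group
y = pow(g, x, p)
assert pohlig_hellman(g, y, p, [2, 3, 5, 7]) == x
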
Joe





Re: Microsoft marries RSA Security to Windows

2002-10-10 Thread Joseph Ashwood

- Original Message -
From: Roy M. Silvernail [EMAIL PROTECTED]
 And here, I thought that a portion of the security embodied in a SecurID
 token was the fact that it was a tamper-resistant, independent piece of
 hardware.  Now M$ wants to put the PRNG out in plain view, along with its
 seed value. This cherry is just begging to be picked by some blackhat,
 probably exploiting a hole in Pocket Outlook.

Unfortunately, SecurID hasn't been that way for a while. RSA has offered
software-only tokens (executables) for various operating systems for some
time now. I agree this destroys what there was of the security, reducing it
to basically the level of username/password, albeit at a far higher price.
But I'm sure it was a move to improve their bottom line.
Joe





Re: Re: Overcoming the potential downside of TCPA

2002-08-15 Thread Joseph Ashwood

- Original Message -
From: Ben Laurie [EMAIL PROTECTED]
  The important part for this, is that TCPA has no key until it has an
  owner, and the owner can wipe the TCPA at any time. From what I can tell
  this was designed for resale of components, but is perfectly suitable as
  a point of attack.

 If this is true, I'm really happy about it, and I agree it would allow
 virtualisation. I'm pretty sure it won't be for Palladium, but I don't
 know about TCPA - certainly it fits the bill for what TCPA is supposed
 to do.

I certainly don't expect many people to believe me simply because I say it
is so. Instead I'll supply a link to the authority on TCPA, the 1.1b
specification, available at
http://www.trustedcomputing.org/docs/main%20v1_1b.pdf . There are other
documents; unfortunately the main spec gives substantial leeway, and I
haven't had time to read the others (I haven't fully digested the main spec
yet either). I encourage everyone who wants to decide for themselves to read
the spec, all 332 pages of it. If you reach different conclusions than I
have, feel free to comment; I'm sure there are many people on these lists
that would be interested in the justification for either position.

Personally, I believe I've processed enough of the spec to state that TCPA
is a tool, and like any tool it has both positive and negative aspects.
Provided the requirement to be able to turn it off remains (and, for my
preference, they should add a requirement that the motherboard continue
functioning even when the TCPA module(s) is/are physically removed from the
board), the current spec does seem to lean towards being as advertised:
primarily a tool for the user. Whether this will remain true in the version
2.0 that is in the works I cannot say, as I have no access to it; although
if someone is listening with an NDA nearby, I'd be more than happy to
review it.
Joe





Re: TCPA not virtualizable during ownership change (Re: Overcoming the potential downside of TCPA)

2002-08-15 Thread Joseph Ashwood

This is going to be a very long, and very boring, message. But it should
highlight why we have differing opinions about so very many capabilities of
the TCPA system. For the sake of avoiding supplying too little information,
I have simply searched for the term and will comment on each location where
it appears.

- Original Message -
From: Adam Back [EMAIL PROTECTED]
 Phew... the document is certainly tortuous, and has a large number of
 similarly and confusingly named credentials, certificates and keys,
 however from what I can tell this is what is going on:

I wholeheartedly agree. 332 pages to say 5 pages worth of real information
is not helpful.


 Summary: I think the endorsement key and its hardware manufacturer's
 certificate are generated at manufacture and are not allowed to be
 changed.
[Search criteria: endorsement]
While I haven't found any solid evidence either way, they seem to almost
deliberately avoid that discussion. On page 22 I found a fatal error code,
TCPA_NO_ENDORSEMENT at TCPA_BASE+35, "The TPM does not a EK installed";
attempting to interpret the bad grammar, I believe this should state "The
TPM does not [have] an [Endorsement Key] installed", which seems to indicate
that the platform may ship without one.

On page 35 the endorsement key is listed as persistent data, which at first
would indicate that endorsement key creation happens before shipping; but
since there is also an RNG state variable stored persistently, my confidence
in this is undermined. Adding to the complications, near the end of the page
the table says "This is the TPM's endorsement key pair. See 9.2. The default
value is manufacturer-specific", which indicates that it does ship with an
endorsement key, but that the key can be changed by the owner.

Page 38: the existence of the CBKPUsed flag hints that the endorsement key
pair need not always be present. Unfortunately the spec goes on to say
"NOTE: This flag has no default value as the key pair MUST be created by one
or the other mechanism", which certainly confuses things.

Page 41, "TPM_CreateEndorsementKey may be called before TPM_Startup. This is
necessary because TPM_Startup will fail unless an endorsement key exists",
is of no help either way. As with all the others, it states that there may
exist conditions where the EK may not exist, but gives no hint whether this
is before or after the TPM leaves the plant.

On page 79, the EK is mentioned twice. The first time is useless for our
purpose. The second states "This SHALL be the TPM endorsement credential",
which indicates that an endorsement credential must exist. Other locations,
though, seem to hint that a void endorsement credential may be possible.

Starting on page 84 is section 4.32.1, which seems to be as close to an
authority on the EK as possible, but lacks a statement of whether the EK
ships with the TPM or is added later. It does, however, clearly indicate
that the creation of the EK occurs before the Privacy CA is contacted, which
was already agreed on.

[somewhere around here I stopped addressing every occurrence of the word
endorsement because most of them are frivolous]

Page 135, Section 5.11.1, clearly states "The new owner MUST encrypt the
Owner authorization data and the SRK authorization data using the PUBEK."
This clearly indicates that the EK must exist before ownership can be taken.
Other places have hinted that ownership may be taken and the EK updated
afterwards, which completely contradicts either the one-timeness or this
statement.

Page 135: "If no EK is present the TPM MUST return TCPA_NO_ENDORSEMENT",
which indicates that one can at least attempt to take ownership before an EK
is present, which would contradict the requirement that the EK come from the
factory.

Page 178, Section 7.3, I mention only because it presents a rather
interesting possibility: it hints that under some circumstances it may be
acceptable for a manufacturer to copy the data from one TCPA to another.
This portion begins with "The manufacturer takes the maintainance blob . .
." This may, however, only be to update an existing one to address flaws or
meet new capabilities.

Page 183 hints that even the manufacturer is not allowed to know the EK
public key, which complicates things no end, because the Privacy CA
certainly cannot at that point be permitted to view it. This would indicate
that even if the EK is shipped with the system, it can never leave the
system. This would limit the ability of the EK to simply certifying the
owner; if that is true then it confuses me even further.

Page 213, section 8.10, clearly states that if the owner clears the TCPA,
everything is cleared except the endorsement key pair. This would indicate
that this is truly a one-shot deal.

Page 240 states "This is a dead TPM. It has failed its startup smoke test.
It should not leave the factory floor." This indicates that the EK must be
created before the TPM leaves the factory.

Section 9.2, page 261, states that TPM_CreateEndorsementKeyPair 

Re: Overcoming the potential downside of TCPA

2002-08-14 Thread Joseph Ashwood

- Original Message -
From: Ben Laurie [EMAIL PROTECTED]
 Joseph Ashwood wrote:
  There is nothing stopping a virtualized version being created.

 What prevents this from being useful is the lack of an appropriate
 certificate for the private key in the TPM.

Actually that does nothing to stop it. Because of the construction of TCPA,
the private keys are registered _after_ the owner receives the computer;
that window of opportunity works against this as well. The worst case for
the cost of this is to purchase an additional motherboard (IIRC Fry's has
them as low as $50), giving the ability to present a legitimate purchase.
The virtual private key is then created and registered using the credentials
borrowed from the second motherboard. Since TCPA doesn't allow for direct
remote queries against the hardware, the virtual system will actually have
first shot at the incoming data. That's the worst case. The expected case:
you pay a small registration fee claiming that you accidentally wiped your
TCPA. The best case: you claim you accidentally wiped your TCPA, they charge
you nothing to remove the record of your old TCPA, and they replace it with
your new (virtualized) TCPA. So at worst this will cost $50. Once you've got
a virtual setup, that virtual setup (with all its associated purchased
rights) can be replicated across an unlimited number of computers.

The important part for this, is that TCPA has no key until it has an owner,
and the owner can wipe the TCPA at any time. From what I can tell this was
designed for resale of components, but is perfectly suitable as a point of
attack.
Joe





Re: Seth on TCPA at Defcon/Usenix

2002-08-10 Thread Joseph Ashwood

- Original Message -
From: AARG! Anonymous [EMAIL PROTECTED]
[brief description of Document Revocation List]

Seth's scheme doesn't rely on TCPA/Palladium.

Actually it does, in order to make it valuable. Without a hardware assist,
the attack works like this:
Hack your software (which is in many ways almost trivial) to reveal its
private key.
Watch the protocol.
Decrypt the protocol.
Grab the decryption key.
Use the decryption key.
Problem solved.

With hardware assist, trusted software, and a trusted execution environment
it (doesn't) work like this:
Hack your software.
DOH! The software won't run.
Revert back to the stored software.
Hack the hardware (extremely difficult).
Virtualize the hardware at a second layer, using the grabbed private key.
Hack the software.
Watch the protocol.
Decrypt the protocol.
Grab the decryption key.
Use the decryption key.
Once the file is released, the server revokes all trust in your client,
effectively removing all files from your computer that you have not
decrypted yet.
Problem solved? Only for valuable files.

Of course if you could find some way to disguise which source was hacked,
things change.

Now about the claim that MS Word would not have this feature: it almost
certainly would. The reason is that business customers are of particular
interest to MS, since they supply a large portion of the money for Word (and
everything else). Businesses would want to be able to configure their
network in such a way that critical business information couldn't be leaked
to the outside world. Of course this removes the advertising path of
conveniently leaking carefully constructed documents to the world, but for
many companies that is a trivial loss.
Joe





Re: deterministic primality test

2002-08-09 Thread Joseph Ashwood

[I've got some doubts about the content here but I think the
discussion is certainly on charter --Perry]

Since I have received a number of private replies all saying approximately
the same thing (use a lookup table for small n, use the algorithm for
large), allow me to extend my observation.
To quote myself from earlier:
 1: if (n is of the form a^b, b>1) output COMPOSITE;
 2: r=2
 3: while(r < n) {
 4:if(gcd(n,r)=/=1) output COMPOSITE;
 5:if(r is prime)
 6:let q be the largest prime factor of r-1;

n = prime number > 2
1: n is not of the form a^b, b>1 (we already know it is prime)
2: r=2
3: 2 < n (since n > 2)
4: gcd = 1 (since n is prime)
5: 2 is prime
6: q = ?
r=2
q = largest prime factor of (2-1) = largest prime factor of 1. 1 has no
prime factors, since 1 is by definition not prime, but has no integer
divisors except 1.

So the algorithm cannot be executed on any prime value, with the exception
of 2 (which it correctly states is prime). It is possible that the algorithm
is simply incomplete. The apparent intention is:
6: if(r == 2) q = 1;
else q is the largest prime factor of r-1

A few additional observations under the assumption that this is the desired
statement:
The algorithm is equivalent to (I've kept the numbering of the original
instruction order):
1: if (n is of the form a^b, b>1) output COMPOSITE;
2: r=2
3: while(r < n) {
5:if(r is prime)
4:if(gcd(n,r)=/=1) output COMPOSITE;
6: if(r == 2) q = 1;
else q is the largest prime factor of r-1
7.if(q >= 4sqrt(r)log(n)) and (n^((r-1)/q) =/= 1 (mod r))
8break
9:r <- r+1
Proving this is trivial: since r is stepped sequentially, the only new
results will occur when r is prime (otherwise gcd is not dependable); this
will reduce the number of calls to gcd, and so reduce the asymptotic time.
This is obviously bounded by the time it takes to check whether r is prime.
However, note that we can now conceptually replace it with the following,
without changing anything:
1: if (n is of the form a^b, b>1) output COMPOSITE;
2: r=2
3: while(r < n) {
4:if(gcd(n,r)=/=1) output COMPOSITE;
6: if(r == 2) q = 1;
else q is the largest prime factor of r-1
7.if(q >= 4sqrt(r)log(n)) and (n^((r-1)/q) =/= 1 (mod r))
8break
9':r <- nextprime(r)
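
To make the corrected control flow concrete, here is a runnable sketch of
the loop above (my reconstruction, with naive trial-division helpers and log
base 2 assumed; this is not the published algorithm):

from math import gcd, isqrt, log2

def is_prime(k):
    return k >= 2 and all(k % d for d in range(2, isqrt(k) + 1))

def largest_prime_factor(k):
    f, d = 1, 2
    while d * d <= k:
        while k % d == 0:
            f, k = d, k // d
        d += 1
    return k if k > 1 else f

def is_perfect_power(n):
    # crude float-based check; fine for small n
    for b in range(2, n.bit_length() + 1):
        a = round(n ** (1.0 / b))
        if any(c > 1 and c ** b == n for c in (a - 1, a, a + 1)):
            return True
    return False

def find_r(n):
    # returns "COMPOSITE", or the r at which the loop exits
    if is_perfect_power(n):
        return "COMPOSITE"
    r = 2
    while r < n:
        if gcd(n, r) != 1:
            return "COMPOSITE"
        q = 1 if r == 2 else largest_prime_factor(r - 1)
        if q >= 4 * isqrt(r) * log2(n) and pow(n, (r - 1) // q, r) != 1:
            break
        r = next(k for k in range(r + 1, 2 * r + 1) if is_prime(k))  # nextprime

    return r

# for a small prime n the while loop exhausts (the O(n) worst case below)
print(find_r(97))   # prints 97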

The asymptotic time is now bounded primarily by the running time of
nextprime(); the current best running time for this is not polynomial (but
there are better ways than their step-by-1, prime-check method). In fact,
assuming that the outer while loop is the primary exit point (which is
certainly true for the worst case), the algorithm presented takes O(n) time
(where n is the value of the function's input), which in binary notation is
equivalent to the conventional O(2^m), where m is the length of the input.
The best case is entirely different: best-case running time is O(gcd()),
which is polynomial. Their claim of polynomial running time is therefore
suspect. And over the course of the entire algorithm, the actual running
time is limited by a function I've dubbed listAllPrimesLessThan(n), which is
well studied and whose output has exponential length, making such a function
strictly exponential.

Additionally, it has been noted that certain composite values may pass the
test, regardless of the claim of perfection. It is unlikely that this
function is of much use.
Joe





Re: building a true RNG

2002-07-27 Thread Joseph Ashwood

- Original Message -
From: John S. Denker [EMAIL PROTECTED]
To: David Honig [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; 'Hannes R. Boehm' [EMAIL PROTECTED]; 'Ian
Hill' [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Saturday, July 27, 2002 10:52 AM
Subject: Re: building a true RNG


 I wrote:
   a) if the hash function happens to have a property I call no
  wasted entropy then the whitening stage is superfluous (and
  you may decide to classify the hash as non-simple);

I disagree. The whitening stage can perform the very important step of
allowing the entropy to be stretched. Ideally a true RNG would be fast
enough to supply all our needs, but that typically doesn't happen. Instead
we have to depend on stretching tricks, basically using a primitive in
counter mode, to stretch the bits we have as far as is needed. This can be
combined with the distillation stage itself by inserting a fixed string,
copying the state, and finishing the calculation, but can conceptually be
thought of as a separate stage.
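
A minimal sketch of that stretching trick, with SHA-256 standing in for
whatever primitive is run in counter mode:

import hashlib

def stretch(seed: bytes, nbytes: int) -> bytes:
    # expand a distilled seed by hashing it alongside a running counter
    out, ctr = b"", 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:nbytes]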

 David Honig responded:
  Not wasting entropy does not mean that a function's output
  is white, i.e. uniformly distributed.  E.g., a function
  which doubles every bit: 0->00, 1->11 preserves total
  entropy but does not produce uniformly distributed output.
  (It also reduces entropy/symbol.)

 That's a red-herring tangent.  I'm not talking about any old
 function that doesn't waste entropy;  I'm talking about a
 !!hash!! function that doesn't waste entropy.

Such a construct simply does not exist; more on this later.

 And remember:  in addition to having a non-entropy-wasting hash
 function, we are also required to saturate its input.  Then we can
 conclude that the output is white to a very high degree, as
 quantitatively discussed in the paper.




  The simple-hash's --let's call it a digest-- function is to increase the
  entropy/symbol by *reducing* the number of symbols while preserving
  *total* entropy.
I separated this out because I agree with the statement completely: that is
the one and only function of the distilling function, whether it is a hash,
a cipher, or a miracle construct.


 Total entropy is preserved in the non-saturated regime.  This is
 documented in upper rows of the table:
   http://www.monmouth.com/~jsd/turbid/paper/turbid.htm#tab-saturation

I disagree. There are several ways of viewing the problem. Model the
distillation function as a perfect entropy smoother over k bits (other than
that, I will be assuming the worst case). We see the following happen:
When the first bit is inserted, each of the k bits receives 1/k bit of
entropy, for a total of 1 bit.
When the second bit is inserted, one of those bits is driven to a full bit
of entropy; the entire system now has 1 + (k-1)/k bits of entropy, and
already there is a small but measurable discrepancy.
The system as a whole now holds (2k-1)/k bits of entropy.
The third bit drives one of the k bits to entropy 1, giving 1 +
((2k-1)/k)((k-1)/k) bits of entropy in total.

So while the system loses entropy only slowly, it does lose entropy. It's
also worth noting that this agrees with the presented analysis at 3 points:
0 input bits give 0 entropy, 1 gives 1 bit, and infinity gives k bits; but
for all useful input lengths, holding the full k bits of entropy can never
occur.
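
The model above is the recurrence e_0 = 0, e_(n+1) = 1 + e_n(k-1)/k; a quick
numeric check (k = 160 is illustrative) shows the total creeping toward, but
never reaching, k bits:

k, e = 160, 0.0
for n in range(1, 2001):
    e = 1 + e * (k - 1) / k      # one more input bit, perfectly smoothed
    if n in (1, 2, 3, 160, 2000):
        print(n, round(e, 3))    # 1.0, 1.994, 2.981, ..., always < 160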

 In the saturated regime, some entropy is necessarily lost.  This is
 documented in the lower rows of the table.  This is only a small
 percentage, but it is mandatory.  I don't consider this to be wasted
 entropy;  I consider it entropy well spent.  That is, these are
 necessary hash collisions, as opposed to unnecessary ones.

These don't even have to be hash collisions; it is possible for the hash
function to discard entropy simply through its behavior, as in my example
where entropy starts being discarded with the second bit. From a theoretical
standpoint, it is entirely possible that a function exists that discards
entropy beginning with the first bit.

  A function like xor(bit N, bit N+1) which halves the number of bits
  can do this.  While its output is whiter than its input, its output
  will not be perfectly white unless its input was.

This statement is true of all deterministic algorithms. In fact, it is one
of the points where all the presented models agree: you cannot reach a
completely entropic state until the input reaches infinite length. If the
input is purely entropic there is some grey area surrounding this, but for
the hash function to be secure the grey area disappears.

  Two scenarios where SHA is contraindicated came up:
  1. hardware
  2. interrupt handlers
 
  Sometimes a hardware RNG will feed raw data into a crypto-WEAK analogue
  of SHA, i.e. a LFSR which does the bit-reduction and mixing functions,
  but doesn't mix as well as SHA.  (LFSRs are hardware-friendly.)
  Separating the bit-reduction from the mixing can be useful for analyzing
  what's really going on.

 Well, I tend to agree that systems that separate the bit-reduction
 from the mixing are easier to analyze, in the sense that it is
 easier to find 

Re: building a true RNG (was: Quantum Computing ...)

2002-07-24 Thread Joseph Ashwood

- Original Message -
From: Eugen Leitl [EMAIL PROTECTED]
Subject: Re: building a true RNG (was: Quantum Computing ...)


 I've got a framegrabber with a 640x480 24 bit/pixel camera. It doesn't
 compress, is rather noisy, and since self-adjusting I get the maximum
 entropy at maximum darkness.

 Is there any point in compressing the video before running it through a
 cryptohash?

It will not serve a cryptographic use; however, if you can find a fast
enough truly lossless compressor, it can be very useful. Since I assume
you'll be taking a picture purely in the dark, a usable "compressor" would
be (sketched here in runnable form):

import hashlib

pool = hashlib.sha1()                  # running entropy pool

def on_pixel(pixel, x, y):
    # fold only non-black pixels (and their coordinates) into the pool
    if pixel != (0, 0, 0):
        pool.update(bytes(pixel) + bytes((x >> 8, x & 255, y >> 8, y & 255)))

def get_random():
    pool.update(b"\x01")               # domain-separate this extraction
    return pool.copy().digest()        # finalize a duplicate; pool keeps running

 How does e.g. SHA-1 fare with very sparse bitvectors?

It is believed to fare quite well, and considering that the bar for proper
entropy distillation is actually well below the bar for cryptographic
security, SHA-1 is likely to remain very good for this purpose. If you are
more concerned about speed than maximum entropy containment (or require less
than 128 bits of entropy) you might also consider MD5. If you are extremely
concerned about speed (and are willing to lose a few other desirable
behaviors), it is possible to use a block cipher, basically in CBC mode, to
accumulate entropy; this has the advantage that, under some reduced
assumptions, it is possible to compute the maximal entropy of the state at a
given time.
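
A sketch of that block-cipher accumulator (this assumes the third-party
pycryptodome package; the single-block state and 16-byte samples are
illustrative):

from Crypto.Cipher import AES   # pycryptodome

class CbcAccumulator:
    def __init__(self, key: bytes):
        # key must be 16/24/32 bytes; fixed and public in this construction
        self.cipher = AES.new(key, AES.MODE_ECB)
        self.state = bytes(16)             # one cipher block of state

    def absorb(self, block: bytes):
        # CBC-style: XOR a 16-byte sample into the state, then encrypt
        x = bytes(a ^ b for a, b in zip(self.state, block))
        self.state = self.cipher.encrypt(x)
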
Joe






Re: authentication protocols

2002-04-01 Thread Joseph Ashwood

- Original Message -
From: John Saylor [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, March 25, 2002 3:14 PM
Subject: authentication protocols

 I'd like to find an authentication protocol that fits my needs:
 1. 2 [automated] parties
 2. no trusted 3rd party intermediary ['Trent' in _Applied_Crypto_]

Depending on your exact requirements, there are many potential options.
Let's start with the most basic:

The parties securely exchange a verifier. There are many variations at this
point, but the basic version is (with details omitted for brevity):
A->B: I am A
B->A: If you are A then you can decrypt this E(K) and use K as our shared
key
A decrypts, and now both A and B share a one-time secret.
This is generally done using symmetric keys.
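
A toy sketch of that exchange (illustrative only: the wrap is a one-time pad
derived from the shared verifier; a real design needs nonces, authentication,
and a vetted library):

import os, hashlib, hmac

verifier = b"pre-shared secret"                    # exchanged securely earlier
pad = hashlib.sha256(b"wrap" + verifier).digest()  # key-encrypting pad

K = os.urandom(32)                                 # B's one-time secret
E_K = bytes(a ^ b for a, b in zip(K, pad))         # B -> A: E(K)

K_at_A = bytes(a ^ b for a, b in zip(E_K, pad))    # A decrypts
assert hmac.compare_digest(K, K_at_A)              # both now share K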

More sophisticated, and scaling much better, requires a trust model of some
kind. This does, however, get very tricky. There has to be some verification
of the key by a 3rd party (which can typically be the same as one of the
first 2 parties). However, it is possible to build something usable, as long
as on one occasion you can verify the identity of the other party. This type
of protocol works approximately as so:
B has a verified public key for A
A has a verified public key for B
A->B: S(I am A, our temporary key is E(K), the current time is T)
B verifies the signature and decrypts K; K is now the shared secret.
There are of course variations where it is E(S(K...)) instead of
S(...E(K)...).
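
A sketch of the S(...E(K)...) variant using the third-party "cryptography"
package (my choice of library and message framing, not the poster's):

import os, time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

a_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # A signs
b_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # B decrypts

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

K = os.urandom(32)                                  # temporary key
e_k = b_key.public_key().encrypt(K, oaep)           # E(K) under B's public key
msg = b"I am A|" + e_k + b"|" + str(int(time.time())).encode()
sig = a_key.sign(msg, pss, hashes.SHA256())         # S(...) under A's key

# B: verify with A's verified public key (raises on failure), then recover K
a_key.public_key().verify(sig, msg, pss, hashes.SHA256())
assert b_key.decrypt(e_k, oaep) == K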

There are many different variations on this, some patented, some
unencumbered, some that are secure down to the use of an ATM-like PIN, some
that require larger quantities of secret data, some that take 1 pass from A
to B, some that take 10 passes between them, some that have nice provable
reductions, some that don't. It all depends on what your needs are. But all
of these require some trust model, and the initial verification of the key
is the problem.

Moving over to situations where you are not forced to perform an initial key
verification requires a trusted 3rd party, which is what you requested to
avoid, so I won't introduce them.
Joe

