Cryptography-Digest Digest #487, Volume #9        Sat, 1 May 99 14:13:02 EDT

Contents:
  "Forwards Compatibility" with PGP [previously PhotoID discussion] (David Crick)
  Re: Quantum Computing and Cryptography added to web site... (David A Molnar)
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)
  Re: Encrypted Phones ("PJ")
  Re: Commercial PGP for Linux? ("Scott Nelson")
  Re: Quantum Computing and Cryptography added to web site... (Medical Electronics Lab)
  Re: Random Number Generator announced by Intel (Medical Electronics Lab)
  Re: Thought question: why do public ciphers use only simple ops like shift and XOR? 
(Terry Ritter)
  Re: Thought question: why do public ciphers use only simple ops like shift and XOR? 
(Terry Ritter)

----------------------------------------------------------------------------

Date: Fri, 30 Apr 1999 19:26:41 +0100
From: David Crick <[EMAIL PROTECTED]>
Crossposted-To: alt.security.pgp,comp.security.pgp.discuss
Subject: "Forwards Compatibility" with PGP [previously PhotoID discussion]

Tom McCune wrote:
> 
> David Crick <[EMAIL PROTECTED]>
> wrote:
> 
> >Despite the ONLY difference between the photo and non-photo DH/DSS
> >keys being the lack of a photo (i.e. they were the SAME actual
> >key), the Amiga version IMPORTED the photo-key without moaning,
> >HOWEVER, it was reported as being a signing-only key.
> 
> Hi David,
> 
> I appreciate your responding with this information.  I hadn't tested
> 5.0 with a photo ID, and now have as a result of your message (using
> Windows 95) - I need to make an update to my PGP pages.

I believe there's also a "PGP Interactions" page (or something like
that) kept by someone else, dealing mainly with the RSA/DH-DSS issue.
Perhaps they need to be notified and update their pages too (I don't
know the URL off-hand).


> While 5.5.x wouldn't import the key at all, MIT 5.0 appears to import
> the key okay.  But, 5.0 will not list the key for encryption; it does
> sign with the key, but verifying produces a Bad Signature message. 
> But when I use PGP 6.0.2 (in this testing I used the CKT build - my
> prior testing was official 6.0.2) to add the photo ID key to my 5.0
> keyrings, the results are the same as when I used 6.0.2 to add the
> photo ID key to 5.5.3 (the key can then be used by 5.0 completely
> normally for encryption/decryption and signing/verification).


> So, for both 5.0 and 5.5.3, the problem with a photo ID key, is
> getting it properly on the keyrings; but once properly on the
> keyrings, it can be used normally.  It is a shame that 5.0 only
> *appears* to properly import the photo ID key - I guess that's one
> reason why 5.5.3 is preferable to 5.0.

I think there is a general issue here (and I'm CC'ing this to the
PGP-users list and sci.crypt as I know NAI folk hang out there):

It would be nice if older versions of PGP didn't fall over so
badly when presented with newer keys, whether we're talking about
RSA/DH-DSS issues, or PGP5/6 ones (and maybe PGP6/6.5.x ones with
the new release, if Twofish is enabled).

PGP 2.6.x did pretty well with DH/DSS keys - it said "you need a
newer version...". An official 2.6.4 was promised to tidy this
up even more, but it still hasn't happened (although I believe there
is an unofficial 2.6.4ui which does this to an extent).

However, with the newer PGPs, unexpected features (such as
photo IDs with PGP5, or re-enabled/expanded code in CKT with
NAI versions) seem to cause a 'fatal' crash, at least with
PGPtray in Win9x.

It seems to me this is a software engineering / design issue,
although I accept that this is tricky given all the different
versions and variants out there.

   David.

-- 
+-------------------------------------------------------------------+
| David Crick [EMAIL PROTECTED] http://members.tripod.com/~vidcad/ |
| Damon Hill WC96 Tribute: http://www.geocities.com/MotorCity/4236/ |
| M. Brundle Quotes: http://members.tripod.com/~vidcad/martin_b.htm |
| PGP Public Keys: 2048-bit RSA: 0x22D5C7A9 4096-DH/DSS: 0x87C46DE1 |
+-------------------------------------------------------------------+

------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: Quantum Computing and Cryptography added to web site...
Date: 29 Apr 1999 00:51:21 GMT

Medical Electronics Lab <[EMAIL PROTECTED]> wrote:

> Possibly, but you may have to go to lots more bits than the primary
> field.  There are "field extensions" which build up polynomials of
> polynomials and convert ECC to Z/nZ.  On normal computers this is
> slower than the standard square root method, but it might work on
> a quantum computer.  Once we have them to play with, it will be
> fun to check :-)

Hey, cool. Thanks for pointing this out. I'll go look it up.

Is a simulation of a quantum computer good enough for you?
http://www.openqubit.org/

> For every weapon there is a counter, for every counter another weapon.
> Once we have quantum computers, we'll have quantum crypto.  And
> lots of other really cool things too!
 
Oh, I know we have quantum crypto, but that's
just session key distribution, isn't it?
We can't do quantum bit commitment (unless you assume that neither Alice
nor Bob can do 'enough' coherent measurements, which is a speculation
on engineering, not math - and I have lots of faith in engineers :),
and I don't know of any way to do quantum authentication.

So would we end up having to do some kind of authentication based on
one-way functions (not susceptible to quantum 'attacks') to set up
a channel for use with quantum key distribution?

-David 


------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Sat, 01 May 1999 14:25:21 GMT
Reply-To: [EMAIL PROTECTED]

On Fri, 30 Apr 1999 19:08:06 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
wrote:

>You have apparently not yet finished your study of statistics,
>Triola or whatever, as this is a textbook situation: 

More bluster. <yawn>

You most obviously have not advanced beyond the false notions of
simplistic small sample statistical testing, especially when
misapplied to such complex matters as true randomness.

>There is a theoretical parent population with an exact, simple mathematical
>model, and a sample is drawn (20,000 sequential bits).  The
>probability (based on the model) of a test like the Monobit test
>coming out positive is readily computed directly from standard
>probability theory.

You still have it wrong. Standard probability theory is based on the
concept of an ensemble average, which permits one to equate the
frequency computed from combinatorics to the absolute probability.

Only by determining the ensemble average can you ever hope to decide
whether the process has p = 1/2 to within reasonable certainty. One
simplistic small sample statistical test is not an ensemble average.

If you got one more bit of bias than the Monobit Test allows, you
would declare the TRNG to be malfunctioning, at least for that error
level. Yet if you got one bit of bias fewer than that, you would
declare that the TRNG is healthy, at least for that error level.

So, the health of the TRNG in the context of the Monobit Test at that
level of error is crucially dependent on one bit either way. That's
simply too much to accept as a real world diagnostic.
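The "one bit either way" point can be made concrete with a sketch of
the Monobit test as given in FIPS 140-1, which is the usual reference
for the 20,000-bit sample mentioned above: the sample passes only if
the count of ones lies strictly between 9,654 and 10,346.  The sample
data below is purely illustrative.

```python
# Sketch of the FIPS 140-1 Monobit test on a 20,000-bit sample.
# The pass band 9654 < ones < 10346 is the published FIPS 140-1 bound;
# the sharp cutoff is the "one bit either way" objection above.
def monobit_passes(bits):
    """bits: a sequence of 0/1 ints, exactly 20,000 of them."""
    assert len(bits) == 20000
    ones = sum(bits)
    return 9654 < ones < 10346

# A sample exactly on the boundary fails the test ...
boundary = [1] * 10346 + [0] * (20000 - 10346)
print(monobit_passes(boundary))        # False

# ... while a sample with one bit less bias passes.
one_bit_less = [1] * 10345 + [0] * (20000 - 10345)
print(monobit_passes(one_bit_less))    # True
```

One bit of bias separates a "malfunctioning" verdict from a "healthy"
one, which is exactly the diagnostic fragility being argued here.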

>There are
>no philosophic difficulties whatsoever in this matter, and it
>has nothing to do with ensemble averages.

More pontification. <yawn>

You continue to insist on committing the cardinal sin that Feller
warns against, namely attempting to infer the ensemble average of a
sequence generation process from the time average of one sequence.

And you are giving every evidence that you are totally beyond
redemption in your heresy.

Bob Knauer

Guns don't cause violence - violence causes violence. Gratuitous violence
in the media/entertainment industry causes violence. Government sponsored
violence like the Waco Massacre causes violence. Quit blaming law abiding
citizens for the violence in America, and blame the real sources of violence:
the federal government and the media/entertainment industry.


------------------------------

From: "PJ" <[EMAIL PROTECTED]>
Subject: Re: Encrypted Phones
Date: Sat, 1 May 1999 10:48:02 -0400

Motorola makes several consumer-available models of secure telephones.  At
least one of their cordless models has triple DES encryption between handset
and base.  All of their secure telephones operate with 3DES between
telephones, AFAIK.  I've seen them sold overseas for as little as $250 US.
They operate essentially the same as the US Government models.  I can't
say for certain, but I suspect they've set up a key escrow system.

PJ

William R. Bishop <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
| Greetings,
|
|  The only encrypted phones I can find are being produced by the
| federal systems division of motorola (like we should trust them?).
|
|  Does anyone know of any digitally encrypted (super secure) phones
| that can be plugged in and used over standard phone lines?  PGP
| phone is the closest thing I can find, but I need boxes that are
| "plug and play" (don't require a MS O.S.).
|
|  thanks in advance,
|
| Bill Bishop
|
| --
| Academy Award winning Digital Imaging Product Development consultant
|    William R. Bishop - [EMAIL PROTECTED] - http://pcisys.net/~wrb/
|        Anti-SPAM measure - remove NOSPAM from reply address



------------------------------

From: "Scott Nelson" <[EMAIL PROTECTED]>
Crossposted-To: comp.security.unix
Subject: Re: Commercial PGP for Linux?
Date: Sat, 1 May 1999 11:47:05 -0500

>But I think I see what my problem with NAI is: that I've become
>web-spoiled. I just assumed that if you have a fancy "our products",
>etc, web site that there would be purchase/licensing info available
>via the web site, and it turns out that whenever you try to chase
>that path via the NAI web page you end up at McAfee and the retail
>win/mac version.  It seems that the catch with NAI is that you have
>to call them up and talk to a salesperson if you want their
>commercial package...  Since there's not a *hint* of how much it
>costs, I'm almost afraid to call.. :o)


You do need to call to order the unix version.  It wasn't all that
much ($95 I think) but it was really hard to get it from them -- it
seems that NAI was (is?) having legal problems with RSA.




------------------------------

From: Medical Electronics Lab <[EMAIL PROTECTED]>
Subject: Re: Quantum Computing and Cryptography added to web site...
Date: Thu, 29 Apr 1999 12:08:51 -0500

David Crick wrote:
> I'm currently investigating quantum computation with another
> PhD student and we are in the process of writing a paper on
> our research with quantum simulators.
> 
> I'll let you know when it's available.

Thanks!  Sounds like you guys will have a lot
of fun.  You might have to remind me of all
the QM I've forgotten to explain it all tho :-)

Patience, persistence, truth,
Dr. mike

------------------------------

From: Medical Electronics Lab <[EMAIL PROTECTED]>
Subject: Re: Random Number Generator announced by Intel
Date: Thu, 29 Apr 1999 12:21:29 -0500

John Savard wrote:

> But you'd have to disassemble or reverse-engineer the program supplied by
> Intel to get these random numbers, because they're only supplying drivers
> for specific operating systems, and in binary code form.

And nobody here has a logic analyzer with a built-in debugger, do
they?  Like it isn't possible to reverse-engineer the driver code,
is it?

Jeeze.  Sometimes I think upper management types do stupid things
just to give engineers a good challenge :-)

How about a bet pool: how long will it be before the bit stream data
from one of these things is available, with no OS?  I say 2-3
weeks from when the chips are "publicly" available.

Patience, persistence, truth,
Dr. mike

------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Thought question: why do public ciphers use only simple ops like shift 
and XOR?
Date: Sat, 01 May 1999 17:52:01 GMT


On Wed, 28 Apr 1999 18:52:50 GMT, in
<[EMAIL PROTECTED]>, in sci.crypt
[EMAIL PROTECTED] (John Savard) wrote:

>[EMAIL PROTECTED] wrote, in part:
>
>>I'm talking about Ritter's suggestion in which the protocol
>>chooses one of about 1000 ciphers at random for each message.
>>It's not enough to include strong ciphers in the pool.
>
>No, it certainly wouldn't be.
>
>While the intent was that the protocol would only do this if the
>communication was between two users each of whom was convinced that every
>one of those 1000 ciphers was satisfactory, I can now see your objection to
>his scheme as you have understood it.
>
>After all, the more ciphers being considered, the greater the chance that a
>user might be _mistaken_ about at least one of his choices.
>
>However, there are ways to correct this problem, and still gain the benefit
>that cracking a few major ciphers will not be enough to allow one to read
>messages.

I have several times proposed and justified a multifaceted "fix
package" to improve ciphering as we now know it.  Here is the
background; at the end I discuss "many cipher" system tradeoffs.  

1) First, the unfortunate and disturbing truth of cryptography is that
cryptanalysis cannot define the strength of any cipher:  Any cipher
can fail at any time.  Moreover, we cannot in any sense quantify the
probability of such failure.  I claim we would certainly
be foolish to "trust" anything with similar characteristics in any
other environment.  

Now, if academics would always report when they had tried and failed
to break a cipher, we might have more information to go on.  And if
our Opponents would just tell us when they break our ciphers, we could
develop some estimate of their relative capabilities, and of our
cipher strengths.  But since neither of these things will occur, we
simply do not have the information to extrapolate strength or evaluate
opposing capabilities.  

To minimize or "fix" the risk of catastrophic failure, we can instead
use a "cascade" of multiple (say 3) different ciphers, each with their
own key.  (It is unrealistic to claim the cascade weakens any "strong"
cipher, for if that were the case, we would attack that cipher simply
by putting it in a cascade.)  To the extent that "breaking" the
cascade requires more effort than breaking any single cipher, the
situation has improved.  In fact, the cascade of ciphers also protects
each of the components against known plaintext and defined plaintext
attacks, which are now only available against the entire cascade as a
unit.  
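The structure of such a cascade can be sketched as follows, with toy
XOR layers standing in for real ciphers (the SHA-256 keystream here is
a hypothetical illustration, not a proposed cipher): each layer gets
its own independent key, and decryption peels the layers in reverse
order.

```python
# Structural sketch of a three-cipher cascade, each layer keyed
# independently.  The "ciphers" are toy keyed XOR keystreams --
# stand-ins for illustration only, NOT real ciphers.
import hashlib

def keystream(key, n):
    """Expand a key into n pseudorandom bytes (illustrative construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def layer(data, key):
    # XOR layer: the same operation enciphers and deciphers.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def cascade_encrypt(plaintext, keys):
    for k in keys:                 # apply each component cipher in turn
        plaintext = layer(plaintext, k)
    return plaintext

def cascade_decrypt(ciphertext, keys):
    for k in reversed(keys):       # peel the layers in reverse order
        ciphertext = layer(ciphertext, k)
    return ciphertext

keys = [b"key-one", b"key-two", b"key-three"]   # one key per cipher
msg = b"attack at dawn"
ct = cascade_encrypt(msg, keys)
assert cascade_decrypt(ct, keys) == msg
```

Note that no middle layer ever has both its input and output exposed:
known plaintext is available only for the cascade as a whole.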

2) Next, in our current situation, many ciphering programs use only
one cipher, and that cipher is "wired into" the program.  In this
situation, new cryptanalytic results (of weakness) on that cipher mean
we must replace the program.  Indeed, the history of cryptography is
that changing a code or cipher is an ordeal, often implying changing
out an entire infrastructure of hardware, protocols, and software.
The cost of this is so vast as to militate against acceptance of
cipher weakness, so that organizations keep using weak ciphers.  This
is terrible.  

So another part of my "fix package" is the ability to change ciphers
dynamically from a pool of ciphers.  This allows new cryptanalytic
results to be immediately effective simply by eliminating that
particular cipher from the ones in use.  

Allowing any cipher to be replaced at any time seems to imply the
ability to dynamically negotiate a different cipher.  The negotiation
can be done in the background, essentially outside of the user's view.
Once we have this, we also have the ability to make such selections
"at random," using the normal message-key facility for producing
unknowable values.  At this point we have the ability to select or
create any of many different cipher stacks, which is similar to a
modest key expansion.  
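That random selection of a cipher stack can be sketched like so; the
pool of cipher names and the SHA-256 derivation from the message key
are hypothetical choices for illustration only.

```python
# Sketch: deriving a cipher-stack selection from a random message key,
# so the choice from the pool is itself keyed and unknowable in advance.
# Pool contents and derivation are illustrative assumptions.
import hashlib
import os

POOL = ["cipher%02d" % i for i in range(16)]   # hypothetical cipher pool

def select_stack(message_key, depth=3):
    """Pick `depth` distinct ciphers, driven by the message key.
    (Modulo bias is ignored in this sketch.)"""
    digest = hashlib.sha256(message_key).digest()
    pool = list(POOL)
    stack = []
    for i in range(depth):
        idx = digest[i] % len(pool)
        stack.append(pool.pop(idx))            # no repeats within one stack
    return stack

mk = os.urandom(32)                # random message key, as in the text
stack = select_stack(mk)           # e.g. three distinct pool members
assert len(set(stack)) == 3
```

Because the selection is a deterministic function of the message key,
both ends negotiate the same stack without exposing the choice.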

But there is another effect: the automatic and random
compartmentalization of our data.  When we have just a single cipher,
and that cipher fails, we lose everything.  But when we have different
ciphers which we use at different times, we lose only what we have
protected under that particular cipher.  By using a variety of
ciphers, we limit the consequences of single-cipher failure.  

3) One way to see cryptography is as a contest between cryptographer
and hidden cryptanalyst.  In this view, cryptographers inevitably
confront unknown Opponents with unknown resources.  One thing we do
know is that these Opponents have at least the capabilities of
academic cryptanalysts, since the academic results are published.  But
we can assume that any long-standing cryptanalytic organization will
have developments of its own, and we cannot estimate the resulting
capabilities from academic publications.  It would be dangerous (not
to say borderline incompetent) to underestimate an opposition which we
know from the start to have unknown capabilities.  

Another way to see cryptography is as a contest between our academic
cryptanalysts and the unknown Opponents:  When cipher designs are
released for open analysis, they are also released to our Opponents.
If there is something wrong in a cipher, and our academics do not find
it but our Opponents do, the Opponents win and we have a big problem.
The larger problem is that we may know nothing about this weakness,
perhaps for years.  All that time we would be happily using a "broken"
cipher.  The fact that nothing in our cipher systems prevents a
catastrophic consequence of a reasonable competition is the end of
"trust" in cryptography.  

Our problem in these confrontations is that we always lag behind.  By
concentrating on a few designs, and never fielding a cipher until it
has been widely discussed, we may avoid some weak ciphers, but we also
give our Opponents plenty of time to plan their approach.  We thus
limit ourselves to competing directly with those of superior
capabilities and in ways which are least effective for us.  By using
well-known ciphers, we set ourselves up for real problems if our
academics miss something our Opponents do not.  

By using a continually growing body of new ciphers, we place our
Opponents -- no matter who they may be -- on the defensive for the
first time.  We make them scramble to identify our ciphers, acquire
them, analyze them, break them, and build software and/or hardware to
make this routine.  This is a vastly expensive process; it is far
cheaper for us to design -- and sell -- new ciphers than it is for our
Opponents to break them.  And if we really think there is at least one
"strong" cipher, we can use that in our cipher stack, making any
weakness in the other components generally irrelevant.  If academics
can be persuaded to look at the new designs and if they get any
cryptanalytic results, new ciphers with problems can be retired at
will.  

Yet another consequence of this fix package is that the more ciphers
we use as a matter of course, the smaller the proportion of data
exposed by any particular cryptanalytic success.  Of course, if our
cryptanalysts do find a problem, we will not use that cipher.  We only
use ciphers where no problems are known.  And in that sense an old
cipher is no better than a new one:  Any cipher can fail at any time.


The conventional theory of cryptography has us assume that our
Opponents know the cipher we use, but not the key.  But conventional
cipher systems only *have* one cipher.  When new cipher systems select
ciphers by key, our Opponents do not know which ciphers are used, and
we certainly do not need to assume that the Opponents know something
which only the key-holder knows.  

If we use any fixed set of ciphers, we can assume that our Opponents
*will* know that set.  So the appropriate cryptographic response is
simply *to* *not* *have* such a set.  By using a dynamically expanding
set of ciphers, we force our Opponents to scramble to know what ciphers
are in use, to say nothing of having to break those ciphers, or
identify the ones randomly key-selected in a particular stack.  

So, in addition to the general use of "cascades" of multiple ciphers,
and the ability to dynamically negotiate ciphers (which implies the
opportunity to select cascade-sets at random, if we wish), I have
proposed that we use any cipher we want to use if it is known to play
well in cascades and not known to be weak.  If we want to be
conservative and use only a few well-known ciphers, then fine.  If we
want to insist that some particular cipher be used in the stack,
that's fine.  But if we want to take the contest to our Opponents and
make them pay to keep up, we can continuously expand our set of
ciphers, and select among them "at random," essentially
message-by-message.  

4) Does having many ciphers improve strength?  Let us suppose that all
new ciphers are "weak," say at the 40-bit level.  We assume these
ciphers are weak in the context of the known design and known
plaintext.  We can assume the Opponents know the design, and we can
assume known plaintext *for* *the* *whole* *stack* (or cascade).  But
there *is* *no* known plaintext available *for* *any* *one* *cipher*
in that stack.  Accordingly, even known plaintext attacks are not
sufficient to identify a cipher in the stack unless the other two
ciphers have already been identified and broken.  So even a stack of
weak ciphers is stronger than any weak cipher by itself.   

5) Does having many ciphers improve strength absent cascade ciphering?
Let us suppose all new ciphers have 40-bit strength under ciphertext
only attacks.  Surely, with any "small" set of ciphers, the Opponents
only need try each one they know how to break.  Of course, with a
continuing flow of new ciphers, they must "keep up" with new designs,
or be unable to read the traffic.  By simply using new ciphers we
require the Opponents to identify, acquire, analyze, break and design
software and/or hardware to attack each new cipher.  Many of these
ciphers may be similar, but many will not.  In any case, these are
vastly increased demands compared to concentrating all efforts on a
static small group of well-known designs.  

6) The unstated assumption in questioning the advantage of new ciphers
is, of course, that the old ciphers are "strong," while the new ones
are "weak."  That could be the case.  But it could just as easily be
the case that the old ciphers are in fact weak, and we just don't know
that yet.  Since we can't assume old cipher strength, I like our
chances with a continuing flow of new ciphers which at least demands a
continuing investment from our Opponents.  In this way we force our
Opponents into a massive cryptanalytic effort, rather than allowing
them to simply repeatedly exploit the old cipher everyone uses.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Thought question: why do public ciphers use only simple ops like shift 
and XOR?
Date: Sat, 01 May 1999 17:53:52 GMT


On Wed, 28 Apr 1999 22:48:10 -0600, in
<[EMAIL PROTECTED]>, in sci.crypt
[EMAIL PROTECTED] (Jerry Coffin) wrote:

>In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
>
>[ ... ] 
>
>> >5.  One of the facts of ciphering life is that we cannot prove the
>> >strength of any cipher.  Even NIST review and group-cryptanalysis does
>> >not give us proven strength in a cipher, so any cipher we select might
>> >be already broken, and we would not know.  We cannot change this, but
>> >we can greatly improve our odds as a user, by multi-ciphering under
>> >different ciphers.  Doing this means an Opponent must break *all* of
>> >those ciphers -- not just one -- to expose our data.  I like the idea
>> >of having three layers of different cipher, each with its own key.  
>> 
>> Note those last 5 words.
>
>Yes -- unfortunately, this presentation did NOT include those words -- 
>quite the contrary, it specifically said that all the forms of 
>encryption involved would use the same key.  It commented that this 
>was less than ideal, but I think it's MUCH worse than that.  It 
>basically breaks the entire idea completely.

Here is the quote:

>(I don't want to discuss here how the individual ciphers' keys are
>    defined - I know it is not optimal but as a first approximation
>    let us suppose all individual keys are identical.)

So using the same key in all ciphers is worse than he expected.  Fine,
now we know how to fix it:  We'll use different keys, as I originally
stated.  We'll also qualify ciphers for multi-ciphering duty, and
perhaps not let the same cipher occur twice in the same stack.  

Given all that, multi-ciphering (superencrypting or "cascade"
ciphering) with dynamically selected ciphers is dramatically more
secure than using the same old cipher over and over again:  

1) Abstractly, we must assume that any single cipher can be attacked
by known plaintext or defined plaintext.  But in a stack of ciphers,
no single cipher exposes both its input and output simultaneously.  So
individual ciphers in the stack are simply not exposed to known
plaintext or defined plaintext attacks.  

2) The ability to generate exponentially large numbers of
fundamentally different ciphers (by multi-ciphering and choosing the
individual component ciphers at random) means that we can partition
our data into independently-protected channels.  Failure of any
particular cipher stack thus exposes only the information on that
channel.

There has been some talk that compartmentalization is worthless, with
the argument that *any* exposure of data will reveal the essence of
all secrets.  But compartmentalization is hardly unknown: there *are*
organizations which *do* keep secrets, and compartmentalization *is*
almost universal in such organizations.  We can even suspect that some
of the organizations which do keep secrets hope that *few* will follow
their example.  

The argument against automatic random compartmentalization as a
feature essentially parallels an argument against cryptography itself:
If somebody blabs secrets, or tosses printed secrets in the trash,
there is very little cryptography can do to maintain secrecy.  So why
do we have cryptography?  Because not everyone does those things.  And
if someone places the same information in all messages, a single break
will reveal that information.  But that does not make automatic random
compartmentalization worthless for those who use it properly.  


>At that point, the real question is whether you're making anywhere 
>close to ideal use of the amount of secret information being used.  
>Right now, a lot of security is broken based on things like people 
>keeping passwords written down on slips of paper near their computer 
>or terminal -- if you triple the amount of information they have to 
>memorize, you might make the encryption better, but the overall 
>security far worse because even more people would either write things 
>down, or use easy to remember (and easy to guess) passwords and such.

No, no, no.  The keys actually used to cipher data are message keys.
We don't *remember* message keys:  They are random and big.  

We have been talking about fixing serious problems which exist in the
conventional understanding of real cipher systems.  These problems
exist in real systems using serious ciphers; they are not because we
are using toys with simple operations and remembered keys.  

Serious problems include: 

1) the inability of cryptanalysis to guarantee any significant
strength for any cipher,

2) the potential catastrophic loss of all data past and future due to
cipher failure at any time, 

3) our inability -- while using a single cipher -- to terminate any
success our Opponents have had in breaking that cipher, and

4) our inability to rationally predict the capabilities of our
Opponents, or provide any basis for estimating the probability of
cipher failure.


One of the fixes is to multi-cipher our data as a common expected
practice.  If even one of the ciphers in the stack is "strong," we
also expect the result to be strong.  In fact, we expect a cipher
stack to be *stronger* than each cipher operated alone, since a stack
of ciphers does not expose known plaintext or defined plaintext for
any of the individual ciphers in the stack.  

Another of the fixes is to dynamically select ciphers and so reduce
the effect of any already-broken combinations, by partitioning our
data into distinct compartments, each with independent protection.

There is little need in modern systems to worry about the "ideal use"
of secret information.  We have disk storage, we can encipher it, end
of story.  

In real systems, presumably, users remember a single textual key
phrase, which protects an enciphered file of channel keys (and other
data).  Each of the channel keys is used to transport random message
keys to a particular recipient.  The message keys finally encipher
data.  Message keys, of course, are only used once.  
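That hierarchy can be sketched as follows; the particular derivations
(PBKDF2, SHA-256) and all names are illustrative assumptions, not
anything specified in the post.

```python
# Minimal sketch of the key hierarchy described above:
# passphrase -> master key -> per-recipient channel keys -> message keys.
# Derivation functions are illustrative stand-ins.
import hashlib
import os

# 1) One remembered textual key phrase unlocks the stored key file.
master = hashlib.pbkdf2_hmac("sha256", b"remembered key phrase",
                             b"per-user salt", 100_000)

# 2) A long-term channel key per recipient, kept in the enciphered file.
def channel_key(master_key, recipient):
    return hashlib.sha256(master_key + recipient.encode()).digest()

# 3) A fresh random message key actually enciphers the data; it is
#    transported under the channel key and used only once.
message_key = os.urandom(32)

ck = channel_key(master, "alice")
assert channel_key(master, "alice") == ck      # stable per recipient
assert channel_key(master, "bob") != ck        # distinct channels
```

The user memorizes only the key phrase; everything below it in the
hierarchy is random, big, and stored enciphered.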

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
