Cryptography-Digest Digest #262, Volume #9 Sun, 21 Mar 99 21:13:02 EST
Contents:
Re: Random Walk ("Douglas A. Gwyn")
Re: Random Walk ("Trevor Jackson, III")
Re: DIE HARD and Crypto Grade RNGs. (Coen Visser)
Re: idea (Boris Kazak)
Re: Random Walk ("Trevor Jackson, III")
Re: Random Walk (R. Knauer)
Re: Random Walk ("Trevor Jackson, III")
----------------------------------------------------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Random Walk
Date: Sun, 21 Mar 1999 22:19:38 GMT
"R. Knauer" wrote:
> On Sun, 21 Mar 1999 19:44:42 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
> wrote:
> >One cryptanalyzes traffic generated by one or more cryptosystems.
> An analysis of the "traffic" constitutes a consideration of whether
> the underlying keystream is truly random or not.
No, in practice none of the key streams is "truly random" (because
such a system is infeasible) and that is taken for granted.
> >All this discussion of "randomness" is off-topic.
> Then avail yourself of your prerogative not to read any more of these
> posts. If you want to imbibe the snake oil of statistical testing, be
> our guest. You will make cryptanalysts very happy indeed.
To the contrary, the more people are guided by your misconceptions,
the happier the intelligence community will be, as it delays the
eventual spread of good encryption in the public sector. My role
in this newsgroup is primarily to correct misconceptions (of which
there are many), since I advocate privacy protection for the public.
> For those of us who know that randomness is at the heart of
> cryptography, we shall continue the quest for an understanding of its
> true meaning even in your absence.
"It's not what you don't know that's the problem, it's what you know
for sure that ain't so."
------------------------------
Date: Sun, 21 Mar 1999 14:37:22 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Random Walk
R. Knauer wrote:
>
> On Sun, 21 Mar 1999 09:18:14 -0500, "Trevor Jackson, III"
> <[EMAIL PROTECTED]> wrote:
>
> >> Maybe the best you can do is shut the TRNG down often, disturb it in
> >> some manner - like removing the radioactive source or changing out the
> >> diode - so that it becomes a physically different device, however
> >> slightly. Or maybe you could put it in an oven and blindly change the
> >> temperature setpoint often. Whatever.
>
> >Now let's not avoid the issue by obfuscation. The phrase "blindly
> >change" actually means "randomly change". So, does the randomness of
> >the temperature set point have to be "crypto-grade" or will any old
> >"random" sequence do the trick?
>
> I never implied that the disturbance of the TRNG required random
> input. Just shutting it off periodically could cause it to shift
> slightly about some mean point centered about p = q = 1/2.
>
> But that brings up an interesting question (to me, at least). What if
> you had two RNGs and used one to seed the other. Included in the key
> would be the occurrence of the seed change. PRNG1 would have its
> key K1 and it would be used to build the key for PRNG2. After a
> sequence of length N was generated by PRNG2 (where N is smaller than
> the period inherent in PRNG2), the next key from PRNG1 would be used
> to seed PRNG2. Thus, K1 and N would be the overall key for the cipher.
> This process could be repeated endlessly. If you wanted, you could
> have multiple values for N specifying subsequent occurrences of the key
> change.
Arrangements like this have been contemplated in the past. They do
provide a bit more security than a single RNG, but the increment is so
small that it does not justify the additional complexity. For a
linear-congruential generator, knowing a particular state allows an
adversary to predict all states. Since the state can be inferred from a
small sample (I do not know how small) of the output, you have to rekey
very frequently. If you rekey very frequently, most of your security
lies in the key source RNG. After a few rekey operations its state will
be known and the output of the entire ensemble predictable.
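The LCG weakness can be sketched in a few lines. A minimal sketch, with illustrative MINSTD-style constants (not any fielded system's parameters); here each output is the full internal state, the classic naive-LCG failure:

```python
# One observed state of a linear-congruential generator determines
# every later state. Parameters are the illustrative MINSTD constants,
# not taken from any real cryptosystem.
M, A, C = 2**31 - 1, 16807, 0

def lcg(state):
    while True:
        state = (A * state + C) % M
        yield state            # naive use: the output IS the state

victim = lcg(123456789)        # secret seed
observed = next(victim)        # attacker intercepts a single output

clone = lcg(observed)          # attacker clones the generator from it
assert all(next(clone) == next(victim) for _ in range(1000))
```

Real designs emit only part of the state, but as noted above a small sample still suffices to infer it.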
For LFSR generators of width W, knowing 2W bits of contiguous output
completely determines the state of the device. Cascaded LFSRs, where
the first keys the second (etc.), share the same fate as
linear-congruential generators. You have to rekey so frequently that the
key source is effectively exposed to analysis.
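A sketch of the LFSR leak, under the simplifying assumption that the taps are known (with unknown taps, Berlekamp-Massey recovers them from 2W bits, which is the figure above; that algorithm is not shown). The tap set and seed below are illustrative:

```python
# A Fibonacci LFSR leaks its state in its own output: with known taps,
# W consecutive output bits reproduce the register exactly.
W = 16
TAPS = (16, 14, 13, 11)        # taps of a maximal-length 16-bit LFSR

def lfsr(state):
    while True:
        out = (state >> (W - 1)) & 1          # bit about to be shifted out
        fb = 0
        for t in TAPS:
            fb ^= (state >> (t - 1)) & 1      # XOR of the tapped bits
        state = ((state << 1) & (2 ** W - 1)) | fb
        yield out

victim = lfsr(0xACE1)          # secret initial fill
leak = [next(victim) for _ in range(W)]

# The W leaked bits ARE the initial state, MSB first:
recovered = int("".join(map(str, leak)), 2)
assert recovered == 0xACE1

# From here the attacker's clone tracks the victim exactly:
clone = lfsr(recovered)
for _ in range(W):
    next(clone)                # replay the leaked prefix
assert all(next(clone) == next(victim) for _ in range(100))
```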
Alternate arrangements of "decimating" (sic) generators use a primary
bitstream to gate the output of a secondary stream. It turns out that
the gating operation forces correlation of the two generators (typically
75% of the outputs will be identical), and it is not hard to deduce the
internal states of both generators.
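The forced correlation is easy to see with a Geffe-style combiner, a close cousin of the gated arrangements described above (this sketch uses ideal random streams rather than LFSRs, which is enough to show the 75% figure):

```python
import random

# Selector stream a chooses between streams b and c. The combined
# output agrees with EACH input stream about 75% of the time -- the
# lever a correlation attack uses to solve the generators separately.
random.seed(1)
n = 100_000
a = [random.getrandbits(1) for _ in range(n)]
b = [random.getrandbits(1) for _ in range(n)]
c = [random.getrandbits(1) for _ in range(n)]

out = [bi if ai else ci for ai, bi, ci in zip(a, b, c)]

agree_b = sum(o == x for o, x in zip(out, b)) / n
agree_c = sum(o == x for o, x in zip(out, c)) / n
print(round(agree_b, 2), round(agree_c, 2))   # both near 0.75
```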
Still another variant is to have one generator clock another. It also
forces correlation.
There's a wide literature on this, but most of the composition process
is based on art (hunch & intuition) whereas most of the analytic
results (if not the analysis process) are based on rigorous methods.
Clearly the analysts are beating the composers in this niche.
>
> Or perhaps there could be a two-key PRNG, where the keys are created
> by two other PRNGs, and key changes are asynchronous. How could a
> cryptanalyst ever hope to figure out such a system if the PRNGs were
> strong to begin with?
If the resulting ensemble is deterministic ("asynchronous" could mean
it is not), there's probably an analytic path to a solution with a small work
factor. If it includes some non-deterministic input, we're back to
hardware generators, and you might as well use the conditioned output of
the hardware. In fact, the 2x2 PRNG arrangement you described earlier
would simply become a conditioner for the hardware generator.
>
> In a third variation of that theme, what if you encrypted a 128-bit
> IDEA session key with a 128-bit IDEA cipher and used that key to
> encrypt the message with a 128-bit IDEA cipher. Then you would send
> the combination of the 128-bit session key and the ciphertext. Since
> the session key and its encryption are of the same length, it is
> provably secure regardless of what the session key is, and each
> cipher you send will have a new key, thus thwarting cryptanalytic
> attacks on the main cipher.
Not quite. The attacker does not have to find the outer key. He can
simply look for the inner key, and, since it is smaller than a three
word message, most messages will be beyond the unicity distance. I.e.,
you only get the security IDEA provides no matter how many key levels
you use.
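The unicity-distance point can be made concrete with Shannon's formula U = H(K)/D. The redundancy figure below (~3.2 bits per character of English) is a standard rough estimate, assumed here for illustration:

```python
# Shannon's unicity distance: the ciphertext length beyond which the
# key is, in principle, uniquely determined by the ciphertext alone.
key_entropy = 128        # bits of key (e.g., an IDEA key)
redundancy = 3.2         # assumed redundancy of English, bits/character

unicity = key_entropy / redundancy
print(unicity)           # ~40 characters -- a few dozen characters
```

Once an intercepted message exceeds roughly this length, layering more IDEA keys cannot restore information-theoretic secrecy; the attacker targets whichever single 128-bit key decrypts the traffic.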
>
> Bob Knauer
>
> "If you think health care is expensive now, wait until it's FREE!"
------------------------------
From: [EMAIL PROTECTED] (Coen Visser)
Subject: Re: DIE HARD and Crypto Grade RNGs.
Date: 21 Mar 1999 23:14:53 GMT
[EMAIL PROTECTED] (Patrick Juola) writes:
>Coen Visser <[EMAIL PROTECTED]> wrote:
>>Could you explain this. I thought that the Kolmogorov complexity
>>was semi-computable, you can approximate it only from above. So you
>The same way that we know that some progams will halt, despite the
>formal undecidability of the halting problem. If you have a string
>of a specified form -- for various different specifications, of course --
>then the specific case of that string or that *class* of strings
>may be exactly solvable.
If you have a class of strings for which you claim you can compute,
using a partial recursive function, the Kolmogorov complexity C(x), then
that class cannot be infinite. There is no partial recursive function
that coincides with C(x) over the whole of its (infinite) domain.
Proof by reference ;-): see page 121 of Li & Vitanyi, 2nd edition.
So we are left discussing nonempty finite classes of strings.
The Kolmogorov complexity of a string of length one could be
approximated, but those are not the strings of practical interest.
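The "approximation from above" can be illustrated with any lossless compressor: the compressed length is a computable upper bound on C(x), up to the (constant-size) decompressor, while no computable lower bound exists for arbitrary strings. A sketch using zlib:

```python
import os
import zlib

def c_upper_bound(s: bytes) -> int:
    # The compressed encoding is a description of s, so its length
    # (plus a constant for the decompressor) upper-bounds C(s).
    return len(zlib.compress(s, 9))

orderly = b"ab" * 5_000           # highly ordered 10,000-byte string
noisy = os.urandom(10_000)        # almost certainly incompressible

print(c_upper_bound(orderly), c_upper_bound(noisy))
```

The orderly string compresses to a few dozen bytes; the random one does not compress at all, but that only bounds its complexity from above, it never proves it high.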
>It's easy enough for me to prove that a single-function assembly
>language program with only forward branches will halt. This doesn't
>mean that I've solved the halting problem, but it does mean that *IF*
>my problem of interest can be expressed as such a program, I can
>prove something of interest.
Yes, but it is easy to see that that statement is not the same as what
we are discussing. There are an infinite number of programs with only
forward branches. And I have hopefully established that the set of
strings for which you might be able to compute C(x) can only be finite.
Regards,
Coen Visser
------------------------------
From: Boris Kazak <[EMAIL PROTECTED]>
Subject: Re: idea
Date: Sun, 21 Mar 1999 16:49:37 -0500
Reply-To: [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
>
> Don't you just divide in IDEA as the inverse?
>
> For example if your plain text p1 -> p4 = 1 -> 4, and the first 4 s-boxes are
> 1 -> 4. You would have
>
> encode:
> d1 = p1 * s1 = 1
> d2 = p2 + s2 = 2
> d3 = p3 + s3 = 6
> d4 = p4 * s4 = 16
>
> decode
> p1 = d1 / s1 = 1
> p2 = d2 - s2 = 2
> p3 = d3 - s3 = 3
> p4 = d4 / s4 = 4
>
> Tom
>
> -----------== Posted via Deja News, The Discussion Network ==----------
> http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
=========================
If you have the source code, find there the function
ideaInvertKey (.....) and figure out how it works.
In modular arithmetic the inverse for multiplication is not
division, but multiplication by a number which is the
multiplicative inverse (mod N).
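BNK's point, sketched for IDEA's multiplication modulo 2^16 + 1 (this sketch omits IDEA's convention of encoding the value 2^16 as 0):

```python
M = 2**16 + 1          # 65537 is prime, so every nonzero word is invertible

def inv(x: int) -> int:
    # Multiplicative inverse mod 65537 (Python 3.8+ modular pow).
    return pow(x, -1, M)

# Tom's fourth word: d4 = p4 * s4 = 4 * 4 = 16. Decryption multiplies
# by inv(s4) rather than "dividing":
s4, p4 = 4, 4
d4 = (p4 * s4) % M
assert (d4 * inv(s4)) % M == p4
print(inv(4))          # 49153, because 4 * 49153 = 3*65537 + 1
```

Tom's example only looks like division because his toy values divide evenly; for most words the inverse is a large number like 49153.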
Best wishes BNK
------------------------------
Date: Sun, 21 Mar 1999 20:21:44 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Random Walk
R. Knauer wrote:
>
> On Sun, 21 Mar 1999 18:26:18 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
> wrote:
>
> >If the "statistical test" you have in mind requires assignment of an
> >a priori probability, then no wonder you're confused about this issue.
>
> I am not confused in the least about this issue.
>
> I am merely reporting what the experts claim about the efficacy of
> statistical testing to characterize randomness.
Crap. I defy you to find a single expert who believes statistical
testing is "worthless" in the evaluation of random number generators.
Cite someone; ANYONE, other than your own posts.
> I would take the time
> to repeat that extensive record, some of which I have posted here, but
> that would only bore the readers who have seen it once already.
>
> For those who missed those posts, I suggest searching the Usenet
> archives for key words such as "TRNG", "crypto-grade random", etc. A
> few weeks back I posted a series of quotes from a book on quantum
> computing which synthesized the physicist's view of the futility of
> using statistical tests to characterize random numbers. The authors
> used the term "apparent randomness" to point out that statistical
> tests only expose some preconceived notion of randomness which has
> nothing to do with true randomness.
>
> BTW, when I used the term "a priori probability" above I meant the
> probability model. Statistical tests assume a particular model against
> which to test the numbers, and the validity of those tests are based
> on the limit of large numbers - the so-called frequency interpretation
> of probability.
>
> But as you should know, finite sequences are notorious for failing
> such neat little tests. Putting "confidence limits" on the statistical
> tests, based on the assumption that the tests are being applied to
> infinite sequences, is simply a fallacy of the first order. Kolmogorov
> himself made that point very clear.
>
> For example, in the (uniform one-dimensional) random walk, the
> particle is expected to visit the origin an infinite number of times,
> thereby swamping those paths that do not visit the origin. But that
> only happens in the infinite limit, where all sorts of things happen
> that never happen in finite walks. For sequences that are finite, the
> particle rarely visits the origin.
>
> As I quoted Feller the other day, in one particular random walk of
> 10,000 steps the particle visited the origin only 87 times, which
> defies any statistical measure based on the law of large numbers you
> can come up with. Put another way, if you had applied statistical
> tests to that particular random walk, you would have rejected it as
> random - yet it was a true random walk based on the Rand Tables.
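[The Feller figure quoted above is easy to check by simulation. The expected number of returns to the origin in an n-step fair walk grows only like sqrt(2n/pi), about 80 for n = 10,000; this is an illustrative simulation, not Feller's Rand-table walk:]

```python
import math
import random

# In a fair 10,000-step walk the origin is revisited only O(sqrt(n))
# times, not the n/2-ish count naive intuition suggests.
random.seed(0)

def origin_visits(steps: int) -> int:
    pos, visits = 0, 0
    for _ in range(steps):
        pos += random.choice((-1, 1))
        if pos == 0:
            visits += 1
    return visits

n = 10_000
expected = math.sqrt(2 * n / math.pi)    # ~79.8 expected returns
print(origin_visits(n), round(expected, 1))
```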
>
> >To take one of the simplest examples of a statistical test used in
> >randomness testing, Pearson's chi-squared, no a priori probabilities
> >are involved. The test is for independence of distributions.
> >(An improved but less well-known version, based entirely on
> >information theory, was developed by Kullback. The tests approach
> >each other for large samples, but Kullback's works down to the
> >smallest sample.)
>
> There is an a priori assumption regarding the nature of the
> distribution inherent in such statistical tests - that assumption is
> based on the probability model that is chosen for the process being
> tested, and gains its validity only in the infinite limit. Nobody is
> claiming that infinitely long random numbers will fail appropriate
> statistical tests. We fully expect them to because of the law of large
> numbers. But nowhere is it *certain* that finite sequences must obey
> some a priori probability model and the statistical tests that result
> from it.
>
> >Such tests are never "for" randomness, but rather "against"
> >randomness. I.e. if you formulate a hypothesis about uniformity
> >of the distribution(s), the tests can cast doubt on the hypothesis,
> >if the observed data doesn't match the hypothesis very well.
> >The degree of doubt is quantified, in the ensemble sense.
>
> That is only true in the limit of "large numbers". How "large" is
> "large"? Apparently it is exceedingly large to be meaningful, like
> infinitely large or nearly infinitely large. How large is that?
>
> How many steps in the random walk are necessary before the paths begin
> to obey the statistics expected from some postulated ad hoc
> probabilistic model - when the strong law of large numbers starts to
> kick in a significant way? If we (correctly) expect an infinite number
> of visits to the origin in the infinite limit, how many steps would
> cause that probability frequency to begin to diverge? Can you safely
> say that a random walk of 10^12 steps obeys statistical expectations,
> or would it take 10^20 steps or perhaps 10^128 steps?
>
> The problem with both probabilistic models based on the law of large
> numbers is that they can only assign a probability that the process is
> modelled correctly. As Kolmogorov pointed out, that is highly circular
> reasoning. And that probability of confidence tells you nothing about
> the behavior of the process for finite sequences. All it tells you is
> that in the infinite limit things will happen that way - that is, the
> assigned probability has validity only in the limit of infinite
> sequences.
>
> >> That snake oil might be enough to satisfy the amateurs, but it doesn't
> >> fool physicists.
>
> >Such a statement is insulting both to professional statisticians
> >and to physicists.
>
> Such a statement is nevertheless true - and maybe professional
> statisticians need to be insulted, judging from the fact that they
> didn't get the random walk axis crossing problem anywhere near correct
> when Feller conducted his survey.
>
> And what I am stating is not in the least insulting to physicists,
> since they have been saying the same thing about random processes from
> the very beginning, whether radioactive decay or Brownian motion or
> even quantum mechanics itself. As a class of professionals, physicists
> have the best grasp of the practical aspects of random processes.
>
> If you would take the time to read such seminal works as those I have
> cited earlier, you would see that what I say is true.
>
> 1) Li & Vitanyi on Kolmogorov Complexity;
>
> 2) Williams & Clearwater on quantum computers;
>
> 3) Feller on probability theory.
>
> All three works give numerous examples of how finite random numbers
> disobey statistical models.
>
> >> Statistical testing of keystreams for apparent randomness is pure 100%
> >> virgin snake oil from the first squeezings.
>
> >No matter how often you repeat "snake oil", it doesn't make it so.
>
> And no matter how many times you claim that I am incorrect, it doesn't
> make it so. Remember that I am quoting the comments of the experts - I
> am not relying on any expertise on my part. And I know I am not
> "misinterpreting" what they say because I check their references and
> look for independent corroboration from different fields.
>
> Was von Neumann full of BS when he made his famous pronouncements
> about the futility of attempting to generate random numbers
> algorithmically? If so, then what makes you think that a given random
> number can be characterized by an algorithmic procedure such as a
> statistical test?
>
> If you could characterize random numbers by an algorithmic procedure
> like statistical testing, then you could use that very algorithmic
> procedure to generate random numbers - which would violate what anyone
> with any experience in crypto knows is not the case.
>
> Let me restate this, because I believe it is of fundamental
> significance. Because you cannot generate true random numbers
> algorithmically with a classical computer, you cannot construct
> algorithmic tests to decide that any given number is truly random - or
> otherwise you could use that very algorithm to generate true random
> numbers on a classical computer.
>
> >This is a waste of time for several reasons, one being that there
> >is no "random walk" involved,
>
> You completely missed the point. But no matter.
>
> >and the other being that even for a
> >random-noise keystream generator, systematic bias can be removed
> >much more simply in a time-invariant design.
>
> Just how do you propose to remove this bias? And when you do, how do
> you know it results in crypto-grade randomness? After all, if the
> proposed method of removing bias is algorithmic then you run the risk
> that its operation will destroy any true randomness that was present
> before the operation was performed.
>
> For example, the number may indeed be free from single-bit bias after
> the operation, but what about other bit-group biases that might have
> been "distilled" by the operation? What if in the procedure to remove
> one kind of bias (e.g., 1-bit bias) you produced a higher
> concentration of k-bit biases?
>
> Bob Knauer
>
> "If you think health care is expensive now, wait until it's FREE!"
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Random Walk
Date: Mon, 22 Mar 1999 01:45:14 GMT
Reply-To: [EMAIL PROTECTED]
On Sun, 21 Mar 1999 14:37:22 -0500, "Trevor Jackson, III"
<[EMAIL PROTECTED]> wrote:
>R. Knauer wrote:
>>
>> On Sun, 21 Mar 1999 09:18:14 -0500, "Trevor Jackson, III"
>> <[EMAIL PROTECTED]> wrote:
>>
>> >> Maybe the best you can do is shut the TRNG down often, disturb it in
>> >> some manner - like removing the radioactive source or changing out the
>> >> diode - so that it becomes a physically different device, however
>> >> slightly. Or maybe you could put it in an oven and blindly change the
>> >> temperature setpoint often. Whatever.
>>
>> >Now let's not avoid the issue by obfuscation. The phrase "blindly
>> >change" actually means "randomly change". So, does the randomness of
>> >the temperature set point have to be "crypto-grade" or will any old
>> >"random" sequence do the trick?
>>
>> I never implied that the disturbance of the TRNG required random
>> input. Just shutting it off periodically could cause it to shift
>> slightly about some mean point centered about p = q = 1/2.
>>
>> But that brings up an interesting question (to me, at least). What if
>> you had two RNGs and used one to seed the other. Included in the key
>> would be the occurrence of the seed change. PRNG1 would have its
>> key K1 and it would be used to build the key for PRNG2. After a
>> sequence of length N was generated by PRNG2 (where N is smaller than
>> the period inherent in PRNG2), the next key from PRNG1 would be used
>> to seed PRNG2. Thus, K1 and N would be the overall key for the cipher.
>> This process could be repeated endlessly. If you wanted, you could
>> have multiple values for N specifying subsequent occurrences of the key
>> change.
>
>Arrangements like this have been contemplated in the past. They do
>provide a bit more security than a single RNG, but the increment is so
>small that it does not justify the additional complexity. For a
>linear-congruential generator, knowing a particular state allows an
>adversary to predict all states. Since the state can be inferred from a
>small sample (I do not know how small) of the output, you have to rekey
>very frequently. If you rekey very frequently, most of your security
>lies in the key source RNG. After a few rekey operations its state will
>be known and the output of the entire ensemble predictable.
>
>For LFSR generators of width W, knowing 2W bits of contiguous output
>completely determines the state of the device. Cascaded LFSRs, where
>the first keys the second (etc.), share the same fate as
>linear-congruential generators. You have to rekey so frequently that the
>key source is effectively exposed to analysis.
>
>Alternate arrangements of "decimating" (sic) generators use a primary
>bitstream to gate the output of a secondary stream. It turns out that
>the gating operation forces correlation of the two generators (typically
>75% of the outputs will be identical), and it is not hard to deduce the
>internal states of both generators.
>
>Still another variant is to have one generator clock another. It also
>forces correlation.
>
>There's a wide literature on this, but most of the composition process
>is based on art (hunch & intuition) whereas most of the analytic
>results (if not the analysis process) are based on rigorous methods.
>Clearly the analysts are beating the composers in this niche.
>
>
>>
>> Or perhaps there could be a two-key PRNG, where the keys are created
>> by two other PRNGs, and key changes are asynchronous. How could a
>> cryptanalyst ever hope to figure out such a system if the PRNGs were
>> strong to begin with?
>
>If the resulting ensemble is deterministic ("asynchronous" could mean
>it is not), there's probably an analytic path to a solution with a small work
>factor. If it includes some non-deterministic input, we're back to
>hardware generators, and you might as well use the conditioned output of
>the hardware. In fact, the 2x2 PRNG arrangement you described earlier
>would simply become a conditioner for the hardware generator.
>
>>
>> In a third variation of that theme, what if you encrypted a 128-bit
>> IDEA session key with a 128-bit IDEA cipher and used that key to
>> encrypt the message with a 128-bit IDEA cipher. Then you would send
>> the combination of the 128-bit session key and the ciphertext. Since
>> the session key and its encryption are of the same length, it is
>> provably secure regardless of what the session key is, and each
>> cipher you send will have a new key, thus thwarting cryptanalytic
>> attacks on the main cipher.
>
>Not quite. The attacker does not have to find the outer key. He can
>simply look for the inner key, and, since it is smaller than a three
>word message, most messages will be beyond the unicity distance. I.e.,
>you only get the security IDEA provides no matter how many key levels
>you use.
Oh well, on to the next question, eh.
It's good to have experts available to point out the frailty of
schemes proposed by Informed Laymen (tm) like me.
Bob Knauer
"If you think health care is expensive now, wait until it's FREE!"
------------------------------
Date: Sun, 21 Mar 1999 14:48:24 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Random Walk
Douglas A. Gwyn wrote:
>
> "R. Knauer" wrote:
> > On Sun, 21 Mar 1999 05:23:14 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
> > wrote:
> > >That's wrong. Statistical tests, properly applied and administered,
> > >can provide information useful in judging the likelihood of various
> > >properties of a sequence.
> > I disagree. The problem is that you must assign an a priori
> > probability that you do not know in order to make sense of the tests.
>
> If the "statistical test" you have in mind requires assignment of an
> a priori probability, then no wonder you're confused about this issue.
>
> To take one of the simplest examples of a statistical test used in
> randomness testing, Pearson's chi-squared, no a priori probabilities
> are involved. The test is for independence of distributions.
> (An improved but less well-known version, based entirely on
> information theory, was developed by Kullback. The tests approach
> each other for large samples, but Kullback's works down to the
> smallest sample.)
Could you please amplify your description of this or provide a
reference? It sounds extremely interesting.
>
> Such tests are never "for" randomness, but rather "against"
> randomness. I.e. if you formulate a hypothesis about uniformity
> of the distribution(s), the tests can cast doubt on the hypothesis,
> if the observed data doesn't match the hypothesis very well.
> The degree of doubt is quantified, in the ensemble sense.
This is the fundamental non-invertible condition of order/randomness.
If you find order you lack randomness. If you find randomness you
haven't looked hard enough (for order).
In fact the statistical tests do *not* evaluate randomness at all.
There is no criterion or "model" for randomness. There are more or less
detailed descriptions of "true" randomness, but they are descriptive, not
prescriptive. It's like obscenity: one knows it when one sees it.
In fact (to paraphrase your leading sentence above) statistical tests
evaluate *order*. And not all kinds of order at that. If a sample
contains no order detected by a particular test it does not indicate the
absence of all order.
What the snake-oil witch hunters fail to appreciate is that there is
merit in finding order. It is useful to disqualify orderly samples.
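A minimal sketch of exactly that use: a Pearson chi-squared test for single-bit bias (one degree of freedom, 5% critical value 3.84). The generators below are illustrative stand-ins, not real hardware:

```python
import random

def chi2_bits(bits):
    # Pearson chi-squared against H0: ones and zeros equally likely.
    n, ones = len(bits), sum(bits)
    zeros = n - ones
    e = n / 2
    return (ones - e) ** 2 / e + (zeros - e) ** 2 / e

random.seed(42)
n = 100_000
fair = [random.getrandbits(1) for _ in range(n)]
biased = [1 if random.random() < 0.51 else 0 for _ in range(n)]

CRITICAL = 3.84   # 5% significance, 1 degree of freedom
# A mere 1% bias lands far above the critical value at this sample
# size; the fair stream typically (though not always -- that is the
# 5%) stays below it. The test disqualifies the orderly sample; it
# proves nothing about the other one.
print(round(chi2_bits(fair), 2), round(chi2_bits(biased), 2))
```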
>
> > That snake oil might be enough to satisfy the amateurs, but it doesn't
> > fool physicists.
>
> Such a statement is insulting both to professional statisticians
> and to physicists.
Responding to such statements is a waste of time for everyone involved.
>
> > For someone like you who claims to understand such arcane concepts as
> > Hidden Markov Models, you should most certainly understand that even
> > the slightest deviation from perfect symmetry will upset the random
> > walk catastrophically. Even for p = q = 1/2, there are many anomalies
> > in the random walk that defy both naive intuition and statistical
> > models based on infinite steps. Origin crossing is but one of those
> > "bizarre aberrations" which defies the false intuition of the "law of
> > averages" - whatever that is.
>
> HMMs are hardly "arcane"; they're central to many modern applications.
>
> Properties of random walks are quite well known and have been for many
> decades. They might have been "counterintuitive" back in the 1930s,
> but then so was nearly everything.
>
> The "law of averages" is a straw man, since statisticians don't use
> any such "law" -- that's a layman's notion.
> There is a precise "law of large numbers", but it doesn't imply the
> incorrect conclusions you seem to attribute to the "law of averages".
>
> > Statistical testing of keystreams for apparent randomness is pure 100%
> > virgin snake oil from the first squeezings.
>
> No matter how often you repeat "snake oil", it doesn't make it so.
>
> > One consideration is that if there is bias (say, p > q), we know the
> > random walk will be adversely affected, which could result in a
> > cryptanalyst being able to break the ciphers. It would seem,
> > therefore, that the design of the TRNG should cause p < q for some
> > period of generation, to offset the effects of p > q. Whether this is
> > applicable and, if so, how it would be accomplished is something I am
> > not equipped to address. I can only ask questions, not answer them.
> > Maybe the best you can do is shut the TRNG down often, disturb it in
> > some manner - like removing the radioactive source or changing out the
> > diode - so that it becomes a physically different device, however
> > slightly. Or maybe you could put it in an oven and blindly change the
> > temperature setpoint often. Whatever.
>
> This is a waste of time for several reasons, one being that there
> is no "random walk" involved, and the other being that even for a
> random-noise keystream generator, systematic bias can be removed
> much more simply in a time-invariant design.
>
> > We all know about the key management problems inherent in an OTP
> > cryptosystem. But that does not stop us from addressing the security
> > issues in terms of cryptanalytic attack. The OTP serves as a
> > convenient paradigm for discussing the real issues surrounding our
> > notions of crypto-grade security.
>
> I don't think so. Practical systems expand a small number of key
> bits into parameters for some regular scheme, wherein lie any
> inherent cryptographic weaknesses of the system. Consideration
> of one-time pads completely misses this point.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************