Cryptography-Digest Digest #263, Volume #9       Sun, 21 Mar 99 22:13:03 EST

Contents:
  Re: Random Walk ("Trevor Jackson, III")
  Re: Partial key exposure ("Anthony Lineham")
  Re: Partial key exposure ("Anthony Lineham")
  Re: Random Walk (R. Knauer)
  Scramdisk (zom)
  Re: Random Walk (R. Knauer)
  Re: Random Walk (R. Knauer)
  Re: Random Walk (R. Knauer)
  Re: Random Walk (R. Knauer)
  Re: PGP Protocol question (Shawn Willden)
  Re: IDEA algorithm ([EMAIL PROTECTED])
  Re: Requesting Opinions, Clarifications and Comments on Schneier's Statements 
("Trevor Jackson, III")

----------------------------------------------------------------------------

Date: Sun, 21 Mar 1999 20:37:28 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Random Walk

R. Knauer wrote:
> 
> On Sun, 21 Mar 1999 18:26:18 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
> wrote:
> 
> >If the "statistical test" you have in mind requires assignment of an
> >a priori probability, then no wonder you're confused about this issue.
> 
> I am not confused in the least about this issue.
> 
> I am merely reporting what the experts claim about the efficacy of
> statistical testing to characterize randomness. I would take the time
> to repeat that extensive record, some of which I have posted here, but
> that would only bore the readers who have seen it once already.
> 
> For those who missed those posts, I suggest searching the Usenet
> archives for key words such as "TRNG", "crypto-grade random", etc. A
> few weeks back I posted a series of quotes from a book on quantum
> computing which synthesized the physicist's view of the futility of
> using statistical tests to characterize random numbers. The authors
> used the term "apparent randomness" to point out that statistical
> tests only expose some preconceived notion of randomness which has
> nothing to do with true randomness.
> 
> BTW, when I used the term "a priori probability" above I meant the
> probability model. Statistical tests assume a particular model against
> which to test the numbers, and the validity of those tests are based
> on the limit of large numbers - the so-called frequency interpretation
> of probability.
> 
> But as you should know, finite sequences are notorious for failing
> such neat little tests. Putting "confidence limits" on the statistical
> tests, based on the assumption that the tests are being applied to
> infinite sequences, is simply a fallacy of the first order. Kolmogorov
> himself made that point very clear.
> 
> For example, in the (uniform one-dimensional) random walk, the
> particle is expected to visit the origin an infinite number of times,
> thereby swamping those paths that do not visit the origin. But that
> only happens in the infinite limit, where all sorts of things happen
> that never happen in finite walks. For sequences that are finite, the
> particle rarely visits the origin.
> 
> As I quoted Feller the other day, in one particular random walk of
> 10,000 steps the particle visited the origin only 87 times, which
> defies any statistical measure based on the law of large numbers you
> can come up with. Put another way, if you had applied statistical
> tests to that particular random walk, you would have rejected it as
> random - yet it was a true random walk based on the Rand Tables.
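Feller's anecdote is easy to explore with a short simulation - a minimal sketch, assuming a fair +/-1 step per tick (the exact visit count varies from run to run):

```python
import random

def origin_visits(steps, seed=None):
    """Count returns to the origin of a 1-D symmetric random walk."""
    rng = random.Random(seed)
    pos, visits = 0, 0
    for _ in range(steps):
        pos += 1 if rng.random() < 0.5 else -1
        if pos == 0:
            visits += 1
    return visits

# One 10,000-step walk typically returns to the origin only a few
# dozen times; the expectation is roughly sqrt(2n/pi) =~ 80 for
# n = 10,000 steps, so Feller's count of 87 is unremarkable.
print(origin_visits(10_000))
```

Individual finite walks scatter widely around that expectation, which is the point being argued either way.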

The Rand Tables are not "true random".  Far from it.  They were heavily
conditioned during and after the generation process.

> 
> >To take one of the simplest examples of a statistical test used in
> >randomness testing, Pearson's chi-squared, no a priori probabilities
> >are involved.  The test is for independence of distributions.
> >(An improved but less well-known version, based entirely on
> >information theory, was developed by Kullback.  The tests approach
> >each other for large samples, but Kullback's works down to the
> >smallest sample.)
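As an aside, Pearson's statistic itself is a one-liner; here is a minimal stdlib-only sketch of a goodness-of-fit check against a uniform expectation (the 3.84 cutoff is the standard 5% critical value for 1 degree of freedom):

```python
import random

def chi_squared(observed, expected):
    """Pearson's chi-squared statistic for observed vs. expected counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Test 10,000 coin flips against the uniform expectation [5000, 5000].
# With 1 degree of freedom, a statistic above ~3.84 casts doubt on
# uniformity at the conventional 5% level; it never *proves* randomness.
rng = random.Random(42)
bits = [rng.randrange(2) for _ in range(10_000)]
stat = chi_squared([bits.count(0), bits.count(1)], [5000, 5000])
print(stat)
```

Note this is the "against randomness" usage described above: a large statistic is evidence of non-uniformity, while a small one merely fails to find it.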
> 
> There is an a priori assumption regarding the nature of the
> distribution inherent in such statistical tests - that assumption is
> based on the probability model that is chosen for the process being
> tested, and gains its validity only in the infinite limit. Nobody is
> claiming that infinitely long random numbers will fail appropriate
> statistical tests. We fully expect them to pass because of the law of large
> numbers. But nowhere is it *certain* that finite sequences must obey
> some a priori probability model and the statistical tests that result
> from it.
> 
> >Such tests are never "for" randomness, but rather "against"
> >randomness.  I.e. if you formulate a hypothesis about uniformity
> >of the distribution(s), the tests can cast doubt on the hypothesis,
> >if the observed data doesn't match the hypothesis very well.
> >The degree of doubt is quantified, in the ensemble sense.
> 
> That is only true in the limit of "large numbers". How "large" is
> "large"? Apparently it is exceedingly large to be meaningful, like
> infinitely large or nearly infinitely large. How large is that?

No.  The size of the sample does not invalidate the confidence of the
result.  In fact, if you inspect the confidence calculation process you
will find that the sample size and the population size are both used to
quantify the "representativeness" of the sample with respect to the
population.

This has nothing to do with "large" numbers, convergence toward the
mean, or "the law of averages".

> 
> How many steps in the random walk are necessary before the paths begin
> to exhibit the statistics expected from some postulated ad hoc
> probabilistic model - when does the strong law of large numbers start to
> kick in in a significant way? If we (correctly) expect an infinite number
> of visits to the origin in the infinite limit, how many steps would
> cause that probability frequency to begin to diverge? Can you safely
> say that a random walk of 10^12 steps obeys statistical expectations,
> or would it take 10^20 steps or perhaps 10^128 steps?

For both sample sizes you will get a confidence quantity.  Learn how
those are calculated and you will not be uncomfortable with the results.

> 
> The problem with probabilistic models based on the law of large
> numbers is that they can only assign a probability that the process is
> modelled correctly. As Kolmogorov pointed out, that is highly circular
> reasoning. And that probability of confidence tells you nothing about
> the behavior of the process for finite sequences. All it tells you is
> that in the infinite limit things will happen that way - that is, the
> assigned probability has validity only in the limit of infinite
> sequences.
> 
> >> That snake oil might be enough to satisfy the amateurs, but it doesn't
> >> fool physicists.
> 
> >Such a statement is insulting both to professional statisticians
> >and to physicists.
> 
> Such a statement is nevertheless true - and maybe professional
> statisticians need to be insulted, judging from the fact that they
> didn't get the random walk axis crossing problem anywhere near correct
> when Feller conducted his survey.
> 
> And what I am stating is not in the least insulting to physicists,
> since they have been saying the same thing about random processes from
> the very beginning, whether radioactive decay or Brownian motion or
> even quantum mechanics itself. As a class of professionals, physicists
> have the best grasp of the practical aspects of random processes.
> 
> If you would take the time to read such seminal works as those I have
> cited earlier, you would see that what I say is true.
> 
> 1) Li & Vitanyi on Kolmogorov Complexity;
> 
> 2) Williams & Clearwater on quantum computers;
> 
> 3) Feller on probability theory.
> 
> All three works give numerous examples of how finite random numbers
> disobey statistical models.
> 
> >> Statistical testing of keystreams for apparent randomness is pure 100%
> >> virgin snake oil from the first squeezings.
> 
> >No matter how often you repeat "snake oil", it doesn't make it so.
> 
> And no matter how many times you claim that I am incorrect, it doesn't
> make it so. Remember that I am quoting the comments of the experts - I
> am not relying on any expertise on my part. And I know I am not
> "misinterpreting" what they say because I check their references and
> look for independent corroboration from different fields.
> 
> Was von Neumann full of BS when he made his famous pronouncements
> about the futility of attempting to generate random numbers
> algorithmically? If so, then what makes you think that a given random
> number can be characterized by an algorithmic procedure such as a
> statistical test?
> 
> If you could characterize random numbers by an algorithmic procedure
> like statistical testing, then you could use that very algorithmic
> procedure to generate random numbers - which anyone with any
> experience in crypto knows is not possible.
> 
> Let me restate this, because I believe it is of fundamental
> significance. Because you cannot generate true random numbers
> algorithmically with a classical computer, you cannot construct
> algorithmic tests to decide that any given number is truly random - or
> otherwise you could use that very algorithm to generate true random
> numbers on a classical computer.

Yes.  So what?  The question is not how to test for randomness.  There
is no such test.  The question is how to test for NON-randomness.  That
is what statistical tests do.

> 
> >This is a waste of time for several reasons, one being that there
> >is no "random walk" involved,
> 
> You completely missed the point. But no matter.
> 
> >and the other being that even for a
> >random-noise keystream generator, systematic bias can be removed
> >much more simply in a time-invariant design.
> 
> Just how do you propose to remove this bias? And when you do, how do
> you know it results in crypto-grade randomness? After all, if the
> proposed method of removing bias is algorithmic then you run the risk
> that its operation will destroy any true randomness that was present
> before the operation was performed.

The proof depends on the initial assumptions. (1) that the samples have
a particular bias, which is known.  (2) the samples are otherwise
independent.  If you cancel out the bias by appropriate recoding (and
concomitant loss of "volume") you will be left with a sequence of
independent, unbiased elements.

If you do not know the bias, or know all of them, it gets harder.  But
there is no reason to assume that processing the data *imposes* a bias
upon it.
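The recoding step can be illustrated with von Neumann's classic trick, which cancels an unknown constant single-bit bias provided the bits are independent - a sketch (the 0.8 bias below is an arbitrary example):

```python
import random

def von_neumann_debias(bits):
    """Recode independent biased bits into unbiased ones.

    For any bias p, the pairs (0,1) and (1,0) are equally likely,
    so emitting the first bit of each unequal pair and discarding
    equal pairs yields unbiased output - at a cost in "volume".
    """
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

rng = random.Random(0)
biased = [1 if rng.random() < 0.8 else 0 for _ in range(100_000)]
out = von_neumann_debias(biased)
print(round(sum(out) / len(out), 2))  # close to 0.50
```

This handles only the single-bit case under an independence assumption; it says nothing by itself about k-bit patterns.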

> 
> For example, the number may indeed be free from single-bit bias after
> the operation, but what about other bit-group biases that might have
> been "distilled" by the operation? What if in the procedure to remove
> one kind of bias (e.g., 1-bit bias) you produced a higher
> concentration of k-bit biases?

You look for them.  If you find them, you remove them.  If you don't
find them with a suitable tool (statistical test), then they are not
present.

> 
> Bob Knauer
> 
> "If you think health care is expensive now, wait until it's FREE!"

------------------------------

From: "Anthony Lineham" <[EMAIL PROTECTED]>
Subject: Re: Partial key exposure
Date: Mon, 22 Mar 1999 13:49:12 +1200

David A Molnar wrote in message <7cq43t$f6n$[EMAIL PROTECTED]>...
>Anthony Lineham <[EMAIL PROTECTED]> wrote:
>
>> this is really an algorithm dependent situation so think of it in terms of a
>> DES-like algorithm.
>
>out of curiosity, is the reason you're revealing six bits of each 16-bit
>division of the key tied up with something like the input to an S-box?


No, not really.
I'm working on an algorithm that has some key dependent features.
My concern is that should the usage of one of these features be detected
through some analysis of the cipher text, 6 bits worth of information about
the key bits that selected the feature would be revealed. It should be noted
that at this stage I know of no way in which one of these features could be
detected (I may be corrected as time progresses) and am just trying to
assess the potential impact should the unthinkable happen.


>
>I guess I should really try to find the number of contiguous bits that
>cause bits to come through the expansion permutation in the same S-box
>input with fairly high probability, but a small enough number to be
>at least a bit plausible (six bits of key is maybe pushing it. no one
>       will give me sixteen...)
>
>So this is actually making me very nervous about (2),....

Sorry, I couldn't follow why this makes you nervous.

>for a brute force attack, (1) certainly means you have that much less
>space to search - maybe the difference between feasible and not for
>marginal key lengths and certain attackers. 2**56 --> 2**50, for example,
>against someone with a Beowulf cluster.
>

What is a Beowulf cluster?

Thanks for your very detailed response,

Anthony Lineham




------------------------------

From: "Anthony Lineham" <[EMAIL PROTECTED]>
Subject: Re: Partial key exposure
Date: Mon, 22 Mar 1999 13:53:35 +1200


>The two situations are equivalent.  In both cases, the size of the keyspace is
>reduced by a factor of 2**6, and in both cases, it is trivial to enumerate the
>reduced keyspace.  In both cases, the expected time to perform a brute force
>search after the breach is a factor of 2**6 less than that expected to brute
>force the original cipher.
>
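The factor is concrete enough to compute - a toy sketch (2**56 stands in for a DES-sized keyspace):

```python
full_keyspace = 2 ** 56
revealed_bits = 6
reduced = full_keyspace >> revealed_bits  # each known bit halves the space
print(reduced == 2 ** 50, full_keyspace // reduced)  # -> True 64
```

So the attacker's expected brute-force work drops by a factor of 64 regardless of which six bits leak.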

I think I agree with you.

Thanks for your response,

Anthony



------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Random Walk
Date: Mon, 22 Mar 1999 02:05:27 GMT
Reply-To: [EMAIL PROTECTED]

On Sun, 21 Mar 1999 22:19:38 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
wrote:

>To the contrary, the more people are guided by your misconceptions,
>the happier the intelligence community will be, as it delays the
>eventual spread of good encryption in the public sector.  My role
>in this newsgroup is primarily to correct misconceptions (of which
>there are many), since I advocate privacy protection for the public.

Oh, you're back, eh. I thought you had decided that this topic was not
worthy of your further participation. This business of randomness
kinda grabs you by the Ol' Gonads, does it?

You must prove your contentions, or otherwise they are just so much
snake oil. Thus far all we have seen from you is an appeal to
statistical bullcrap. When are you going to show that people like
Kolmogorov, Li & Vitanyi, Feller, Williams and Clearwater, et al
(including Chaitin), are wrong?

>"It's not what you don't know that's the problem, it's what you know
>for sure that ain't so."

I remind you of Chaitin's profound notion of "The Unknowable". He has
demonstrated that even arithmetic is not what it purports to be.

True Randomness is at the heart of reality - and I personally love it,
because only with True Randomness can everything that could happen
ever be possible.

I leave the Gentle Reader (tm) with this metaphysical comment:

"If you want to build a robust universe, one that will never go wrong,
then you don't want to build it like a clock, for the smallest bit of
grit will cause it to go awry. However, if things at the base are
utterly random, nothing can make them more disordered. Complete
randomness at the heart of things is the most stable situation
imaginable - a divinely clever way to build a universe."
-- Heinz Pagels

Bob Knauer

"If you think health care is expensive now, wait until it's FREE!"


------------------------------

From: [EMAIL PROTECTED] (zom)
Subject: Scramdisk
Date: Mon, 22 Mar 1999 02:13:38 GMT

Excuse me but I am not new to computers.  Just new to crypt.  That is
the first thing I did and then I did a find files trying to find
something that was taking up all that free space but found nothing.
The scandisk found nothing.

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Random Walk
Date: Mon, 22 Mar 1999 02:17:12 GMT
Reply-To: [EMAIL PROTECTED]

On Sun, 21 Mar 1999 14:48:24 -0500, "Trevor Jackson, III"
<[EMAIL PROTECTED]> wrote:

>What the snake-oil witch hunters fail to appreciate is there is merit in
>finding order.  It is useful to disqualify orderly samples.

There is no distinction between order and non-order in cryptography.
As long as a True Random Number Generator (TRNG) meets the fundamental
specification for crypto-grade randomness, all finite sequences are
valid for keystreams, even those which exhibit great regularity.

If I use a TRNG to generate a keystream, and there is a long run of
all 0s, you have no way to decide with reasonable certainty that the
ciphertext is the same as the plaintext.
 
>> > That snake oil might be enough to satisfy the amateurs, but it doesn't
>> > fool physicists.
 
>> Such a statement is insulting both to professional statisticians
>> and to physicists.

>Responding to such statements is a waste of time for everyone involved.

Then do not respond to them.

Just understand that my making such statements is not tempered by
either your response or your non-response. I really do not give a shit
one way or the other.

Einstein was forced to publish his concepts on special relativity in
an obscure journal because no one would receive his work favorably. I
suppose he should not have published his work because someone
like you held the parochial attitude espoused above. Remember that
mathematicians once "proved" that bumble bees could not fly.

Let's face it - nobody gives a shit whether you understand something
or not. And thank God for that. If twits ruled the earth we would
still be living in caves.

Bob Knauer

"If you think health care is expensive now, wait until it's FREE!"


------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Random Walk
Date: Mon, 22 Mar 1999 02:31:53 GMT
Reply-To: [EMAIL PROTECTED]

On Sun, 21 Mar 1999 22:08:49 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
wrote:

>> For those who missed those posts, I suggest searching the Usenet
>> archives for key words such as "TRNG", "crypto-grade random", etc.

>Note that those terms are not used by professional statisticians
>nor practicing cryptologists, and for good reason.

Note that nobody gives a shit whether these so-called "experts" use
these terms or not. If we had to rely on so-called "experts" we would
still be living in caves.


>Kolmogorov, generally speaking, understood statistics quite well.
>Therefore I think you must be taking some point of his out of context
>and misinterpreting it.

Quit using presuppositions - show us where I am taking his comments
out of context.

>*All* statistical tests are of necessity
>applied to finite data sets.  In the hands of a skilled statistician,

The same "skilled statisticians" who flunked Feller's survey.

Hmm....

>such tests *do* work, and work very well.

That's why the statisticians whom Feller surveyed all flunked Random
Walk 101. I am having a hard time believing in these "experts".

>Actual cryptanalysts use
>them continually in the process of cracking real cryptosystems.

Yeah, stupid cryptosystems based on snake oil. I can easily understand
that if someone is stupid enough to use statistical tests to decide
strength of a cryptosystem that cryptanalysts could exploit such
weakness.

For example, let's say that I use a statistical test that screens for
any bias, and use it to reject all sequences that show any bias. In so
doing I have handed the cryptanalyst everything he needs to figure out
how I am selecting my keystream. DUH!

>*No* statistical test is based on the frequency interpretation of
>probability; perhaps your *idea* of what a test means might be
>related to such an interpretation.

Statistical tests are based on the frequency interpretation of
probability. You need to read the references I have provided.

>Every finite number is "rare" compared to an infinitude.

Then I suppose you advocate only using infinite sequences for
keystreams. Unfortunately that is not possible. Therefore the magical
properties of infinite sequences do not apply to finite crypto-grade
random keystreams.

>As I pointed out in a posting a few weeks back, Pólya long ago
>proved that the probability of an infinitely long random walk
>returning to the origin is 1 for both the 1- and 2-dimensional
>square lattices.  (It's less than 1 for higher dimensions.)

I found that proof in Feller. So what? You have not made your point.

>It might be surprising to some people that it returned to the origin
>as *often* as that.

But not to the statisticians that Feller surveyed. Are you saying that
"some people" know more than the "experts"? Apparently so.

>The only way to have a valid expectation is to
>compute it.  What is the expected number of zero-crossings for
>10,000 steps (when P(L)=P(R)=0.5)?  Unfortunately this is a tricky
>combinatorial problem, but I suspect it has been performed before by
>somebody, who will now chime in with the answer.

Oh, come on - read Feller, fer chrissakes. It's all there.
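For what it's worth, the combinatorial problem posed above has a short answer: the expectation is the sum of the return probabilities P(S_2k = 0) = C(2k,k)/4^k, which a simple recurrence evaluates - a sketch:

```python
def expected_returns(steps):
    """Expected number of returns to the origin for a symmetric
    1-D random walk of `steps` steps (steps even).

    E = sum over k of P(S_2k = 0), where P(S_2k = 0) = C(2k,k)/4^k
    follows the recurrence p_k = p_{k-1} * (2k - 1) / (2k).
    """
    p, total = 1.0, 0.0
    for k in range(1, steps // 2 + 1):
        p *= (2 * k - 1) / (2 * k)
        total += p
    return total

print(round(expected_returns(10_000)))  # about 79 for 10,000 steps
```

An expectation near 79 makes Feller's observed count of 87 entirely ordinary.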

<plonk>

Bob Knauer

"If you think health care is expensive now, wait until it's FREE!"


------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Random Walk
Date: Mon, 22 Mar 1999 02:35:21 GMT
Reply-To: [EMAIL PROTECTED]

On Sun, 21 Mar 1999 20:21:44 -0500, "Trevor Jackson, III"
<[EMAIL PROTECTED]> wrote:

>Crap.  I defy you to find a single expert who believes statistical
>testing is "worthless" in the evaluation of random number generators.

>Cite someone; ANYONE, other than your own posts.

Where the Hell have you been? I have quoted several mainstream sources
repeatedly. Yet you ignore them, even when I give explicit citations
to exact pages in their works.

Don't give me this bullcrap that I am making this up - I know better
than to do that. I have based all my comments, without a single
exception, on published comments of acknowledged experts.

Just look at the archives.

Bob Knauer

"If you think health care is expensive now, wait until it's FREE!"


------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Random Walk
Date: Mon, 22 Mar 1999 02:50:37 GMT
Reply-To: [EMAIL PROTECTED]

On Sun, 21 Mar 1999 20:37:28 -0500, "Trevor Jackson, III"
<[EMAIL PROTECTED]> wrote:

>The Rand Tables are not "true random".  Far from it.  They were heavily
>conditioned during and after the generation process.

Then take that up with Feller.

>> That is only true in the limit of "large numbers". How "large" is
>> "large"? Apparently it is exceedingly large to be meaningful, like
>> infinitely large or nearly infinitely large. How large is that?

>No.  The size of the sample does not invalidate the confidence of the
>result.  In fact, if you inspect the confidence calculation process you
>will find that the sample size and the population size are both used to
>quantify the "representativeness" of the sample with respect to the
>population.
>
>This has nothing to do with "large" numbers, convergence toward the
>mean, or "the law of averages".

Nevertheless the confidence levels are based on probability models
that are only valid in the limit of infinitely large numbers.

Have you ever bothered to read Kolmogorov's critique of this issue?
 
>For both sample sizes you will get a confidence quantity.

Which is only valid in the limit of infinitely large numbers.

>Learn how those are calculated and you will not be uncomfortable with the results.

You learn how those are calculated and you will not be comfortable
with the results.

>> Let me restate this, because I believe it is of fundamental
>> significance. Because you cannot generate true random numbers
>> algorithmically with a classical computer, you cannot construct
>> algorithmic tests to decide that any given number is truly random - or
>> otherwise you could use that very algorithm to generate true random
>> numbers on a classical computer.

>Yes.  So what?  The question is not how to test for randomness.  There
>is no such test.  The question is how to test for NON-randomness.  That
>is what statistical tests do.

And I am now convinced that such tests are as invalid as tests for
randomness. I gave my reasons in an earlier post. Do you care to
comment on my comments?

>The proof depends on the initial assumptions. (1) that the samples have
>a particular bias, which is known.  (2) the samples are otherwise
>independent.  If you cancel out the bias by appropriate recoding (and
>concomitant loss of "volume") you will be left with a sequence of
>independent, unbiased elements.

1-bit bias is just one possible pattern. The sequence 101010...10 is
free from 1-bit bias. So what does that tell you?
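The 101010...10 example is easy to make concrete: it passes a single-bit count test perfectly while every digram screams structure - a toy sketch:

```python
from collections import Counter

bits = "10" * 5000           # 101010...10, ten thousand bits
print(Counter(bits))         # single bits perfectly balanced: 5000 each
digrams = [bits[i:i + 2] for i in range(0, len(bits), 2)]
print(Counter(digrams))      # but every non-overlapping pair is "10"
```

Passing a 1-bit frequency test says nothing about higher-order patterns.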

>If you do not know the bias, or know all of them, it gets harder.  But
>there is no reason to assume that processing the data *imposes* a bias
>upon it.

We went over this earlier - I never claimed that antiskewing "imposes"
a bias. I questioned whether algorithmic antiskewing concentrates
existing k-bit biases that it does not remove.

>> For example, the number may indeed be free from single-bit bias after
>> the operation, but what about other bit-group biases that might have
>> been "distilled" by the operation? What if in the procedure to remove
>> one kind of bias (e.g., 1-bit bias) you produced a higher
>> concentration of k-bit biases?

>You look for them.  If you find them, you remove them.  If you don't
>find them with a suitable tool (statistical test), then they are not
>present.

Where do you end the search?

BTW, the keystream "000...0" is a valid random number since it is
possible to generate it with a TRNG. Would you want to discard it
because it failed to satisfy your prejudiced notions of what
constitutes crypto-grade randomness?

Who ever claimed that lack of regularity was a condition for
crypto-grade randomness? A large fraction of finite sequences have
some kind of regularity, as the random walk exposes. If you discard
those sequences, you have just handed the cryptanalyst a considerable
advantage.

I know, the loss of entropy in discarding 1/2 the possible sequences
is only one bit. That just shows you that the entropy concept as
applied to crypto is suspect - likely for the same reason other
concepts based on infinite sequences do not have validity for finite
sequences.

I remind you once again that you cannot claim that unicorns do not
exist just because the observation of only one unicorn in a herd of
infinitely many horses is statistically insignificant.

Bob Knauer

"If you think health care is expensive now, wait until it's FREE!"


------------------------------

Date: Sun, 21 Mar 1999 11:30:57 -0700
From: Shawn Willden <[EMAIL PROTECTED]>
Subject: Re: PGP Protocol question

[EMAIL PROTECTED] wrote:

> If you are using the PGP SDK you could code it to use the same
> session key for each message but you must *not* have multiple public key
> encrypted session keys. This is the same issue that needs to be addressed
> with BCC's and automated encryption.

Could you explain what you mean by "multiple public key encrypted session
keys"?  Do you mean multiple session keys that are encrypted under different
public keys?  If so, is there some danger in this?  I can see that the same
message would be sent out encrypted under multiple session keys, but I don't
see how that could help an attacker unless IDEA is broken.  Or did you mean
something else entirely?

Shawn.




------------------------------

From: [EMAIL PROTECTED]
Subject: Re: IDEA algorithm
Date: Sun, 21 Mar 1999 18:28:04 GMT


>       Multiplicative inverse modulo 65537 can be calculated by Euclid's
> extended algorithm. See the IDEA code section in AC2.

Thanks I will check it out.

>       In what format? What block chaining mode?

CBC, ECB, PCBC and CFB modes.  When you init a cipher, you can pick the mode.

Tom

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

Date: Sun, 21 Mar 1999 20:11:27 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Requesting Opinions, Clarifications and Comments on Schneier's Statements

Herman Rubin wrote:
> 
> In article <7d3b18$rul$[EMAIL PROTECTED]>,
> David A Molnar  <[EMAIL PROTECTED]> wrote:
> >sb5309 <[EMAIL PROTECTED]> wrote:
> >> "A one-time pad might be suitable for a few short messages, but it will
> >> never work for a 1.54 Mbps communication channel".
> 
> >> Questions :
> 
> >> (b) From what I can understand from Mr. Scheier passages that, where
> >> there is heavy traffic, one-time pad is not practical because it is very
> >> expensice to generate and deliver message keys (and that is why one-time
> >> pad is unsuitable for long messages). How does this point relate to
> >> "1.54 Mbps communication channel" ?
> 
> >It becomes much clearer when you find out that "Mbps" stands for
> >"million bits per second." Then we're talking about a stream of
> >1.54 million bits per second every second. By my standards 1.54
> >million bits is a long message. Long enough to make generating
> >a pad for it highly annoying.
> 
> >The HotBits random number generator at http://www.fourmilab.ch/hotbits/
> >generates 30 bytes (240 bits) of padstuff per second. Not nearly enough
> >to keep up with the link, much less save enough to account for transportation
> >time.
> 
> A radioactive generator can run at such speeds; using the parity of
> the number of counts of a type 1 counter with an arrival rate of
> .28 per dead time (yes, that fast; see my technical report) for
> about 12 dead times per bit generated, can come at least close.
> But there still is the problem of getting that many bits across.
> The OTP does not have to get there at the same time as the message.
> 
> But how do you get it there at a comparable rate?

In the mid-80's Microsoft (tm) speculated that the maximum data rate
then available was represented by a 747 full of CDs.  I suppose this is
an application for that technique.  Since then we've gotten DVD disks
and DLT tapes.  30 medium DLT tapes, at 20 GB each, could last for one
month at T-1 speeds (1.544 Mbps).
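That figure checks out roughly - a back-of-the-envelope sketch (decimal gigabytes and a saturated link assumed):

```python
# How long would 30 DLT tapes (20 GB each) of one-time pad last
# against a T-1 link running flat out?
pad_bytes = 30 * 20e9            # 600 GB of pad, decimal GB assumed
t1_bytes_per_sec = 1.544e6 / 8   # 1.544 Mbps expressed in bytes/s
days = pad_bytes / t1_bytes_per_sec / 86_400
print(round(days))  # about 36 days, i.e. a bit over a month
```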

> Quantum authentication
> is about the only way it could be done, and it does have the advantage
> that the sender knows which bits are known to the recipient.
> --
> This address is for information only.  I do not claim that these views
> are those of the Statistics Department or of Purdue University.
> Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
> [EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
