Cryptography-Digest Digest #161, Volume #9 Sun, 28 Feb 99 14:13:03 EST
Contents:
Re: Define Randomness (R. Knauer)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Define Randomness
Date: Sun, 28 Feb 1999 18:36:10 GMT
Reply-To: [EMAIL PROTECTED]
On Sun, 28 Feb 1999 07:07:21 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote:
>But I *am* claiming that inductive reasoning is often
>false, unlike deductive reasoning. We thus must be very, very careful
>when we use induction, if we want correct results.
Often? I would grant you "sometimes", but not "often". If it were
often, science would not progress at the rate we see it progressing.
>It is *often* FALSE to draw a conclusion about the whole based upon
>some parts of the whole. This is of course what we do in statistics,
>and may be seen as the reason for speaking in probabilities about the
>results. But probabilities under 1.0 are not PROOF that every other
>possibility has been ruled out. But that is what we want: We are
>trying to rule out ANY POSSIBILITY AT ALL of weakness. At least that
>is what PROVABLY "unbreakable" means to me.
Statistical reasoning is supposedly different from reasoning based on
pac-learning. In the latter you are trying to confirm a hypothesis by
presenting it with more data in the hopes that you will converge on a
result that is trustworthy.
The pac-learning method called Occam's Razor is an example of how this
process works. It is believed that the correct result is the one that
is simplest, as long as it takes full account of the data. Therefore,
given the data, one begins to remove parts of the hypothesis to see if
they are extraneous.
As Sherlock Holmes said: "You write down all the possibilities and
begin eliminating them. What remains, however unbelievable it may
seem, is the answer to your problem."
I realize that you are claiming that one can never know all the
possibilities, so that method won't work reliably. But I would point
out that the strength of a formal axiomatic system, the kind used to
perform apparently bullet-proof deduction, also suffers from the same
kind of fatal flaw, as Godel-Turing-Chaitin pointed out. Chaitin, for
example, has shown that even in the formal axiomatic system of number
theory, simple arithmetic included, there are undecidable problems.
So the problem of which you speak, the limitation of knowledge, is
inherent in any reasoning process. I believe induction is much more
powerful than deduction because with it you can step outside the
system - and that is required to have an understanding of reality.
>Suppose we have a house with 20 rooms: After looking 2 rooms, can we
>possibly hope "induce" the contents of the other 18? Of course not.
>In this situation there is no reason for any particular room to
>reflect any other. So conclusions which are based on a small subset
>have no reason to be true. Induction should not be used here.
That is a straw man argument. There was no a priori reason to believe
that there would be any regularity to the rooms. But even then, there
is some regularity. After inspecting a few carefully selected rooms
one could say with a high degree of certainty that all rooms have a
floor, a ceiling, more than three walls, and one or more doors.
But it is true that one cannot say such things with absolute
certainty. But then that is also true of formal axiomatic systems in
arithmetic, as Chaitin has shown.
>Suppose someone in a 20-room house committed murder, and we want to
>find the weapon: Suppose we search 19 rooms without finding it; shall
>we now say that we have "proven" to a 95% probability that the weapon
>is not in the house?
Supposedly you can. That is Laplace's rule, that the current
probability is simply related to the past instances of some event. For
example, if you take the observation that the Sun has risen every day
for the past 10,000 years, then the probability that it will not rise
tomorrow is about 1/3,650,000 - and therefore the probability that it
will rise tomorrow is 1-1/3,650,000. (See Li & Vitanyi, op. cit).
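To make the arithmetic concrete, here is a rough Python sketch of the
rule of succession (the 10,000-year figure is the same illustrative one
as above):

def rule_of_succession(successes, trials):
    # Laplace: P(success on the next trial) = (s + 1) / (n + 2)
    return (successes + 1) / (trials + 2)

n = 10_000 * 365                       # about 10,000 years of sunrises
p_rise = rule_of_succession(n, n)      # probability the Sun rises tomorrow
p_fail = 1.0 - p_rise                  # probability it does not
print(round(1 / p_fail))               # roughly 3,650,000, i.e. ~1/3,650,000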
>I suppose, but of what use is such a "proof"?
>What we really want to know is whether the weapon is there,
>*anywhere*. So as long as even *one* room remains unsearched, there
>is no PROOF that the weapon is not there.
And even if you search all 20 rooms and turn up empty-handed, that does
not mean that you can conclude with absolute certainty that the weapon is
not in the house. You need some other reason to draw that conclusion -
some reason to believe that a weapon cannot escape detection.
>In cryptography we take even unimaginably small risks *very*
>seriously.
You might, and I might, but I suspect that the vast majority of users
of crypto do not have a clue as to the kinds of risks there are. I
consider the development of quantum computation to be a very serious
risk for cryptosystems that are based on classically intractable
problems, like factoring the product of two large prime integers. Yet
there have been posters here on sci.crypt who are convinced that such
systems are completely invulnerable to any kind of attack based on the
impossibility of breaking them classically.
>Even tiny probabilities of weakness are more than we are
>willing to accept in normal ciphers.
There is a point where tiny weakness is no longer a problem on a
practical basis. Also keep in mind that the ability to break a cipher
by exploiting such weaknesses is dependent crucially on the amount of
data at hand. If you are only planning to send one very small message
one time then the requirements on the cryptosystem are greatly reduced
compared to someone who plans on sending huge volumes of data
continually.
>The acceptable probability of
>finding a break is not 1 in 20 (which is more than 1 in 2**5), but
>instead under 1 in 2**56. (Any key can be guessed, but it is clear
>that 1 good key hidden in 2**56 bad ones is not good enough.) We are
>not going to get that kind of probability by statistical induction.
I do not understand the reasoning behind your last statement.
Induction is not applied to brute forcing a cipher. Induction is the
process whereby you attempt to arrive at a valid conclusion based on
the data at hand. In particular, it is used to infer the distribution
from the data based on hypotheses about the distribution.
The amazing thing about Bayesian induction is that one can start with
the uniform distribution and converge on another distribution that is
much closer to the real distribution even if it is in no way
related to the uniform distribution. IOW you can seed the Bayesian
inference scheme with the uniform distribution and still find the pac
("probabily approximately correct") distribution in a small number of
steps. Again, I refer you to Li & Vitanyi for an extensive discussion
of this and other inferential schemes that lead to pac-learning.
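To illustrate what I mean by seeding with the uniform distribution, here
is a toy Python sketch (the true bias of 0.7 and the grid of candidate
biases are my own made-up illustration, not anything from Li & Vitanyi):

import random

random.seed(1)
true_bias = 0.7                            # unknown to the inference scheme
grid = [i / 100 for i in range(101)]       # candidate biases 0.00 .. 1.00
posterior = [1.0 / len(grid)] * len(grid)  # start from the uniform prior

for _ in range(200):                       # observe 200 bits from the source
    bit = 1 if random.random() < true_bias else 0
    posterior = [w * (p if bit else 1 - p) for w, p in zip(posterior, grid)]
    total = sum(posterior)
    posterior = [w / total for w in posterior]

print(grid[posterior.index(max(posterior))])   # posterior mode, close to 0.7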
>>You are a dogmatist, namely only 100% dogmatic truth will suffice.
>Sorry, name calling is not going to work here.
I did not mean to call you a name. I meant to point out that the
position you are taking is that of a dogmatist. You want what you
believe is "certain truth". Godel-Turing-Chaitin have shown that there
is no such truth that we can ever know, anyway. There are indeed
limits to knowability even in rock-solid formal axiomatic systems like
number theory, even arithmetic.
>Now that you mention it, I *would* like to see Michelson-Morley run at
>astronomic scale in 3D.
Interestingly, the value for the speed of light was different back 50
years ago, when it was measured by several scientists. Then all of a
sudden, it took on a slightly new value outside the error bars, which
we use today.
>>Best stay away from the empirical sciences and even certain parts of
>>mathematics. Chaitin's Omega would be devastating to you, just like
>>Godel's theorem was devastating to Herman Weyl.
>I would say that if you are going to accept hand-waving as PROOF, best
>stay away from cryptography. It will be devastating to you.
I have absolutely no intentions of accepting any hand-waving in
crypto. In fact it has been I who has been one of the more vocal
critics of the hand-waving called statistical tests. In fact I am a
complete heretic when it comes to applying statistics to the output of
a RNG, since I consider them to be completely indeterminate. The best
that can be said of them is that they serve as a diagnostic tool. But
to claim that a RNG is secure to any level of confidence solely on the
basis of statistical tests of the output is hand-waving at its best.
Yet look at how many people still believe in that snake oil, and
defend it with real passion even in face of arguments which show up
the contradictions inherent in their statements.
The truth is anything someone wants to believe, and that is just as
true in crypto as in any other field. So thanks for the warning, but you
are preaching to the choir if it is directed at me.
>You can "maintain" what you like, but let's see the proof: *How* will
>you measure a constructed TRNG to conclude that it has sufficient
>bit-independence, lack of correlations, biases, and any other possible
>weakness?
I have elaborated my program several times before. Briefly, it consists
of two major but independent phases:
Phase One: Design a TRNG around a quantum process that is known to be
truly random, such as radioactive decay. That process can be *PROVEN*
to be truly random both theoretically and experimentally. Audit your
design, both the random subsystem and the classical subsystems.
Subject it to extensive peer review (e.g., publish it in a refereed
journal.) Have built-in diagnostics that test the performance of each
subsystem, including the quantum subsystem.
IOW, build a TRNG like a piece of scientific equipment. Use the same
procedures scientists use to certify the performance of their
equipment.
Phase Two: Use the TRNG to create test ciphers that are designed to
expose any "information leakage" which could be used to assist an
attacker in breaking the ciphers. Use pac-learning techniques like
Bayesian inference to attack the ciphers to assign a measure of merit
to the TRNG based on how much information is actually leaked.
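A very crude first cut at what such a measure of merit might look like
(the all-zero worst-case plaintext, the XOR stream cipher, and the two
leakage figures below are just my illustrative assumptions):

import secrets

def keystream(n_bits):
    # Placeholder for bits drawn from the TRNG under test.
    return [secrets.randbits(1) for _ in range(n_bits)]

def leakage_figures(ks, plaintext):
    cipher = [k ^ p for k, p in zip(ks, plaintext)]
    n = len(cipher)
    bias = abs(sum(cipher) / n - 0.5)
    agree = sum(a == b for a, b in zip(cipher, cipher[1:])) / (n - 1)
    return bias, abs(agree - 0.5)          # both near zero for a good source

worst_case = [0] * 100_000                 # maximally regular test message
print(leakage_figures(keystream(len(worst_case)), worst_case))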
>Just how will you measure and bound on every possible
>"weakness," and how will you prove that such bounds are sufficient?
If weaknesses can be put into a hierarchy of importance, then you could
quantify their effect in an ordered manner. The sufficiency of the
bounds will depend crucially on the intended usage of the TRNG. Small
message volume places less demand than heavy message volume. As long
as the TRNG results in an insufficient amount of information leaking
for a given message volume, the cipher is provably secure on a
practical basis.
>Then how will you measure a real device and show that it meets those
>bounds, and will continue to do so?
You will have to run the diagnostics and run the cipher breaking tests
periodically. BTW, one poster suggested to leave large gaps in the
time between runs to shift the environment of the TRNG slightly,
making the attacker have to start all over. For example, as time goes
on the radioisotope will decay, and that will shift the system a small
amount. Or you could remove the radioisotope and later put it back in
the detector chamber in a slightly different position and orientation.
That will tend to randomize any inherent bias and correlation in your
TRNG.
>The problem is not just collecting data to make a statistical
>determination. The problem is that we can never run all the tests.
>That is not an appropriate base for an inductive conclusion.
The prime assumption in my proposed scheme is that one can effectively
categorize the methods of attack in a hierarchy of decreasing
significance for a given message volume. IOW, for a given level of
message volume, each kind of bias and correlation has a measured
effect on the vulnerability of the ciphers, and that vulnerability can
be measured by actually trying to break the ciphers. If you find that
a certain amount of information leaks, but that it is not sufficient
to break the system for that level of message volume, then you have a
provably secure system on a practical basis - namely the given
message volume.
If you cannot categorize the methods of attack, then the whole thing
is a crap shoot, because the attacker has no systematic methodology to
attack. But I do not see that to be the case. The writeups on
correlation attacks on your web site demonstrate that attacks are
quite systematic in nature, and not just lucky crap shoots.
>Inductive reasoning must be kept under very tight control if it is to
>be correct. Reasoning from a few cases to the whole is often invalid.
That is why message volume is crucially important.
>There has been quite a lot of talk about PROOF here. So how is it
>that I find myself unconvinced? This seems odd, because the whole
>idea of PROOF is that it be *compelling*. But I have yet to see even
>a concise set of statements for what is known, what must be shown, and
>how that will be done.
>Why don't you try to set down this supposed PROOF? Write it out.
>Note the assumptions.
>Show me.
The best that I can do as an Informed Layman (tm) is what I have
presented thus far. I am not a practitioner in science any more, so I
am unqualified to go beyond my conjectures.
I can tell you, based on my experiences with theoretical and
experimental physics in the area of nuclear gamma ray resonance, that
the statement about the radioactive TRNG is correct to the best of my
recollection. I am even more confident that Phase One above can be
carried out successfully since the state of the art has advanced
considerably since I last was involved. So I claim that Phase One has
been proven. Now it is up to some math genius (which I never was, nor
ever will be) to come up with an analysis for Phase Two.
But this whole program is going to be moot once quantum crypto becomes
widespread. Quantum crypto does not suffer from the kinds of problems
we have in classical crypto. It is provably secure right out of the
box, so there is no need to go through all the complex rigmarole that I
allude to above. Quantum computers are known to generate true random
numbers, and that ends that.
>>The essence of all this is that if only an insignificant amount of
>>information leaks from a cipher, then it cannot be broken as a
>>practical matter. It does not have to be 100% unbreakable to be
>>unbreakable.
>
>You mean "almost unbreakable" is the same as "unbreakable"?
I mean "almost unbreakable" is the same as "practically unbreakable".
If the level of information leakage for a given level of message
volume is insufficient to break the cipher, then it is provably
secure on a practical basis. That is, the attacker cannot break it no
matter what he tries to do.
>I suppose
>that you, like the Red Queen, intend to be the master of your words
>instead of the other way 'round. Alas, I have a single definition for
>"unbreakable," and it is "cannot be broken."
There is such a thing as "practically unbreakable". All engineering
design conforms to the notion of "probably approximately correct".
>In this context, "broken" means an exposure beyond the security
>claimed for the cipher. So if you don't claim much security, I
>suppose it would never be "broken." But that was not my understanding
>of your goals.
>If you want to claim PROVEN, ABSOLUTE security as in "the only secure
>cipher," then "unbreakable" becomes very clear and understandable. To
>be "unbreakable" it must *indeed* be "100% unbreakable." Anything
>other than this you need to qualify and quantify.
I fundamentally disagree with that statement. It is too dogmatic to be
correct.
I can have a piece of material like a rod made of a certain kind of
steel and I can certify that it is unbreakable in a given environment
even though I cannot prove it in the ideal extreme.
Just because I cannot prove it in the ideal extreme does not mean that
my proof is any less certain. The same can be said of the security of
the OTP system with a TRNG. If I can show that the system is not
breakable up to a certain level of stress, then it is provably secure
to that level, even though it is vulnerable beyond that level of
stress.
>If you had some way to measure that at most 5% of a message might be
>revealed, at least you would have a PROOF of *something*.
>But of what? Can we ever call a 5% exposure acceptable? Is this the
>same vaunted "unbreakable" cipher that we started with? In what way
>can we say "unbreakable" yet accept that it may be broken? What does
>this sort of "unbreakable" mean?
I take the term "breakable" to mean that the attacker knows a
sufficient amount about your key to be able to feel quite certain that
he can decrypt your ciphers. If that means he has to decipher the
entire message with 100% certainty, then you are not going to have to
worry if he can only decipher 50% of the message.
But it does not work that way with the OTP system. Every possible
intelligible message is capable of being deciphered from any given
cipher. Therefore for there to be any hope of ever deciphering an OTP
cipher at all, there must be some kind of systematic attack that
produces pac-learning about the weaknesses in the TRNG. The days of
Yardley and his human intuition are over - it's all computers and
algorithms running pac-learning schemes, or others which are known to
uncover patterns.
You tell me how successful an attacker needs to be before he can claim
that he has uncovered your messages reliably enough to stop the
decryption process, and slap the message on his boss's desk, risking
his reputation on that event.
>I suppose it may be much easier to try to re-define "unbreakable" than
>to accept that the "unbreakable" cipher is out of reach. But I think
>I'll just use the old definition.
Your definition asks for something that is impossible. That's why
engineers build bridges and not mathematicians. If mathematicians ever
tried to do anything practical, nothing would ever happen in physical
reality.
That's why God is an engineer and not a mathematician, at least with
regard to the act of creation.
>Some practical RNG's do exist with internal states far in excess of
>most message sizes: I have built them. For these, we can show that
>every possible sequence (of some message length) *can* be produced by
>the RNG. It is only when we use "small" keys that this property is
>not retained, and that is because there can be at most as many
>sequences as there are keys. Normally we expect that the key hashing
>be of such a nature that no useful bias occurs in selecting the subset
>of possible sequences.
Bias is just half of the picture, albeit a very important half.
Correlation is the other half, and is also of considerable importance.
>So the advantage of the TRNG is not really that it can produce every
>possible sequence.
I fundamentally disagree. If I know that your TRNG is not producing
certain sequences I can use that fact to attack your ciphers. Several
posters here on sci.crypt have given examples of how you introduce a
serious vulnerability when that is the case.
>(For example, we could define a PRNG construction
>which would grow until had more internal state than any message bound,
>although we then have to key it.)
The strength of a RNG is only as big as the key.
> Instead, the advantage of the TRNG
>is that there *is* no internal state, so there is nothing there *to*
>know. There is nothing to expose. There is nothing to find. There
>is no short key.
That is true of an ideal TRNG. In the real world there is no such
thing as an ideal TRNG. But that does not mean that one cannot
construct a TRNG that is provably secure for a given level of message
traffic. It all depends on how much information leakage the attacker
needs to make the decision that he has broken your ciphers provably,
or at least enough to cause him to act on what he thinks is in your
messages.
If you, the attacker, find two intelligible messages in my cipher:
1) ATTACK AT 23:00
2) DO NOT ATTACK
and you have no way to decide which one is the actual message, then
you are not gonna send out the nukes in anticipation.
On the other hand, if you have a *sufficient* reason to know that
message #1 is the intended message, then you will prepare accordingly.
What is it that causes you to know that #1 is the intended message?
Whatever it is, you can use that to test your ciphers built by your
TRNG and see if they can withstand such an attack. If they can
withstand such an attack, then they are provably secure.
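To see why the attacker cannot simply decide, consider this toy
illustration (my own example): for any intercepted OTP ciphertext there
is a key consistent with either message.

import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

msg1 = b"ATTACK AT 23:00"
msg2 = b"DO NOT ATTACK  "                  # padded to the same length
key1 = secrets.token_bytes(len(msg1))
cipher = xor_bytes(msg1, key1)             # what the attacker intercepts

key2 = xor_bytes(cipher, msg2)             # an equally valid-looking key
assert xor_bytes(cipher, key2) == msg2
print("the intercept is consistent with both messages")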
>Does not being "100% provably secure" mean that we can't PROVE it to
>100% levels, or that it is known to be a few percent insecure?
>Normally, "a little bit insecure" is like "a little bit pregnant."
>And how is this measured?
See above.
One thing that should be apparent here is that the security of an OTP
cryptosystem is based on what causes the attacker to make a binary
decision that he has finally broken your ciphers. Up to some point in
his attack, he does not have sufficient reason to make that decision,
and at some point he does. You tell me what that criterion is.
>The problem with shipping keystream on a CD every month is that we
>have the most critical information for an entire month in *two* places
>(assuming the cipher is for communications). Now we have to guard
>*both* places, *all* the time. We have to pay the guards very well
>and threaten them with death, because the true strength of the OTP
>depends upon not just the data, but also the guards.
That is a serious flaw of the OTP system, but it is not an analytical
vulnerability. IOW, the fact that the keys can be stolen is not the
result of a faulty TRNG.
>>One poster has repeatedly suggested shutting the TRNG down
>>deliberately to reset its environment and average out any slight
>>imbalances in its behavior.
>That would be Dr. Rubin.
I believe he is.
>I agree that combining multiple sequences (throwing information away,
>as we do in any sort of post-process hash) can hide the patterns we
>fear may exist. (I am less sure that we must do this day-by-day or
>shut down first.) I see XOR as the minimum possible CRC.
Can you prove that? IOW, can you be 100% certain in all instances that
such post-processing is secure?
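For what it is worth, the standard result on XOR combining can be
illustrated numerically (assuming the two streams are independent, which
is exactly the assumption one cannot take for granted in a real device):

import random

random.seed(2)

def biased_bits(p_one, n):
    return [1 if random.random() < p_one else 0 for _ in range(n)]

n = 1_000_000
a = biased_bits(0.55, n)                   # bias e = 0.05
b = biased_bits(0.55, n)
combined = [x ^ y for x, y in zip(a, b)]
print(abs(sum(combined) / n - 0.5))        # close to 2*e*e = 0.005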
>If we had some quantitative measure of the quality of the TRNG
>sequence, I think we could make some statements about the quality of
>resulting hashed sequences. But I don't think we have that measure,
>or at least not one which we can use in practice.
Test your proposed method by creating test ciphers designed to leak the
most information possible. Subject these test ciphers to the kinds of
systematic attack done with computers, like in research laboratories.
See if they leak a sufficient amount of information that will cause a
decision to be made that your system has been broken.
>Fine. Define a measure of overall "strength" which applies to any
>possible attack. Quantify the amount of strength we need. Conduct
>quantitative experiments on equipment to show how much strength that
>equipment has. Show the equipment is within bounds. Then prove those
>results will apply to future operation. No problem.
>Show me.
See my comments on this above.
>I think you want the ultimate cryptographic goal -- The Unbreakable
>Cipher.
That would be a Quantum Cryptosystem.
>But you don't want to discuss actually using this cipher,
>because the very thing which you would rely upon to make it
>"unbreakable" is also what makes it risky and impractical for most use
>-- which is to say, "breakable."
If you, the attacker, cannot find enough information leaking from my
ciphers to cause you to decide that you have successfully broken my
system, then it is provably secure up to that point. If I undertake
to do the same thing you do except in an environment that stresses my
system more than my intended use, and it is still not breakable, then
I have proven that it is unbreakable on a practical basis.
>>I have, however, as have others here, suggested using other methods
>>than a hardware Trng to generate keystreams, which circumvent the
>>problems with orthodox OTP ciphers.
>
>Unfortunately, many of the problems with orthodox OTP ciphers are not
>how we generate the running key, but the fact that we have a key which
>is as large as the message, and which must be both transmitted to the
>other end and held in complete secrecy.
I am addressing only analytical attacks. Protocol attacks are another
issue.
But there is one possible way to circumvent key distribution attacks,
which I and others have mentioned more than once. One of them involves
using text streams which are heavily processed to distill the
randomness to the level of a truly random keystream. The exact text
stream selected for any given message could be decided based on a
memorized code book, or decided based on information that is timely
and widely published, like the least significant digits of certain
market closing values.
The challenge here is to demonstrate that the resulting keystream is
satisfactory to prevent the ciphers from leaking too much information.
The only way I know how one might approach such a task is contained in
Phase Two above, namely employing the very same methods used to break
stream ciphers.
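As a hypothetical sketch of the distillation step (SHA-256 is simply my
stand-in for the "heavy processing"; whether the result is truly random
in the sense we have been arguing about is exactly the open question):

import hashlib

def distilled_keystream(text, chunk=64):
    # Hash fixed-size chunks of the agreed text into keystream bytes.
    for i in range(0, len(text) - chunk + 1, chunk):
        yield from hashlib.sha256(text[i:i + chunk]).digest()

agreed_text = b"...the day's agreed public text, chosen via the code book..." * 40
keystream = bytes(distilled_keystream(agreed_text))
print(len(keystream), "keystream bytes distilled")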
>I think a hash of text can be made to have full "entropy" in practice,
>if we can select from a wide enough body of text. (But if the text is
>known, and the hash is known, the result will be known, which is
>precisely what we wish to avoid.)
I question if entropy is an adequate measure of true randomness for
crypto. It is no more than another statistical measure of *apparent*
randomness, namely another pseudo-randomness test.
>But this is hardly the same as defining security limits, measuring
>closeness to those limits, and thus claiming security on the basis of
>measured values. We do not have those measures. We do not have that
>proof.
We do have systematic methods to break stream ciphers, and they can
serve as the measures of vulnerability.
>>I am proposing that one can characterize a TRNG by using inductive
>>methods to attempt to break test ciphers produced by that TRNG.
>And I am proposing that you try it.
>Show me.
I wish I could. Maybe in my next reincarnation. :-)
>One *cannot* in a general sense "quantify the amount of information
>that leaks." The best one can do is to show probability reductions on
>the basis of particular message probability. (Even this would use
>some statistical approximation to the TRNG, which may not be accurate
>at message size or future time.) At best, it makes sense to talk
>about specific results for specific messages.
Whatever the case, there is some set of systematic criteria that
causes the machine or the human to decide that the cipher has been
successfully broken. Before those criteria are met, the cipher is
provably secure; that is, there is no way to know which possible
decrypted message is the intended one.
>But if the results depend upon the message, in what way can we hope to
>apply them to a system which can accept *any* message?
By choosing worst case messages as the test messages. Since you accept
the fact that some messages are more vulnerable to attack than others,
you should be able to categorize those messages in a hierarchy which
permits you to select the worst messages to use for your test ciphers.
>But suppose we can: What *you* want to do is to use strength data
>from particular cases to imply results about the whole, but -- even if
>we could measure strength -- it is useless unless we can reach
>probabilities like 1 in 2**56. This should require 2**112
>measurements, which is impossible -- and all this depends upon having
>such a measure which we do not.
The attacker does not need anywhere near that much certainty to
declare that your cipher has been broken, or he would never be able to
break them. So you can use his methodology to test your system.
>And where is the probability that a message was weak but this was not
>found by a cryptographer, and so the message counted in error as
>"unbreakable"? Can we reduce *this* below 1 in 2**56?
I am not agreeing to this analysis, mainly because I do not think it
applies. But I could easily be mistaken, so if you think I am please
elaborate with examples. I do not believe that you have to make your
system secure to the level you claim, any more than I think that the
attacker has to go to that level of effort to break the system. Of
course I am talking about the OTP system, and not any others. The
provable security of the OTP system rests fundamentally on the fact
that the key is at least as large as the message.
BTW, your hashing schemes are ways to apply a weaker key that is
considerably larger than the message, to the point where the resulting
cipher is just as random as though it were built from a truly random
key of the same length as the message.
Can you quantify that procedure in such a way that one could use it
for a particular hash and a particular keystream? If you can't, then
how do you know it is 100% secure?
>Junction breakdown is a like a continuous sequence of little
>avalanches. This is an advantage over detectors which perform a
>complete avalanche breakdown and then must charge up to fire again.
>Junction breakdown is a substantially higher level signal than
>resistive Johnson noise, which itself is large enough to be reliably
>detected and used.
The only comment I can make is that a solid state system is highly
correlated even in its quantum behavior. Otherwise such things as the
Brillouin Zone could not be determined.
With radioactive decay, every nucleus is completely independent of
every other one. I do not know if that makes much difference on a
practical basis, as long as the process is quantum mechanical in
nature.
But soon we will have real quantum computers which will be able to
generate true random numbers. That will be the Perfect TRNG.
>And what would you describe as "classical chaos"? Do you disagree
>when we say that the ping-pong ball systems used in lotteries are
>unpredictable?
I see no reason to claim otherwise. All I am saying is that classical
chaotic systems are not sources of true randomness. If you can test
them as suggested in Phase Two above, then they would be provably
secure on a practical basis.
The Lavarand system at SGI is based on a classical chaotic process in
a Lava Lamp. Yet Lavarand hashes the raw output. Can you explain why
the designers thought that was necessary if the chaotic process itself
was random?
>But your theoretical advantage does not necessarily extend into and
>through the detector. Can you *show* that the act of detecting the
>decay byproduct is *also* "completely independent"? Can you PROVE it?
I once did just that. You use your system to produce the Mossbauer
spectrum under carefully controlled conditions that can be done to
within an arbitrarily small error. The usual system was the 14.4 keV
K-capture system of Co-57/Fe-57. At low temperatures where the
Debye-Waller factor is largest, there is a very clean Mossbauer
spectrum available, one that can be tested for Lorentzian line shape
using non-linear regression techniques. If you provide enough data,
you can get the overall test to prove experimentally that the decay is
truly random to an incredible level of precision.
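For the curious, the line-shape test is ordinary nonlinear regression;
here is a sketch with made-up numbers (scipy's curve_fit stands in for
whatever fitting code one actually uses):

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(v, amplitude, center, half_width, baseline):
    # Absorption dip on top of a flat baseline of counts.
    return baseline - amplitude * half_width**2 / ((v - center)**2 + half_width**2)

velocity = np.linspace(-5, 5, 201)                  # source velocity, mm/s
model = lorentzian(velocity, 3000, 0.1, 0.3, 10000) # invented line parameters
counts = np.random.poisson(model)                   # counting statistics

params, cov = curve_fit(lorentzian, velocity, counts, p0=(2000, 0.0, 0.5, 9000))
print(params)                                       # recovered line parameters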
In fact that system is so sensitive that one can actually measure the
change in photon energy as the photons climb a gravitational potential
(Pound's experiments), and also in an inertial environment. In the
gravitational environment the source of radioactive decay was placed
on the first floor and the detector on a higher floor and the
Mossbauer spectrum was measured. The energy difference was attributed
to the effect of gravity on the photon - and it agreed with general
relativity exactly. In the inertial environment the source was at the
center of a centrifuge and the detector (actually the Mossbauer
absorber) was at the periphery of the centrifuge. The change in energy
was attributed to inertial effects and agreed with general relativity
exactly.
So here you have this incredibly precise probe that can be used to
demonstrate the true randomness of radioactive decay.
>>>>Both are proveably secure when implemented properly.
>>
>>>There is no such proof, either for the OTP, or a radioactive TRNG.
>>
>>Not 100% dogmatic proof, but there is proof available.
>
>Instead of "dogmatic," I prefer "deductive." As in Science.
In the first place even formal axiomatic systems are not as correct as
they were once thought to be. Secondly, not all of science is about
formally deductive results. Physics is an empirical science which
means that it is largely inductive when it is acquiring new axioms.
There was a scientist who once tried to do only deduction, as an
experiment. He got nowhere. You must introduce new axiomatic material
into science or it will go nowhere. That requires inductive
techniques, like pac-learning. In fact, one of the most famous methods
of pac-learning, Occam's Razor, is the byword for the physical
sciences.
>Let's see how the analogy plays out:
>
>First, for a wheel, we can identify the requirements for a particular
>application. Then we can understand "imperfection," and so quantify
>the implications imperfection would have on the application (e.g., how
>much bounce can we accept). We can then place bounds on how correct
>each wheel must be. Then we can actually measure wheels to see if
>they are acceptable or not.
>*None* of this is possible with the TRNG:
I do not see why not. The goal of cryptanalysis is to break your
ciphers, so you can try to break your own carefully prepared test
ciphers and see if they can withstand the attacks. If they do, then
they are provably secure, just like the wheel you discussed above is
provably round for the given application on a practical basis.
>We first are unable to say
>how much imperfection is allowed, not the least because we cannot
>define or measure "imperfection."
I am not following you. Imperfection can be quantified, otherwise the
attacker could not know when to decide that he has successfully broken
your cipher. So you use those criteria to test your ciphers.
>So we cannot place bounds on the imperfection.
Then how does an attacker know when he has successfully broken your
OTP cipher?
If you are gonna claim that he doesn't, then the process of breaking
OTP ciphers becomes a crap shoot, and the concept of security goes out
the window. IOW, under those conditions, the concept of security is
not even defined.
> And beyond having no defined measure, we cannot
>possibly try in practice all the ways something can go wrong.
If that were the case, then the attacker would not be able to break
your ciphers at all, since he would not have any way to know that he
has been successful. For him to be successful at all, there must be a
hierarchy of weaknesses, each one a little less serious than the
previous one for a given message volume. Knowing that, you can apply
those criteria to your test ciphers which are designed to expose such
weaknesses maximally.
>The issue for you is to make the TRNG production process just as
>tractable as wheel-making. When you can, then you will have a
>reasonable analogy.
I claim that the design and implementation of a radioactive TRNG can
be done that way. See Phase One above.
>>I see no reason in principle that the same general scheme cannot be
>>applied to proving the security of stream ciphers, once the proper
>>hypotheses can be formulated and the appropriate test ciphers
>>constructed.
>Then you need to look again.
Look where? I am basing my claim on the presumed systematic methods of
attacking OTP ciphers. If all they are is a big crap shoot, then I
challenge you to define crypto-grade security in the first place.
>One reason the inductive method does not work is that the measures you
>propose for the TRNG depend upon specific plaintexts. Unless you
>restrict the plaintexts for the system to be "similar" to those, your
>results cannot be correctly extrapolated to the whole. I doubt we can
>quantitatively know what "similar" is, or what the bounds for such
>similarity might be.
I believe that there is a hierarchy to plaintexts that ranks them from
minimally secure all the way to maximally secure. If you want to
propose entropy as one way to create that hierarchy, then fine.
If a text message is near random, then the cipher produced from it
with a presumably random keystream should leak no information. If on
the other hand, the text message is maximally regular, one would
expect that it would leak the most information when converted into a
test cipher.
That would constitute the hierarchy for test messages, so if your
keystream leaks an insignificant amount of information with worst case
test ciphers, what makes you think that other messages will leak any
more information than that?
>>I am not capable of carrying out that program, since I am merely an
>>Informed Layman (tm). But surely some crypto genius has already done
>>it and is currently pulling down a cushy 6 figures salary at the NSA.
>>Or perhaps he is pulling down a 7 figures salary at some huge
>>multinational corporation, like a worldwide oil company, that has to
>>have proveable security. Whatever.
>
>What you want is simply not possible. Of *course* you can't do it:
>*Nobody* is pulling down *that* salary.
HUH?! Where have you been, dude? The average salary for a staff member
at Bell Labs is in the low six figures.
As far as 7 figures, that is not at all outside the realm of the
possible, if such a person sold his skills to the highest bidder. We
pay complete morons at least that much to run large corporations,
whose only skill is that they have some politician in their hip
pocket.
If you had a skill that was more important than that, it would be
worth more than that - unless you are completely stupid and undervalue
your skill in the market. Since industrial espionage is the new
paradigm for man's unceasing aggression, I expect there are at least a
few crypto geniuses out there making 7 figures helping to wage that
war. Russia and China would still be in the stone age if they had not
implemented vast spy networks to steal our technology.
>>The single most important test is the Unbreakability Test. Certainly
>>techniques for breaking stream ciphers are known and can be
>>categorized formally into a heiracrchy that is reasonably complete.
>That is blatantly false.
Saying so is not good enough. You have to provide reasons for why you
believe what you assert. Sorry, as accomplished as you are in crypto,
much more so than the ordinary run of the mill in crypto, I still must
require you to provide reasons for such an assertion.
But if you are correct, and one cannot characterize methods for
breaking stream ciphers, then either cryptanalysis is a crap shoot, or
crypto-grade security cannot be defined.
Which is it? Is cryptanalysis an indeterminate process, or is
crypto-grade security an ill-formed concept?
>>IOW, the known list of ways to break stream ciphers is reasonably
>>exhaustive.
>No.
Then how do OTP ciphers get broken by computers?
Remember that all my remarks have been limited to a discussion of OTP
ciphers, which in principle are provably secure.
>>If so, then these ways to break stream ciphers could be
>>cast into hypotheses which can be measured for their "probable
>>approximate correctness" using computational induction technique.
>And if not...?
Then cryptanalysis is nondeterministic and crypto-grade security is
meaningless.
>Please. It is *your* "proof." It is up to *you* to prove "with
>certainty" that all possible attacks *have* been discovered.
I base my statement on the fact that computers are used in
cryptanalysis and they need a hierarchy of known attacks to be
successful - and that cryptographers know them.
You seem to be arguing that there could be an attack that no one knows
about today, but a few people discover tomorrow which makes your
ciphers vulnerable. I need to see your reasons for saying that.
I am claiming that there are no such hidden variables. I am claiming
that all the important ways to attack stream ciphers have been
discovered. Unless you have strong reasons to believe otherwise, you
are just erecting a straw man.
>(Since we do in fact see a continuing flow of new attacks, we might
>reasonably suspect that there are many more out there as yet unknown.
But do we see such a "continuing flow of new attacks" for crypto-grade
stream ciphers?
>Indeed, since attacks can have arbitrary complexity, there should be a
>wide array of ever-more-complex attacks yet to be known. With respect
>to the simple OTP, of course, they all come down to finding patterns
>in the sequence. But there are more ways to look for patterns than
>there are patterns.)
I am claiming that patterns can be placed in a hierarchy with respect
to the amount of information they cause an OTP cipher to leak.
You seem to be saying that there is the Magic Bullet pattern out there
just waiting to be discovered. I have no reason to believe that there
is such a pattern. I believe that all important patterns have been
found - for breaking the OTP system anyway.
Now that does not mean that quantum computers will not implement
uncomputable pattern recognition schemes. But for classical crypto, I
believe that all algorithmic pattern recognition schemes have been
discovered that cause OTP ciphers to be broken.
If you believe otherwise, then the burden of proof is on you to show
us one.
>Yes, it's "simple enough" if you have a total misconception of what
>cryptanalysis is.
In what way is my notion of cryptanalysis a "total misconception"? I
am assuming that modern cryptanalysis is done on computers.
>Just toss the cipher over the fence and get back strength results?
>Then extrapolate those results to prove "strength" for any possible
>message? I don't think so.
I never claimed any such extrapolation. I believe you must test to the
level of intended message volume, and that you must use worst case
tests. There is no extrapolation in that.
>The problem is not just in finding those people or convincing them to
>work for you,
which is completely irrelevant to this discussion.
>the problem is the actual testing itself: I believe it
>is *impossible* to prove what you need by testing.
Then show us why you believe it is impossible. All you have done thus
far is a lot of hand-waving. I, by contrast, have proposed concrete
programs at their initial stage, in Phase One and Phase Two above.
There is such a thing as experimental radioactive physics and such a thing as
pac-learning. Now we hear that there is such a thing as Hidden Markov
Models which are able to do wonderful things with speech recognition,
etc. If these methods can be used to break ciphers, they can be used
to test ciphers.
>Do you think the guys who supposedly have this capability cannot
>comprehend "proof by induction," and so are unaware that they have the
>ability to certify a provably unbreakable cipher? If that capability
>existed, we would be hearing about those proofs. We are not.
Are you sure that just because you are not hearing about them
automatically means that they do not exist?
You seem to be talking out of both sides of your mouth. When it is
convenient, you allude to the fact that there cannot be something
since you would have heard about it. Then you allude to the fact that
there cannot be something because you cannot possibly know everything
there is to know.
Please, which is it?
>I normally assume one bit per character, which is more conservative
>than usual. But there can be no such formula, and no fixed values.
I assume that is for an 8-bit extended ASCII character. I thought I
saw something like 1.3 bits of entropy per 8-bit character.
>The whole concept of "entropy" has major problems in practice: There
>*is* and *can*be* no one correct result. Everything depends upon
>context, and in reality we have different, partly-known, and dynamic
>contexts. Everything also depends upon our model of the possibilities
>(the "universe"): The better language results come from actual
>experiments on how people use language. But we don't know yet how
>that works. We don't know the model of language.
Can't you make reasonable assumptions and get a range, and then use
the worst case for making design decisions? After all, that is what is
implied in your statement above where you say that you use 1 bit of
entropy to play it safe. That is an engineering judgement based on
some knowledge of the range of values, otherwise you are just blowing
smoke (which I do not believe to be the case).
>Suppose we wish to send a phrase as an yes/no answer. Suppose there
>are about 5 different phrases we might use to convey the same result.
>Is the entropy concerned with 1 out of 5, or 1 out of 2? Or is it
>the entropy of the letters, in the context of all messages, or this
>particular message? Character-level-entropy does not tell us about
>word-level-entropy, sentence-level-entropy, or meaning-level-entropy.
>All of this exists only in context.
I assume that text entropy was based on frequency tables for various
kinds of text, since entropy is related to probabilities that are
calculated from such frequency measurements.
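That is, the zeroth-order estimate is just the Shannon entropy of the
single-character frequencies, something like this (the sample text is
arbitrary; real estimates use large corpora and context models):

from collections import Counter
from math import log2

def char_entropy(text):
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

sample = "the quick brown fox jumps over the lazy dog " * 50
print(round(char_entropy(sample), 2), "bits per character")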
>While I believe Bayesian analysis can be useful, I also believe you
>are expecting too much from it. It simply cannot do what you assume
>it should. So bring out the whips and force it into line! But all
>the whips you have are not going to make it do what it cannot do.
Then why even bring it up in the first place?
>>I am unqualified to carry out that program. Maybe in another
>>reincarnation. But I suspect that the math geniuses at the NSA have
>>already figured that out.
>I suspect differently.
You mean all that tax money is being wasted on another Big Govt
boondoggle? We're doomed then.
>>>OK, you start measuring and calculating. Start with experimental
>>>PRNG's which can be controlled. Let me know.
>>I wish I could.
>You could *start*. And if you *did* start, you would have to confront
>exactly the issues I have described and more. Just *starting* the
>process of seriously trying an analysis should be much more
>illuminating than continually arguing that you really are right.
I am not "continually arguing that I am right". I have stated out
front that my sole purpose here is to stimulate discussion that will
prove to be illuminating.
I will leave it to the experts to carry out such a program, or to tell
us that they already have and here are the results.
Remember that my assumption is that modern cryptanalysis is performed
on massive classical computers. That means that the procedures,
deterministic or nondeterministic, must be algorithmic, in which case
they can be used to test a proposed cryptosystem with a reasonable
expectation that the test results can be believed in the same way that
the results of the cryptanalytic attack can be reasonably believed.
IOW, I am effectively "inverting" the attack procedure to use it to
test the cryptosystem.
Bob Knauer
"If you want to build a robust universe, one that will never go wrong, then
you don't want to build it like a clock, for the smallest bit of grit will
cause it to go awry. However, if things at the base are utterly random, nothing
can make them more disordered. Complete randomness at the heart of things is the
most stable situation imaginable - a divinely clever way to build a universe."
-- Heinz Pagels
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************