Cryptography-Digest Digest #159, Volume #9 Sun, 28 Feb 99 10:13:18 EST
Contents:
Re: Define Randomness ("Trevor Jackson, III")
DEMYC's resolution about WASSENAAR arrangement (Ville O S Oksanen)
----------------------------------------------------------------------------
Date: Sun, 28 Feb 1999 09:41:44 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Define Randomness
Terry Ritter wrote:
> On Fri, 26 Feb 1999 15:48:34 GMT, in
> <[EMAIL PROTECTED]>, in sci.crypt
> [EMAIL PROTECTED] (R. Knauer) wrote:
>
> >[...]
> >Are you claiming that all inductive learning is not worthy of being
> >called a PROOF?
>
> No, I am not. But I *am* claiming that inductive reasoning is often
> false, unlike deductive reasoning. We thus must be very, very careful
> when we use induction, if we want correct results.
>
> It is *often* FALSE to draw a conclusion about the whole based upon
> some parts of the whole. This is of course what we do in statistics,
> and may be seen as the reason for speaking in probabilities about the
> results. But probabilities under 1.0 are not PROOF that every other
> possibility has been ruled out. But that is what we want: We are
> trying to rule out ANY POSSIBILITY AT ALL of weakness. At least that
> is what PROVABLY "unbreakable" means to me.
>
> Suppose we have a house with 20 rooms: After looking in 2 rooms, can
> we possibly hope to "induce" the contents of the other 18? Of course not.
> In this situation there is no reason for any particular room to
> reflect any other. So conclusions which are based on a small subset
> have no reason to be true. Induction should not be used here.
>
> Suppose someone in a 20-room house committed murder, and we want to
> find the weapon: Suppose we search 19 rooms without finding it; shall
> we now say that we have "proven" to a 95% probability that the weapon
> is not in the house? I suppose, but of what use is such a "proof"?
> What we really want to know is whether the weapon is there,
> *anywhere*. So as long as even *one* room remains unsearched, there
> is no PROOF that the weapon is not there.
>
> In cryptography we take even unimaginably small risks *very*
> seriously. Even tiny probabilities of weakness are more than we are
> willing to accept in normal ciphers. The acceptable probability of
> finding a break is not 1 in 20 (which is more than 1 in 2**5), but
> instead under 1 in 2**56. (Any key can be guessed, but it is clear
> that 1 good key hidden in 2**56 bad ones is not good enough.) We are
> not going to get that kind of probability by statistical induction.
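The scale gap Ritter describes between statistical confidence and cryptographic confidence is easy to compute directly; a minimal sketch (the figures are the ones from the text):

```python
# The two acceptance thresholds contrasted above, computed directly.
p_rooms = 1 / 20       # searching 19 of 20 rooms leaves a 1-in-20 chance
p_key = 1 / 2**56      # chance of guessing one good key among 2**56

# The statistical "proof" tolerates a failure probability roughly
# 3.6e15 times larger than the cryptographic threshold.
ratio = p_rooms / p_key
print(f"{ratio:.3e}")
```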
>
> >>I think "proof" means PROOF: a 100% iron-clad demonstration that no
> >>other possibilities exist. This is *not* the same as proof *if* only
> >>something else is what we hope and wish it to be.
> >
> >You are a dogmatist, namely only 100% dogmatic truth will suffice.
>
> Sorry, name calling is not going to work here.
>
> >You are excluding proofs arrived at by induction, such as recursively
> >constructed proofs and experimental proofs. For example, in your
> >system you could never accept that the speed of light is a constant,
> >since there is no a priori reason to prove that it is.
>
> Now that you mention it, I *would* like to see Michelson-Morley run at
> astronomic scale in 3D.
>
> >Never mind that there are very strong reasons, inductive reasons that
> >come from both theory and experiment, that the speed of light is a
> >constant, at least in a given locality in spacetime. Never mind that
> >the constancy of the speed of light is contained in Maxwell's
> >equations and is contained in the various measurements of the speed.
> >
> >For you the proof is not 100% iron clad, since there is always the
> >*possibility* that the speed of light is not a constant.
> >
> >Best stay away from the empirical sciences and even certain parts of
> >mathematics. Chaitin's Omega would be devastating to you, just like
> >Godel's theorem was devastating to Herman Weyl.
>
> I would say that if you are going to accept hand-waving as PROOF, best
> stay away from cryptography. It will be devastating to you.
>
> >>No. OTP systems are provably secure only in concept.
> >
> >I am maintaining that OTP systems can be made provably secure to
> >within an arbitrarily small amount of insecurity. I am basing that
> >claim on the existence of nearly perfect TRNGs and the existence of
> >induction method(s) that can give one a "probably approximately
> >correct" result regarding that insecurity.
>
> You can "maintain" what you like, but let's see the proof: *How* will
> you measure a constructed TRNG to conclude that it has sufficient
> bit-independence, lack of correlations, biases, and any other possible
> weakness? Just how will you measure and bound on every possible
> "weakness," and how will you prove that such bounds are sufficient?
> Then how will you measure a real device and show that it meets those
> bounds, and will continue to do so?
>
> The problem is not just collecting data to make a statistical
> determination. The problem is that we can never run all the tests.
> That is not an appropriate base for an inductive conclusion.
There is more to this issue. On the conceptual level we use theories.
These are subject to proof. On the practical level we use physical
devices. These are not subject to proof, but only to inspection with
respect to the theory on which the design of the device is based.
Proof is irrelevant to physical devices because proofs operate in the realm
of math and logic, and physical devices do not exist in that space.
Experiment is irrelevant to theoretical proofs. Note that experimental
evidence may lead us to have more or less confidence in a theory, but it is
not proof.
>
>
> >I have offered the radioactive TRNG as the nearly perfect TRNG, based
> >on the provable randomness of radioactive decay. I have offered a
> >computational induction method as the way to determine the extent to
> >which the OTP system can be said to be insecure.
>
> >>The *provably* secure OTP is a *theoretical* OTP -- and that is only
> >>good for sending theoretical data.
> >
> >It is provably secure on a practical level if you will accept
> >inductive proof as legitimate.
>
> Inductive reasoning must be kept under very tight control if it is to
> be correct. Reasoning from a few cases to the whole is often invalid.
>
> There has been quite a lot of talk about PROOF here. So how is it
> that I find myself unconvinced? This seems odd, because the whole
> idea of PROOF is that it be *compelling*. But I have yet to see even
> a concise set of statements for what is known, what must be shown, and
> how that will be done.
>
> Why don't you try to set down this supposed PROOF? Write it out.
> Note the assumptions.
>
> Show me.
>
> >The essence of all this is that if only an insignificant amount of
> >information leaks from a cipher, then it cannot be broken as a
> >practical matter. It does not have to be 100% unbreakable to be
> >unbreakable.
>
> You mean "almost unbreakable" is the same as "unbreakable"? I suppose
> that you, like Humpty Dumpty, intend to be the master of your words
> instead of the other way 'round. Alas, I have a single definition for
> "unbreakable," and it is "cannot be broken."
>
> In this context, "broken" means an exposure beyond the security
> claimed for the cipher. So if you don't claim much security, I
> suppose it would never be "broken." But that was not my understanding
> of your goals.
>
> If you want to claim PROVEN, ABSOLUTE security as in "the only secure
> cipher," then "unbreakable" becomes very clear and understandable. To
> be "unbreakable" it must *indeed* be "100% unbreakable." Anything
> other than this you need to qualify and quantify.
>
> >>In practice, OTP's depend upon TRNG qualities which cannot be ASSUMED
> >>and also cannot be PROVEN.
> >
> >You are correct, as far as your statement goes - with the word PROVEN
> >meaning 100% proven. But so what? If I can send OTP ciphers that are
> >95% proven secure by inductive methods, then how can you claim that
> >you have broken my OTP ciphers?
>
> If you had some way to measure that at most 5% of a message might be
> revealed, at least you would have a PROOF of *something*.
>
> But of what? Can we ever call a 5% exposure acceptable? Is this the
> same vaunted "unbreakable" cipher that we started with? In what way
> can we say "unbreakable" yet accept that it may be broken? What does
> this sort of "unbreakable" mean?
>
> I suppose it may be much easier to try to re-define "unbreakable" than
> to accept that the "unbreakable" cipher is out of reach. But I think
> I'll just use the old definition.
>
> >The OTP system relies on several things for its provable security,
> >and one is that the key be as long as the message. That means that all
> >possible intelligible messages can be decrypted by brute force. That
> >is not the case with seeded PRNGs, where once the message gets
> >significantly longer than the unicity length, there is a vanishingly
> >small probability that brute force will uncover more than one
> >intelligible message.
>
> Some practical RNG's do exist with internal states far in excess of
> most message sizes: I have built them. For these, we can show that
> every possible sequence (of some message length) *can* be produced by
> the RNG. It is only when we use "small" keys that this property is
> not retained, and that is because there can be at most as many
> sequences as there are keys. Normally we expect that the key hashing
> be of such a nature that no useful bias occurs in selecting the subset
> of possible sequences.
>
> So the advantage of the TRNG is not really that it can produce every
> possible sequence. (For example, we could define a PRNG construction
> which would grow until it had more internal state than any message bound,
> although we then have to key it.) Instead, the advantage of the TRNG
> is that there *is* no internal state, so there is nothing there *to*
> know. There is nothing to expose. There is nothing to find. There
> is no short key.
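The property claimed above, that a key as long as the message lets brute force produce every possible plaintext, can be demonstrated directly. A minimal sketch (the messages are illustrative, not from the posts):

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

message = b"ATTACK AT DAWN"
key = os.urandom(len(message))        # pad as long as the message
ciphertext = xor_bytes(message, key)

# For ANY candidate plaintext of the same length there is a key that
# "decrypts" the ciphertext to it -- so trying keys identifies nothing.
candidate = b"RETREAT AT TEN"         # same length, equally consistent
bogus_key = xor_bytes(ciphertext, candidate)
assert xor_bytes(ciphertext, bogus_key) == candidate
assert xor_bytes(ciphertext, key) == message
```

With a short key this breaks down: there are only as many reachable sequences as there are keys, which is the unicity-distance point quoted above.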
>
> >Most of the security of the OTP is contained in how it is constructed.
> >If you send only a few bits of cipher, the requirements for the
> >provable security of the TRNG are greatly reduced. As you send more
> >cipher bits the TRNG must be more secure. But unless you plan on
> >encrypting an inordinate amount of data, the TRNG doesn't have to be
> >100% provably secure.
>
> Does not being "100% provably secure" mean that we can't PROVE it to
> 100% levels, or that it is known to be a few percent insecure?
> Normally, "a little bit insecure" is like "a little bit pregnant."
>
> And how is this measured?
A key question. If we are not measuring in a meaningful way we are not
working in the realm of Science but in that of Art. In Art much weight is
given to style even at the expense of substance. Many of these
angels-dancing-on-a-pinhead arguments are stylistic rather than substantive.
One essential definition is missing in this discussion: insecurity. We know
what 100% security is, but there appear to be violent contradictions in the
perceptions of insecurity. Is it a binary property, in that anything less
than 100% security is equally weak? Hardly. Is it a single-dimensioned
axis or a space requiring multiple units?
Rather than focus on pure security, consider the issue of weakness. For
instance, what is 100% insecurity (other than an adversarial telepath)? Even
pure plaintext may not be 100% insecure, due to dilution by fake messages.
Cf. the blizzard of nonsense broadcast prior to D-Day in order to drown out
the actual messages transmitted en clair. Clearly the actual messages were
not 100% insecure.
>
>
> >Anyway, since you are going to ship the keystream on a CD, you will
> >likely make it over a finite period, like one month, and then shut the
> >TRNG down for a while.
>
> The problem with shipping keystream on a CD every month is that we
> have the most critical information for an entire month in *two* places
> (assuming the cipher is for communications). Now we have to guard
> *both* places, *all* the time. We have to pay the guards very well
> and threaten them with death, because the true strength of the OTP
> depends upon not just the data, but also the guards.
>
>
> We could encipher the CD data, or similarly, keep it in a safe. But
> then, if the enciphered result is available, all we need to do to
> break the OTP is to break the protecting cipher (or the safe). Which
> means, of course, that all the claims about unbreakability are gone.
This is an exaggeration in that communications security demands extreme
vigilance no matter what channel management technique is used.
> And then we must never even send the same *plaintext* to more than one
> destination. For if we do, and plaintext becomes available from one
> site, and The Opponents intercept the message to other sites, they
> will know each key used, and can re-write the messages at will.
This is not specific to the OTP. Any cipher vulnerable to a known plaintext
attack has the same problem. In fact, the OTP offers the ability to make
each comm link distinct, eliminating the problem completely.
>
>
> How do we quantify these dangers? Since they are inherent in the OTP,
> how is using one of them *not* "breaking" the OTP? How can we call
> such a design *really* "unbreakable" *even* *if* the keying sequence
> is absolutely perfect? Does "breaking" only apply to predicting the
> sequence? Is the user data thus irrelevant to measuring strength?
>
> >When you restart it, it will be in a slightly
> >different environment, so the 2nd keystream will be different from the
> >1st one in subtle ways. That makes the cryptanalyst's job much harder,
> >since now he must begin collecting information from the start again.
> >
> >One poster has repeatedly suggested shutting the TRNG down
> >deliberately to reset its environment and average out any slight
> >imbalances in its behavior.
>
> That would be Dr. Rubin.
>
> I agree that combining multiple sequences (throwing information away,
> as we do in any sort of post-process hash) can hide the patterns we
> fear may exist. (I am less sure that we must do this day-by-day or
> shut down first.) I see XOR as the minimum possible CRC.
>
> If we had some quantitative measure of the quality of the TRNG
> sequence, I think we could make some statements about the quality of
> resulting hashed sequences. But I don't think we have that measure,
> or at least not one which we can use in practice.
We're back in Art again. What qualities are you looking for in an RNG
sequence? Once the questions are specified it becomes much easier to
measure, analyze, and even discuss the issues.
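The bias-hiding effect of XOR-combining that the thread discusses can be quantified exactly in one simple case. A sketch, assuming independent draws from a single biased bit source (the 60/40 figure is illustrative, not from the posts):

```python
def xor_prob_one(p: float, n: int) -> float:
    """P(XOR of n independent bits = 1), each bit being 1 with prob p.
    Equivalent to the piling-up lemma: P = (1 - (1 - 2*p)**n) / 2."""
    q = 0.0                        # parity of zero bits is 0
    for _ in range(n):
        q = q * (1 - p) + (1 - q) * p
    return q

# A source biased 60/40 toward 1: the residual bias shrinks
# geometrically as more independent bits are folded together.
for n in (1, 2, 4, 8):
    print(n, xor_prob_one(0.6, n))
```

Note the assumption doing all the work: independence. Correlated bits need not combine this way, which is exactly the unmeasurable "quality" Ritter is pointing at.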
>
>
> >>This is not to say that an OTP cannot be secure in practice. I am
> >>sure it can. But I am fairly sure that it cannot be PROVEN for a real
> >>system.
> >
> >If you accept inductive proof, then it can be proven within an
> >arbitrarily small error. That's good enough for all but the most
> >impractical situations.
>
> Fine. Define a measure of overall "strength" which applies to any
> possible attack. Quantify the amount of strength we need. Conduct
> quantitative experiments on equipment to show how much strength that
> equipment has. Show the equipment is within bounds. Then prove those
> results will apply to future operation. No problem.
>
> Show me.
I suspect we need definition(s) of weakness(es) rather than one of
security. Peace is the ideal we deduce from the absence of war. So too is
security the ideal we deduce from the absence of weakness.
>
>
> >>It is false reasoning to concentrate on potential strength to the
> >>exclusion of operational problems. If the OTP really was "the best"
> >>cipher, everyone would be using it. They aren't. This is not because
> >>they have never heard of it. This is because most people consider the
> >>requirements for secure OTP use to be essentially impractical. I
> >>think they are right.
> >
> >I am focusing on security only from an analytical point of view. The
> >notorious problems with the actual OTP system are not under
> >consideration.
>
> And yet you wish to move a theoretical proof of "strength," into the
> practical world of a physical TRNG. But why would one do this other
> than to claim some sort of proof of strength in practice?
>
> I think you want the ultimate cryptographic goal -- The Unbreakable
> Cipher. But you don't want to discuss actually using this cipher,
> because the very thing which you would rely upon to make it
> "unbreakable" is also what makes it risky and impractical for most use
> -- which is to say, "breakable."
See theory vs devices above.
>
>
> >I have, however, as have others here, suggested using other methods
> >than a hardware TRNG to generate keystreams, which circumvent the
> >problems with orthodox OTP ciphers.
>
> Unfortunately, many of the problems with orthodox OTP ciphers are not
> how we generate the running key, but the fact that we have a key which
> is as large as the message, and which must be both transmitted to the
> other end and held in complete secrecy.
>
> >One is to use a text stream and
> >hash it to death to distill any entropy it has. That text could be
> >keyed to just about anything on the Internet or in print. IOW, all a
> >correspondent would have to do is provide a code book (memorized) that
> >selected the text for the current session based on some short session
> >key.
> >
> >You, among others, seem to agree that if the proper hash is selected,
> >then this method is very secure within practical limits.
>
> I think a hash of text can be made to have full "entropy" in practice,
> if we can select from a wide enough body of text. (But if the text is
> known, and the hash is known, the result will be known, which is
> precisely what we wish to avoid.)
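Knauer's "hash it to death" idea can be sketched concretely. SHA-256 in counter mode stands in here for whatever "proper hash" is meant (an assumption, since the posts name none), and Ritter's caveat applies unchanged: known text plus known hash gives a known keystream.

```python
import hashlib

def keystream_from_text(text: bytes, nbytes: int) -> bytes:
    """Distill a keystream from a (secret) text selection by hashing
    it with a running counter. Its secrecy is only that of the text."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        block = hashlib.sha256(counter.to_bytes(8, "big") + text).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:nbytes])

def xor_encrypt(plaintext: bytes, text: bytes) -> bytes:
    ks = keystream_from_text(text, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

# Both ends derive the same keystream from the same text selection,
# keyed only by the short "code book" choice of passage.
ct = xor_encrypt(b"meet at noon", b"some agreed-upon passage")
assert xor_encrypt(ct, b"some agreed-upon passage") == b"meet at noon"
```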
>
> But this is hardly the same as defining security limits, measuring
> closeness to those limits, and thus claiming security on the basis of
> measured values. We do not have those measures. We do not have that
> proof.
>
> >>I am aware of no PROOF that some other TRNG cannot be as good.
> >
> >OK, then one of the best. I do not believe that one can make a more
> >random TRNG than one that is based on radioactive decay.
> >
> >>In fact, if we *have* a TRNG, presumably it *must* be as good as any
> >>other, unless we are to start measuring (and defining!) the quality of
> >>each TRNG. Without such quality specifications, your claim might
> >>imply that no other source of randomness can possibly be called a
> >>TRNG.
> >
> >I am proposing that one can characterize a TRNG by using inductive
> >methods to attempt to break test ciphers produced by that TRNG.
>
> And I am proposing that you try it.
>
> Show me.
Note that a complete taxonomy of cipher attacks would be a complete taxonomy
of cipher weaknesses. Such a taxonomy would be useful because it would
permit an exhaustive analysis of a system, and that analysis could be
considered a proof of security.
However it is unreasonable to expect that a complete taxonomy can ever be
defined (by Godel), so proofs of security are probably never going to be
possible. Personally I'd expect proofs of software correctness first (nevah
hoppen).
The skeleton of a complete taxonomy of cipher attacks would still be a
useful tool in designing and analyzing ciphers, in that it would make
rigorous proofs of *IN*security much simpler.
>
>
> >If one
> >can quantify the amount of information that leaks from the ciphers,
> >then one would have a way to quantify the cryptographic strength of
> >the TRNG.
>
> One *cannot* in a general sense "quantify the amount of information
> that leaks." The best one can do is to show probability reductions on
> the basis of particular message probability. (Even this would use
> some statistical approximation to the TRNG, which may not be accurate
> at message size or future time.) At best, it makes sense to talk
> about specific results for specific messages.
>
> But if the results depend upon the message, in what way can we hope to
> apply them to a system which can accept *any* message?
>
> But suppose we can: What *you* want to do is to use strength data
> from particular cases to imply results about the whole, but -- even if
> we could measure strength -- it is useless unless we can reach
> probabilities like 1 in 2**56. This should require 2**112
> measurements, which is impossible -- and all this depends upon having
> such a measure which we do not.
I do not follow the transform from 2^56 to 2^112. How is this derived?
> And where is the probability that a message was weak but this was not
> found by a cryptographer, and so the message counted in error as
> "unbreakable"? Can we reduce *this* below 1 in 2**56?
>
[snip]
> >>Certainly we can measure closeness with respect to any particular
> >>test, but we cannot hope to run all possible tests. So we cannot
> >>prove the machine does what we would like to think it should.
> >
> >The single most important test is the Unbreakability Test. Certainly
> >techniques for breaking stream ciphers are known and can be
> >categorized formally into a hierarchy that is reasonably complete.
>
> That is blatantly false.
>
> >IOW, the known list of ways to break stream ciphers is reasonably
> >exhaustive.
>
> No.
>
> >If so, then these ways to break stream ciphers could be
> >cast into hypotheses which can be measured for their "probable
> >approximate correctness" using computational induction techniques.
>
> And if not...?
>
> >>Can we be 95% sure about any particular test? Yes. Can we be 95%
> >>sure that we have run all possible tests? Only at small lengths where
> >>we can enumerate all possible combinations.
> >
> >I would think that all the important ways to break stream ciphers have
> >been discovered by now. Can you prove with certainty that they have
> >not?
>
> Please. It is *your* "proof." It is up to *you* to prove "with
> certainty" that all possible attacks *have* been discovered.
>
> (Since we do in fact see a continuing flow of new attacks, we might
> reasonably suspect that there are many more out there as yet unknown.
> Indeed, since attacks can have arbitrary complexity, there should be a
> wide array of ever-more-complex attacks yet to be known. With respect
> to the simple OTP, of course, they all come down to finding patterns
> in the sequence. But there are more ways to look for patterns than
> there are patterns.)
This issue can be resolved in the same way the finiteness of the set of
primes was resolved. Not only is the set of stream cipher attacks not
closed, it is not closable. It is infinite.
Consider that an attack exploits a weakness. Given a set of attacks
purported to be complete, one can always synthesize a weakness (higher order
pattern) that is not used by any attack within the set. That weakness will
be the basis of a new attack.
[snip]
------------------------------
From: [EMAIL PROTECTED] (Ville O S Oksanen)
Crossposted-To: alt.privacy,talk.politics.crypto
Subject: DEMYC's resolution about WASSENAAR arrangement
Date: 28 Feb 1999 16:27:14 +0200
(The Democrat Youth Community of Europe is a coalition of about 30 center-right
and Christian democratic political youth organisations around Europe.)
DEMYC Condemns WASSENAAR ARRANGEMENT Provisions
Limiting Privacy of Communication
The DEMYC Executive Committee Meeting on February 13th, 1999,
ALARMED by the fact that the EU Commission is now considering the
implementation of the international Wassenaar dual-use Arrangement;
WHEREAS the use of cryptography implicates human rights and matters of
personal liberty that affect individuals around the world;
WHEREAS national governments have already taken steps to detain and to
harass users and developers of cryptography technology;
WHEREAS cryptography is already in use by human rights advocates who
face persecution by their national governments;
WHEREAS the privacy of communication is explicitly protected by Article
12 of the Universal Declaration of Human Rights and Article 17 of the
International Covenant on Civil and Political Rights;
WHEREAS cryptography will play an increasingly important role in the
ability of citizens to protect their privacy in the Information Society;
WHEREAS cryptography will also play an increasingly important role in
securing commercial transactions over the Internet;
FURTHER RECOGNIZING that decisions about cryptography policy may give
rise to communication networks that favor privacy or favor surveillance;
URGES the EU to base its cryptography policies on the fundamental right
of citizens to engage in private communication;
FURTHER URGES the EDU and its member organisations to resist the
implementation of the Wassenaar Arrangement;
RECOMMENDS that the member organisations turn their attention to growing
public concern about the effects of the Wassenaar Arrangement, the
widespread use of surveillance technologies, and the infringements on
Democratic Society and Personal Liberty around the world.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************