Cryptography-Digest Digest #450, Volume #9       Thu, 22 Apr 99 11:13:04 EDT

Contents:
  Re: True Randomness & The Law Of Large Numbers (Mok-Kong Shen)
  128 bit DES ("dino")
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)
  Re: Thought question: why do public ciphers use only simple ops like shift and XOR? 
([EMAIL PROTECTED])
  Re: BEST ADAPTIVE HUFFMAN COMPRESSION FOR CRYPTO (Mok-Kong Shen)

----------------------------------------------------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Thu, 22 Apr 1999 16:02:32 +0200

R. Knauer wrote:
> 
> On Wed, 21 Apr 1999 20:56:57 +0200, Mok-Kong Shen
> <[EMAIL PROTECTED]> wrote:
> 
> >Could you tell what Feller means here? Does he mean that a 'normal'
> >coin does not exist in reality (which I certainly agree with) or what?
> >Does he want to say some psychologists are wrong or what? I simply
> >can't understand.
> 
> You need to read his book.

This isn't a cooperative attitude in a scientific discussion.
You have apparently put much effort into studying that work. I was
only requesting a small clarification in order to be able to
discuss it with you. Or were you yourself not clear about the question
I raised?? In that case, of course, we should drop that point from
our discussion.

> 
> >Since, however, I consider true
> >randomness to be a theoretical concept that has no exact real-life
> >existence, that impossibility doesn't trouble me at all.
> 
> True randomness has a real-life existence in quantum processes.

Let me say once more what I believe is most troublesome for your
discussion partners: you claim something, period, without supporting
arguments or clarifications. As I said before, that is dogmatism, not
science, independent of whether what you claim is true or not. I am
not a physicist, but I suppose it is not incorrect to say that quantum
theory is still a 'theory'. A theory may be very good, in which case it
can explain a large number of phenomena, but it is not identical to
absolute truth (which one of course can never know). Newton's theory is
a theory, so is Einstein's, whose place may someday be taken by yet
another theory. You certainly can postulate the existence of true
randomness, define its characteristics and experimentally discover real
physical events that appear to conform to your definition. But that
requires experiments and measurements. Otherwise the concept exists
only in 'theory' and no more. The theoretical 'existence' may even be
logically nice, elegant, etc., but that alone would be useless for the
real world, except as intellectual entertainment. On the other hand, if
you do experiments, then you need, among other things, a theory of
statistical tests. Now you claimed the existing tests are simplistic. I
think it is very fine that you take up that attitude, for there could
be a chance that through your effort the science of statistics moves a
step forward. But you have to put forward concrete, convincing and
detailed arguments supporting your claim that the currently existing
tests are inadequate, error-prone, or whatever defects you see in them,
and, if possible, give hints of directions in which you believe
researchers could fruitfully work to develop 'non-simplistic' tests.
Merely claiming dogmatically that the tests are 'simplistic' is
nonsensical and useless. That is what certain religious fanatics do,
not what scientists do, for you don't achieve your purpose (of
obtaining 'non-simplistic' tests someday) that way.

To take an admittedly far-fetched analogy: shouting 24 hours a day that
AIDS should be eradicated doesn't help to eliminate the disease at all.
Better to think of ways, technical, political or otherwise, that are
promising to stop the epidemic and put the proposal to the relevant
persons, or to organize something oneself that helps the ill. Do you
get my point here?? In fact, in the present context, since in your
opinion you have found certain weaknesses in the currently available
tests, you have a good chance to develop the wished-for
'non-simplistic' tests yourself. Why not try that?? It might give you
some solid fame in science in the near future. As far as I know, you
are a physicist, not a mathematician. But many physicists have
contributed very good material to mathematics, opening up new fields of
research. Why not try, with your own intellectual capacity, to
eliminate what you see as this misery, instead of dogmatically claiming
that all statistical tests today are no good for investigating true
randomness?? I am sure that all readers of this group would praise you
if you succeeded in developing the 'non-simplistic' tests and clearly
demonstrated their superiority over the 'simplistic' tests known in
statistics today.

> 
> >That presupposes the existence of certain reliable tests. What are
> >they? Any existing or really promising candidates? Note that
> >'reliability' is itself tightly connected to statistical test
> >theories.
> 
> I gave a sketch of how one might go about certifying a radioactive
> TRNG several months ago. You can look it up in the archives.

Yes, that was employing experts to judge the engineering designs.
That (alone) is totally unreliable!!!


> 
> > And so one goes round again in circles, if one denies the
> >applicability of statistical tests.
> 
> I suppose you will never catch on to the fact that I am not indicting
> all statistical testing, only those simplistic small sample tests
> which claim to make a reasonably certain determination of
> non-randomness.

See above about 'simplistic' tests.


> 
> >I rather suspect that you misunderstood him.
> 
> I am not misunderstanding him one bit. He made his comments in a
> perfectly unequivocal way. It is you who do not grasp what he is
> saying because you are so intimately bound up to the orthodoxy of
> simplistic small sample statistical tests for determining
> non-randomness.
> 
> >Perhaps Prof. Rubin
> >would comment on that. I haven't followed the discussion you mentioned
> >and would like to learn what's wrong with what I wrote above.
> 
> Just go into the archives and read it for yourself. Try:
> http://www.dejanews.com/home_ps.shtml
> with his full name as the keyword and sci.crypt as the forum. His
> direct comments in this regard are available over the last couple
> weeks.
> 
> >Regarding 'simplistic' I have already commented above. Please
> >give your 'non-simplistic' stuffs to us for PRACTICAL use.
> 
> Look in the archives using my name and the keyphrase "radioactive
> TRNG".
> 
> >Yes, employ experts to judge the engineering design, etc. etc??
> 
> That is only part of it. You must conduct diagnostic tests on the
> subsystems to certify that they are operating according to design
> specification. In particular, you have to be concerned about the
> detection circuit - e.g., deadtime caused by quenching effects and
> pulse pileup in the discriminator electronics.

As I said many times before, these diagnostic tests involve measurements.
Measurements have errors. One needs error analysis. To do error
analysis one needs the theory of statistical tests. Are the test theories
needed for the diagnostic tests 'non-simplistic' in your opinion??
If yes, please name them and explain why they are 'non-simplistic'
in your opinion. (If they are 'simplistic', then the whole chain
of your logical argument collapses!!) If these tests are indeed
'non-simplistic', please tell us whether they can be employed to test
arbitrarily given sequences. If yes, very fine, for we need no further
discussion. If no, please explain clearly why they are 'non-simplistic'
in one application while 'simplistic' in another.
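
(To make the point about error analysis concrete, here is a minimal
illustrative sketch in Python of the kind of statistics such a
diagnostic measurement involves: a normal-approximation confidence
interval for a Poisson count rate. The function name and the numbers
are my own, purely for illustration, not taken from any particular
TRNG design.)

    import math

    def poisson_rate_interval(counts, seconds, z=1.96):
        """Approximate 95% confidence interval for a Poisson count rate.

        counts  : number of detector pulses observed
        seconds : length of the counting interval
        Returns (rate, lower, upper) in counts per second.
        """
        rate = counts / seconds
        sigma = math.sqrt(counts) / seconds  # Poisson: variance of a count equals its mean
        return rate, rate - z * sigma, rate + z * sigma

    # Example: 10000 pulses observed in 50 seconds
    print(poisson_rate_interval(10000, 50.0))  # roughly (200.0, 196.1, 203.9)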


I would like to come back to a point that I dropped in my last response.
Concerning the monobit test, I consider it an important test, for it is
in fact a frequency test. If you want a uniform distribution, then the
most basic test is to count the frequencies. You may argue about the
sample size. But as I said previously, you can express your opinion
about the optimal size and discuss that matter with people
knowledgeable in statistics. The FIPS 140-1 tests were certainly not
specified by laymen but by statisticians. I believe they have
considered the issue of sample size well. But, of course, to err is
human. If you spot some weakness, why not state and explain your points
clearly and convincingly, so that before long we would have a revision
of the FIPS document??
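
(For concreteness, a minimal Python sketch of the monobit test as I
understand it from FIPS 140-1: count the ones in a 20,000-bit sample
and require the count to fall in an interval around 10,000. The bounds
below are the ones I believe that document specifies; please check the
standard itself before relying on them.)

    def fips140_1_monobit(bits):
        """Monobit (frequency) test over a 20,000-bit sample, per FIPS 140-1.

        bits : sequence of 0/1 integers, exactly 20,000 of them
        Passes if the number of ones X satisfies 9654 < X < 10346.
        """
        if len(bits) != 20000:
            raise ValueError("the test expects exactly 20,000 bits")
        ones = sum(bits)
        return 9654 < ones < 10346

    # A degenerate sample of all 0's fails immediately:
    print(fips140_1_monobit([0] * 20000))  # False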

Another point is one you raised several times, namely that with a
(theoretical) fair coin one could get a subsequence of all 0's of
arbitrarily given length. That this is possible is true (under the
axioms one uses) and, I suppose, well known. (By the way, a tabloid
newspaper ran the headline that last week's German lottery had 2, 3, 4,
5 and 6 among the six winning numbers. That of course surprised many
who know nothing about statistics.) I know your point is that such
sequences of 0's fail the statistical tests and that (therefore, in
your opinion) statistical tests are no good for true randomness. But
you clearly neglect the issue of application here. Before arguing
further, I would like to offer an analogy to illustrate that not
everything perfect is necessarily most useful. Consider distilled
water. It certainly is free of bacteria. But to drink distilled water
daily in place of tap water is fatal. Distilled water is valuable for
certain medical and chemical applications but not for common
consumption.

Now coming back to our topic: suppose there is a truly random source
and it furnishes me on demand a sequence of all 0's. Let me say what I
consider rational behaviour in that circumstance. One applies tests.
These fail, of course. One then has two alternatives, both of which, I
stress, are founded on a certain subjectivity, not objectivity:
(1) consider the source defective and therefore either seek to find the
probable errors in the source or try another source of randomness;
(2) consider the source to be o.k. and take another sequence from it.
One can ask what to do if the second sequence is again bad, in
particular all 0's. Well, one again has the aforementioned two
alternatives. Certainly the repeated choice of (2) amounts to an
infinite loop. But fortunately that never happens in practice, for a
human is not a robot. Even if he strongly believes that the random
source is o.k., he will give up following path (2) after a certain
number of trials, if only because he cannot wait too long to obtain a
sequence for use in his application.

Now one point you raised previously is that, since a sequence of all
0's CAN come from a truly random process, why shouldn't one accept and
use it? The answer is that, because we have a 'practical' and not a
'theoretical' application, we cannot use a sequence of all 0's, for
that would reveal our message directly to the opponent. One may still
ask why not, pointing to the fact that the proof of perfect security of
the one-time pad does not exclude the use of such sequences. Well, the
answer is again furnished by the phrase 'practical application'. We
face an analyst who (presumably) has at his disposal the same
statistical tools that we have. We have no very good tests for true
randomness, but he is in no better position either. If a sequence is
considered sufficiently random in our 'practical' sense (which excludes
those of all 0's), that sequence will be sufficiently random for him
too and therefore difficult for him to handle. A sequence that is not
considered sufficiently random in our 'practical' sense (a class which
includes those of all 0's) is also not sufficiently random for him and
is easy for him to handle. So in the end the point is not whether it is
o.k. for the source of true randomness to deliver a sequence of all 0's
(it is indeed o.k.), but that it is not o.k. for us to 'practically'
use such a sequence. So if I were to use any source of randomness
(truly random or not) in essential applications, I would apply
statistical tests to the generated sequences and discard those that
fail the tests. Whether this is correct in the view of statisticians I
am not sure, but I consider it one of the rational behaviours for
practical applications. It appears justifiable by the common sense of
human decision-making anyway. I hope I have stated my position on the
issue clearly, and we can certainly discuss it if you have opposing
opinions.
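
(A minimal sketch, in Python and with names of my own choosing, of the
behaviour just described: draw a sequence, test it, and give up after a
bounded number of rejections rather than loop forever. The source and
the test are placeholders, not concrete proposals.)

    def draw_usable_sequence(source, passes_tests, max_tries=10):
        """Ask `source` for sequences until one passes the statistical tests.

        source       : callable returning one candidate bit sequence
        passes_tests : callable returning True if the sequence is acceptable
        max_tries    : give up after this many rejections (alternative (1))
        """
        for _ in range(max_tries):
            seq = source()
            if passes_tests(seq):
                return seq
        raise RuntimeError("source keeps failing the tests; suspect it is defective")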

>> Yes, employ experts to judge the engineering design, etc. etc??

> That is only part of it. You must conduct diagnostic tests on the
> subsystems to certify that they are operating according to design
> specification. In particular, you have to be concerned about the
> detection circuit - e.g., deadtime caused by quenching effects and
> pulse pileup in the discriminator electronics.

But these diagnostic tests involve statistical tests which in your
judgement are 'simplistic'. See my response above.


>> If you simply DEFINE a quantum process to be equivalent to a truly
>> random process, then there would be nothing to discuss. But that is
>> not a scientific attitude. One should be able to do measurements to
>> make sure experimentally that certain statements in applied sciences
>> are true.

> In the first place, it is a tenet of orthodox QM that the underlying
> processes are truly random. That has been experimentally confirmed
> many times over. If you do not accept the intrinsic randomness of
> quantum processes then you have a very serious problem on your hands.

May I take the liberty of using the same 'spear' and say that if
you do not accept the tests that statisticians have developed
for testing your sequences, then you have a very serious problem
on your hands???


> >O.K. You perform measurements to determine the detector deadtime.
> >Doesn't that have to do with statistical tests, confidence level, etc.??
> 
> It most certainly has a lot to do with statistics. But it is not the
> same as attempting to determine the non-randomness of a sequence of
> bits directly using statistical tests.
> 
> You have failed to make this crucial distinction: I am not faulting
> statistical measures in general, only as they pertain to the direct
> determination of the non-randomness of an output sequence.
> 
> Sequences are not themselves random or non-random. It is the process which
> generates them that is either random or not. Using simplistic small
> sample statistical tests on output sequences does not give you
> anything of reasonable certainty about the process that produces them.

I suppose I have covered the points here, especially those concerning
the 'simplistic' tests, above. Please respond to my remarks there.

M. K. Shen

------------------------------

From: "dino" <[EMAIL PROTECTED]>
Subject: 128 bit DES
Date: 22 Apr 1999 13:49:00 GMT

Hi again
I need some help. My customer asks for a crypto application using 128-bit
DES. I implemented double DES with two 64-bit keys. Is that equivalent?
If the answer is negative, is triple DES with two 64-bit keys equivalent
to a single 128-bit DES?
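
(For concreteness, here is a minimal sketch of the two constructions I
mean, using the modern PyCryptodome library purely as an illustration;
the key handling is simplified and ECB mode is used only to show the
raw block transformations.)

    from Crypto.Cipher import DES, DES3
    from Crypto.Random import get_random_bytes

    k1, k2 = get_random_bytes(8), get_random_bytes(8)  # two 64-bit DES keys

    def double_des_encrypt(block):
        """E_k2(E_k1(block)) -- double DES with two independent keys."""
        step1 = DES.new(k1, DES.MODE_ECB).encrypt(block)
        return DES.new(k2, DES.MODE_ECB).encrypt(step1)

    def two_key_tdes_encrypt(block):
        """E_k1(D_k2(E_k1(block))) -- two-key triple DES (EDE), key k1||k2."""
        return DES3.new(k1 + k2, DES3.MODE_ECB).encrypt(block)

    print(double_des_encrypt(b"8bytemsg").hex())
    print(two_key_tdes_encrypt(b"8bytemsg").hex())
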
I apologize for my poor English, but I hope someone understands my question
:-)
thank you 

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Thu, 22 Apr 1999 14:43:07 GMT
Reply-To: [EMAIL PROTECTED]

On Thu, 22 Apr 1999 16:02:32 +0200, Mok-Kong Shen
<[EMAIL PROTECTED]> wrote:

>> You need to read his book.

>This isn't a cooperative attitude in a scientific discussion.
>You have apparently put much effort into studying that work. I was
>only requesting a small clarification in order to be able to
>discuss it with you. Or were you yourself not clear about the question
>I raised?? In that case, of course, we should drop that point from
>our discussion.

I said you need to read the book, because that is the only way to get
the answer to your question.
 
>Let me say once more what I believe is most troublesome for your
>discussion partners: you claim something, period, without supporting
>arguments or clarifications.

Huh?! What are you talking about?

I have offered far more in support of my position than anyone else
here has offered in support of the contrary position. I have offered,
among other things, direct quotes from respected books on the subject.

>> I gave a sketch of how one might go about certifying a radioactive
>> TRNG several months ago. You can look it up in the archives.

>Yes, that was employing experts to judge the engineering designs.
>That (alone) is totally unreliable!!!

I suggested far more than that. Just look in the archives.

>As I said many times before, these diagnostic tests involve measurements.
>Measurements have errors. One needs error analysis. To do error
>analysis one needs the theory of statistical tests. Are the test theories
>needed for the diagnostic tests 'non-simplistic' in your opinion??

You are not paying attention. I said repeatedly that I am not
indicting statistical measurement in general, only with regard to a
direct determination of non-randomness from an output sequence, and
then only when simplistic small sample statistical tests like the
FIPS-140 Monobit Test are used.

I give up with you. You are either deliberately trolling me or you are
incredibly dense.

<plonk>

Bob Knauer

European Parliament's Scientific and Technological Options Assessment,
Appraisal of Technologies of Political Control, including Mark-Free
Torture, implemented by the British military in Northern Ireland:
http://jya.com/stoa-atpc.htm


------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Thought question: why do public ciphers use only simple ops like shift 
and XOR?
Date: Thu, 22 Apr 1999 13:50:15 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (John Savard) wrote:
> ...
> Two comments are warranted here.
>
> - Since cryptanalysis represents the "hard" part of the work in designing
> a cipher, this is why cipher designers should themselves know something
> about cryptanalysis;
>
> - And I think you can see why this design process actually _increases_ the
> probability of a design which is strong against known attacks, but weak
> against a future attack someone might discover.


    This is an extremely radical statement. Do you know of others who
    argue in the same vein?

    Personally, I have often expressed the opinion that the biggest
    security risk in cipher design is the possible discovery of a
    catastrophic attack method against cipher designs considered
    strong today. A catastrophic attack would be an attack that can be
    used in _practice_ to uncover the secret key based on only a
    handful of known plaintexts. If this should happen in the future
    and work against a standard cipher, the repercussions could be
    worse than the Y2K error.

    Now I have argued that a possible defense against unknown attacks
    is ciphers that have as little internal structure as possible. My
    reasoning is that a catastrophic attack will probably take
    advantage of some characteristic or weakness of the cipher's
    structure. If a cipher has little structure, then it will be less
    likely to have that weakness. Now, what you are saying is, I think,
    more radical: you are saying that current cipher design
    methodology, based on analysis against known attacks, not only
    fails to strengthen new ciphers against unknown attacks but
    actually makes them weaker.

    Super-encipherment, where several distinct ciphers, preferably
    with distinct design philosophies, are combined in series, is
    another, albeit slower, defense against unknown attacks. The
    reasoning is that it is unlikely that an attack would be powerful
    enough to penetrate all the different key-schedule methods and
    layers of rounds. There is another advantage here: there may exist
    a "General Cryptanalytic Theory" that can be used to analyze and
    catastrophically break _any_ cipher whose workload is below some
    limit, i.e. any cipher that is fast enough. A slow and complex
    "Super-Cipher" would hopefully exceed this limit. I wonder whether,
    concurrently with the fast AES, we shouldn't have a standard
    super-encipherment algorithm scalable in speed. Really important
    traffic could then be encrypted at orders of magnitude lower speed
    than the AES, possibly at a few kilobytes per second on a PC.
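
    (A minimal sketch of the cascade idea, assuming the modern
    PyCryptodome library and using AES and triple DES in counter mode
    as two structurally different layers; the cipher choices, names and
    nonce handling are mine, purely for illustration, not a concrete
    proposal.)

        from Crypto.Cipher import AES, DES3
        from Crypto.Random import get_random_bytes

        def super_encipher(plaintext, k_aes, k_tdes, n_aes, n_tdes):
            """Encrypt with two independently keyed ciphers in series."""
            inner = AES.new(k_aes, AES.MODE_CTR, nonce=n_aes).encrypt(plaintext)
            return DES3.new(k_tdes, DES3.MODE_CTR, nonce=n_tdes).encrypt(inner)

        def super_decipher(ciphertext, k_aes, k_tdes, n_aes, n_tdes):
            """Undo the two layers in reverse order."""
            inner = DES3.new(k_tdes, DES3.MODE_CTR, nonce=n_tdes).decrypt(ciphertext)
            return AES.new(k_aes, AES.MODE_CTR, nonce=n_aes).decrypt(inner)

        k_aes, k_tdes = get_random_bytes(16), get_random_bytes(24)
        n_aes, n_tdes = get_random_bytes(8), get_random_bytes(4)
        msg = b"attack at dawn"
        ct = super_encipher(msg, k_aes, k_tdes, n_aes, n_tdes)
        assert super_decipher(ct, k_aes, k_tdes, n_aes, n_tdes) == msg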

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: BEST ADAPTIVE HUFFMAN COMPRESSION FOR CRYPTO
Date: Thu, 22 Apr 1999 16:53:26 +0200

SCOTT19U.ZIP_GUY wrote:
> 

> for a quick, not complete, explanation it is this:
> 1) the compressed file ends exactly on a byte boundary and is sent as such
> 2) the compressed file has filler in the last byte.
> 3) the compressed file is truncated so the last symbol is not fully written

I suppose that in files to be compressed not all 256 byte values are
used. So one could certainly choose a value that is not used as the
end-of-file symbol and then fill the last byte with anything, if
needed. That would give the receiver certainty that the file indeed
ends there and has not been truncated. What is your opinion? (One could
use a pair or a triple of symbols as end-of-file if all 256 values are
used.)
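
(A minimal sketch, with names of my own choosing, of that idea: pick a
byte value that never occurs in the file and use it as an explicit
end-of-file symbol; the Huffman coder itself is left out.)

    def pick_eof_symbol(data):
        """Return a byte value that never occurs in `data`, or None if all 256 occur."""
        used = set(data)
        for value in range(256):
            if value not in used:
                return value
        return None

    def append_eof_marker(data):
        """Append an unused byte value as an explicit end-of-file symbol.

        The Huffman coder treats this value as one more symbol (count 1);
        the decoder stops when it decodes it and ignores any filler bits
        in the last byte, so a truncated file is detectable because the
        end-of-file symbol never appears.
        """
        eof = pick_eof_symbol(data)
        if eof is None:
            raise ValueError("all 256 byte values occur; use a symbol pair instead")
        return bytes(data) + bytes([eof]), eof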

M. K. Shen

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
