Cryptography-Digest Digest #151, Volume #9       Sat, 27 Feb 99 07:13:02 EST

Contents:
  Re: Testing Algorithms ("Trevor Jackson, III")
  Re: Block cipher in the smallest PIC ("Tim Fowle")
  Re: True Randomness (Dave Knapp)
  Re: Define Randomness ("Trevor Jackson, III")
  Re: Define Randomness (Patrick Juola)
  Re: Testing Algorithms (Somniac)
  Re: One-Time-Pad program for Win95/98 or DOS (Daniel Kinnaer)
  Re: Benchmarks (Robert Harley)
  Re: True Randomness - DOES NOT EXIST!!! (wtshaw)
  Re: Miller-Rabin prime test. Random bit size ([EMAIL PROTECTED])
  Re: New high-security 56-bit DES: Less-DES ([EMAIL PROTECTED])
  Re: IDEA test vectors? (Dominik Werder)
  Re: True Randomness ("Douglas A. Gwyn")

----------------------------------------------------------------------------

Date: Sat, 27 Feb 1999 01:12:28 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Testing Algorithms

Doggmatic wrote:

> In article <7b698o$a4$[EMAIL PROTECTED]>,
>   [EMAIL PROTECTED] (Patrick Juola) wrote:
> > In article <7b53f6$649$[EMAIL PROTECTED]>,
> > Doggmatic  <[EMAIL PROTECTED]> wrote:
> > >In article <[EMAIL PROTECTED]>,
> > >  [EMAIL PROTECTED] (Steven Runyeard) wrote:
> > >> >No.  The guess is only as valid as the assumptions it is based upon.
> > >> >Since you have based yours on nothing concrete, your guess is pretty
> > >> >useless.
> > >>
> [snip]
> > >> different. Let's look back at the technology surrounding Babbage in
> > >> the late 1800s. If anyone had suggested to him that within 100 years
> > >> someone could build a processor about an inch square that could
> > >> perform 2,000,000,000 instructions per second he would have sent them
> > >> to the nearest nut house. It would have taken a massive leap of faith
> > >> to believe it was possible. I feel that in another 100 years we would
> > >> have made an equally 'unbelievable' leap in technology. Don't limit
> [snip]
> > >> Steve
> > >>
> > >
> > >  Okay .. given all that, as long as your computer is made of matter (or
> > >even, anti-matter, I conjecture), it will conform (nasty word!) to the laws
> > >of thermodynamics.  Bruce Schneier brought up the point in his book.  Every
> > >action takes a discrete amount of energy to perform and thus, even if your
> > >computer can load registers at speeds approaching light-speed, you still have
> > >to power it.
> >
> > This point is incorrect, no matter how many times Schneier's book is
> > quoted.  There is *no* minimum energy required for computation.
> >
> >       -kitten
>
>  That isn't "Schneier's Law" buddy .... that's the 2nd Law of Thermodynamics,
> which gives time a direction and describes this universe (on the assumption
> that it is a closed system).  The assumption may be debatable, but I don't
> too often see dropped plates leaping back onto the table mending themselves,
> no matter how many times I bump the pieces after they are on the floor.  Any
> state change in this universe causes entropy to increase and takes a GREATER
> THAN ZERO amount of energy.

This issue is getting mangled.  If there is a floor on the energy of computation,
say 1e-40 erg/bit-change, then we can calculate the energy required to count up to
2^256 or any other calculation.  For irreversible computation this is the case (I
don't know the value or dimensions of the floor).

For reversible calculations this is not the case.  There is no floor.  Thus, in
principle, calculations can be made with arbitrarily high efficiency.  For
instance, 1e-400 erg/bit-change would make it possible to enumerate 2^256 without
using the solar phoenix reaction in the power supply.
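For scale, here is a back-of-the-envelope sketch of the irreversible case.  The
numbers assume the Landauer bound of k*T*ln(2) joules per irreversible bit
change at room temperature; the poster left the floor's value unspecified, so
this is illustrative only:

```python
import math

# Hypothetical sketch: energy to merely count to 2^256 with irreversible
# logic, assuming the Landauer bound of k*T*ln(2) joules per irreversible
# bit change (k = Boltzmann constant, T = 300 K).
k = 1.380649e-23                # Boltzmann constant, J/K
T = 300.0                       # room temperature, kelvin
per_bit = k * T * math.log(2)   # ~2.87e-21 J per irreversible bit change

total = per_bit * 2.0**256      # at least one bit change per count
print(f"energy per bit change: {per_bit:.3e} J")
print(f"energy to count 2^256: {total:.3e} J")

# For comparison: the Sun radiates roughly 3.8e26 W; over 10 billion
# years that is about 1.2e44 J -- many orders of magnitude short.
sun_lifetime_output = 3.8e26 * 1e10 * 365.25 * 24 * 3600
print(f"Sun, ~10 billion years: {sun_lifetime_output:.3e} J")
```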

>  As it has been stated many times in this thread,
> if you're assuming that the intrinsic physical limits of this universe can be
> broken, then there is little point in this debate.  I'm sticking with this:
> 256-bit keys will be infeasible to brute-force by any computer in this
> universe, which draws power from this solar system.  (Poke a hole in THAT one
>  :-)
>
>    ___/Mike  ...two legs good, four legs bad? ... Why conform?
> __/.   |      For my next trick, WATCH as this humble mouse breaks
> \-__   \___   Windows at the mere press of a button.
>     \          Hey! Where are we going, and why am I in this handbasket?
>
> -----------== Posted via Deja News, The Discussion Network ==----------
> http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own




------------------------------

From: "Tim Fowle" <[EMAIL PROTECTED]>
Subject: Re: Block cipher in the smallest PIC
Date: Thu, 25 Feb 1999 12:00:45 -0000


I'm after a small algorithm for hardware as well.  I know that the TEA
algorithm is broken, but how weak is the TEA extension proposed by Needham and
Wheeler in March 1997?

Bruce Schneier <[EMAIL PROTECTED]> wrote in article
<[EMAIL PROTECTED]>...
> On 24 Feb 1999 10:30:47 -0000, Paul Crowley
> <[EMAIL PROTECTED]> wrote:
> 
> >It sounds like you want this: Gideon Yuval, "Reinventing the Travois:
> >encryption/MAC in 30 ROM bytes", Proceedings of Fast Software
> >Encryption Workshop 1997.  This system, designed for the 8051, uses an
> >eight-byte key and whatever program you happen to blow into the rest
> >of the PIC as an extra source of nonlinearity.  I don't know if the
> >paper can still be found online.
> 
> That cipher is broken.
> 
> Bruce
> **********************************************************************
> Bruce Schneier, President, Counterpane Systems     Phone: 612-823-1098
> 101 E Minnehaha Parkway, Minneapolis, MN  55419      Fax: 612-823-1590
>            Free crypto newsletter.  See:  http://www.counterpane.com
> 

------------------------------

From: Dave Knapp <[EMAIL PROTECTED]>
Subject: Re: True Randomness
Date: Sat, 27 Feb 1999 06:29:54 GMT

"R. Knauer" wrote:
> 
> Would you recommend a book [about maximum likelihood and hidden Markov 
> models] that is accessible to the Informed Layman?

   As has been pointed out previously, it's not necessarily easy to find
a book on either one that is accessible to _anyone_.  I learned about
both by using them for specific applications.  

   I think an understanding of Markov processes is absolutely
fundamental to any discussion of "randomness," and an understanding of
hidden Markov models is crucial to any discussion of whether one can
determine if a given bitstream is "random."  Similarly, I think that
likelihood is (or ought to be) central to any estimation process, and
any inference about a bitstream that you would derive from it.

   But you will find that the books written about hidden Markov models
are almost all about speech processing, and the books about likelihood
are all about parameter estimation.  They won't be directly applicable,
but the ideas contained in them will. You might try:

Elliott, Aggoun, and Moore, Hidden Markov Models : Estimation and
Control (Applications of Mathematics, Vol 29);  Springer Verlag; ISBN:
0387943641

MacDonald and Zucchini, Hidden Markov and Other Models for
Discrete-Valued Time Series (Monographs on Statistics and Applied
Probability, 70) Chapman & Hall; ISBN: 0412558505

Sinclair, Algorithms for Random Generation and Counting : A Markov Chain
Approach (Progress in Theoretical Computer Science) Birkhauser; ISBN:
0817636587

Edwards, Likelihood;  Johns Hopkins Univ Pr; ISBN: 0801844436
(this one is good in that it explains pretty well the philosophical
foundations of likelihood and the difference between likelihood and
probability).

Parametric Statistical Models and Likelihood; Springer Verlag; ISBN:
3540969284

Eliason, Maximum Likelihood Estimation : Logic and Practice (A Sage
University Papers Series : Quantitative Applications in the Social
Sciences, No 96) Sage Pubns; ISBN: 0803941072

------------------------------

Date: Sat, 27 Feb 1999 01:35:08 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Define Randomness

R. Knauer wrote:

> On Fri, 26 Feb 1999 00:36:17 -0500, "Trevor Jackson, III"
> <[EMAIL PROTECTED]> wrote:
>
> >> You are far too quick to jump to conclusions until the issue has been
> >> completely explored. Is that because you do not want someone
> >> challenging your pat notions about crypto?
>
> >No.  I'm bored by an impenetrably stupid audience.
>
> That is an incredibly arrogant statement to make on a Usenet forum,
> which by definition has a very mixed audience. Such parochialism was
> supposed to have disappeared after the onset of the Renaissance.

I don't think anyone else will mind that statement.

>
>
> >> For one thing, the assumption that the sequence is correlation-free is
> >> unfounded for an actual sequence, especially one that has noticable
> >> bias. It is possible that the mechanism that is producing the bias
> >> could also be producing some kind of long-range correlation that
> >> becomes short-range correlation when bits are removed by the vN
> >> procedure.
>
> >I've addressed these topics three times.  I have failed to communicate the
> >issues involved three times.  Is there any reason for either of us to believe I
> >might be successful the fourth time?
>
> Perhaps - if you make your comments clear enough to be understood. The
> last three times amounted merely to pontifications on your part. Why
> not try to address the explicit questions raised instead of trying to
> snow job everyone.
>
> My question is very clear - how do you know that the vN procedure (or
> any general anti-skewing procedure) does not cause correlations to
> become significant in the keystream? Assume that there is always some
> degree of correlation present in any real world keystream, which is
> insignificant before the keystream was anti-skewed.
>
> Your challenge is to explain your position in a way that people can
> understand. Anybody can blow smoke.
>
> Bob Knauer
>
> "Did you ever notice that when a politician does get an idea
> he usually gets it all wrong?"




------------------------------

From: [EMAIL PROTECTED] (Patrick Juola)
Subject: Re: Define Randomness
Date: 25 Feb 1999 08:58:47 -0500

In article <[EMAIL PROTECTED]>,
Trevor Jackson, III <[EMAIL PROTECTED]> wrote:
>>
>> 2) I have not been convinced that anti-skewing does not introduce
>> correlations, which would make the output unsuitable for crypto-grade
>> ciphers. After all, the anti-skewing procedure is algorithmic, so
>> there is always the opportunity to introduce correlation(s).
>>
>> The von Neumann method looks innocent on the surface since the
>> probability of dibits 01 and 10 are the same, namely p*(1-p). But that
>> says nothing about the correlations inherent in their appearance. If
>> you start throwing out bits like in the von Neumann method, who knows
>> what patterns are left behind.
>
>That is exactly the point.  No one knows.  I.e., it is unpredictable.

No.  The point is that the vN method does not *introduce* patterns.

If there are long-range correlations in the input data, the vN method
will not remove them.  What it *will* do is remove bias from
independent bits.

It's an easy proof; I've sent it to you.

        -kitten
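
The vN procedure under discussion fits in a few lines; this is a minimal
illustration, not anyone's production code.  It shows the bias removal for
independent bits, and does nothing about correlations across pairs:

```python
import random

def von_neumann(bits):
    """Von Neumann debiasing: examine non-overlapping bit pairs, emit 0
    for the pair (0,1), 1 for (1,0), and discard (0,0) and (1,1).
    For independent bits with bias p, both emitted symbols occur with
    probability p*(1-p), so the output is unbiased -- but nothing here
    removes correlations already present across pairs."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)   # (0,1) -> 0, (1,0) -> 1
    return out

# Quick check with a heavily biased but independent source:
rng = random.Random(1)
biased = [1 if rng.random() < 0.8 else 0 for _ in range(100000)]
out = von_neumann(biased)
print(len(out), sum(out) / len(out))   # fraction of 1s lands near 0.5
```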


------------------------------

From: Somniac <[EMAIL PROTECTED]>
Subject: Re: Testing Algorithms
Date: Fri, 26 Feb 1999 22:58:27 -1000

D wrote:
> 
> I'm sure that there are many ways to test an algorithm besides the ways I
> can think of, but I have a few suggestions:
>     You can test to see if the output is some sort of simple function of the
> input and things like the next block of ciphertext, another encrypted file's
> corresponding block, etc.
>     You can check to see if the output has any common characteristics.
>     You can get to know your system better by computing the combined
> function through all the rounds.
>     You can test the Strict Avalanche Criteria, which specifies that if 1
> bit in the input changes, at least half of the output bits should change.

This seems to be a mistake. The "at least" part is not accurate: "on the 
average," half of the output bits should change. If you required at least 
half of the output bits to change for every single-bit input change, what 
would you expect to happen when multiple input bits change? Would more 
than half of the output bits change for every input change? Then you 
would run out of codes to appear in the outputs, assuming the input block 
has the same size as the output block.
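
The "on the average" reading can be checked empirically.  Below is a sketch of
such a test harness; the function under test is a hypothetical stand-in
(truncated SHA-256), chosen only because it is known to mix well:

```python
import hashlib
import os

def f(block: bytes) -> bytes:
    # Stand-in for a block function under test (hypothetical choice:
    # SHA-256 truncated to 8 bytes, just to have something well-mixing).
    return hashlib.sha256(block).digest()[:8]

def hamming(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def avalanche(func, block_len=8, out_bits=64, trials=200):
    """Average fraction of output bits that change per single-bit input
    flip.  A well-mixing function averages close to 0.5; the criterion
    concerns the average, not a guaranteed minimum for every flip."""
    total = count = 0
    for _ in range(trials):
        x = bytearray(os.urandom(block_len))
        base = func(bytes(x))
        for bit in range(block_len * 8):
            y = bytearray(x)
            y[bit // 8] ^= 1 << (bit % 8)
            total += hamming(base, func(bytes(y)))
            count += 1
    return total / (count * out_bits)

print(round(avalanche(f), 3))   # typically close to 0.5
```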

------------------------------

From: [EMAIL PROTECTED] (Daniel Kinnaer)
Crossposted-To: alt.security,alt.privacy
Subject: Re: One-Time-Pad program for Win95/98 or DOS
Date: Sat, 27 Feb 1999 07:36:20 GMT

On Sat, 27 Feb 1999 03:39:52 +0100, Helmut Kreft
<[EMAIL PROTECTED]> wrote:

>
>I suggest you delete this programm from your cryptography software
>collection. This implementation of OTPs is cryptographically weak.
>
>
>   Helmut

I would like to know why you think a OneTimeKey is cryptographically
weak.  Or is it just this implementation?

Regards

Daniel

------------------------------

From: Robert Harley <[EMAIL PROTECTED]>
Subject: Re: Benchmarks
Date: 21 Feb 1999 13:03:03 +0100


Medical Electronics Lab <[EMAIL PROTECTED]> writes:
> [...] the field
> sizes for ECDH are smaller so you need fewer cycles to do anything,

Yes, absolutely.


> and because it's all bit manipulation instead of multiplication
> it goes *lots* faster.

No, not at all.

The bit manipulation IS multiplication, but of elements in GF(2^n)
instead of GF(p).  So bit manipulation is necessary because there is
no hardware multiplier to help.  That causes a slow-down whereas for
GF(p) you can use the multiplication instructions on just about any
CPU under the Sun to go faster.
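
For concreteness, the "bit manipulation IS multiplication" point can be
sketched as carry-less polynomial multiplication plus reduction modulo the
field polynomial.  This is an illustrative sketch, not optimized code; the
GF(2^8) example below uses the AES field polynomial:

```python
def clmul(a: int, b: int) -> int:
    """Carry-less ('XOR') multiplication of polynomials over GF(2), the
    bit manipulation at the heart of GF(2^n) field arithmetic.  Partial
    products are combined with XOR instead of addition, which is why a
    hardware integer multiplier is no help here."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2n_mul(a: int, b: int, poly: int, n: int) -> int:
    """Multiply in GF(2^n) by reducing the carry-less product modulo the
    field polynomial (given with its x^n term included)."""
    r = clmul(a, b)
    for i in range(2 * n - 2, n - 1, -1):
        if r & (1 << i):
            r ^= poly << (i - n)
    return r

# Small illustration in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1:
print(hex(gf2n_mul(0x57, 0x83, 0x11B, 8)))   # 0xc1, the AES textbook example
```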

Also current knowledge suggests that using GF(p) is at least as secure
and probably more so than GF(2^n).  But as long as companies with more
money than sense are willing to pay for less than the best, I suppose
they will continue to get it.

Rob.

------------------------------

From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: True Randomness - DOES NOT EXIST!!!
Date: Sat, 27 Feb 1999 02:17:57 -0600

In article <7b7kgs$cgu$[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Bill
Unruh) wrote:

> Unfortunately you seem not to have heard of physics developed 100 years
> ago, called quantum mechanics. Two identical system, set up identically
> will not give the same results in the future. 
> 
Pardon me for telling this via a true example.  I once knew a fellow who
designed a black-box gizmo which was destined to be developed and produced
by someone in the near future...and ultimately was...now making lots of
folks tons of money.

Back to the story: The design was just right; after all, he had made one
of them actually work with off-the-shelf parts.  He had also contracted
for making many of them, long-term contracts with overseas suppliers, and
had gotten lots of funding for the project he believed in from a bunch of
folks.

OK, I say, let's make a few of them and see if any more work before going
into mass production...meanwhile, the boxes of thousands of parts are
already arriving and stacking up.  After making short runs of 3 units
several times, and getting wildly differing functions, *final* design
changes are made.

So, now do ten, and only two work...must be the parts, make 25...and
half a dozen work.  The results get no better; parts differ...a design
can be lost in the noise of variations.  What we were making was truly
illustrative of non-reproducible systems, and no amount of minor changes
would cover for bad gross design.  A good design allows considerable
variation and still functions.  Good design for analog randomness is, by
its very nature, going to be bad design for almost anything else.

All products will exhibit some variations, some of which might sneak up
and bite you, whether you think you are a good prognosticator of potential
problems or not.

And, time has a way of changing parts values, especially seen in
electronics parts.  Digital is supposed to remedy such problems, but
digital still must live in an analog world in which everything is
important, and parts can do you in, big time...I once had a shipment of
7490's from El Salvador that took spells of dividing by eight rather than
by ten...but that's another story.
-- 
A much too common philosophy: 
It's no fun to have power....unless you can abuse it.

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Miller-Rabin prime test. Random bit size
Date: Sat, 27 Feb 1999 11:01:24 GMT

In article <7b74jg$hf2$[EMAIL PROTECTED]>,
  "Erwin Molendijk" <[EMAIL PROTECTED]> wrote:
> When performing the Miller-Rabin test for prime n,
> you have to choose a random number A, 2<= A <=n-2.
>
> Is it ok to choose a random A between 2 and the
> max machine integer?   (2 <= A <= 0xFFFFFFFF)
>
> (Ofcourse:  n >> 0xFFFFFFFF)
>
> Btw, I choose A=2 for the first test.
>
> Regards,
>   Erwin
>
>
     In my implementation of the M-R test I used a number (16) of the
same small primes that I used to 'pretest' the candidate by division.
If the number is a liar to a prime base, the chances are that it will be a
liar to a base that is some multiple of that prime.
     If interested in my 32-bit implementation, e-mail me.
Robert G. Durnal
Web pages at www.afn.org/~afn21533
  and members.tripod.com/~afn21533
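
A minimal sketch of the test being discussed: a small-prime pretest by trial
division, followed by Miller-Rabin rounds with random bases A drawn from
[2, n-2].  The round count and pretest primes here are illustrative choices,
not Mr. Durnal's exact parameters:

```python
import random

SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53]

def miller_rabin(n: int, rounds: int = 16) -> bool:
    """Probabilistic primality test.  Pretest by trial division with a
    few small primes, then run Miller-Rabin rounds with random bases A
    in [2, n-2].  A composite survives each round with probability at
    most 1/4, so True means 'probably prime'."""
    if n < 2:
        return False
    for p in SMALL_PRIMES:
        if n == p:
            return True
        if n % p == 0:
            return False
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)     # random base in [2, n-2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                   # a is a witness: n is composite
    return True

print(miller_rabin(2**89 - 1), miller_rabin(2**89 + 1))
```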


------------------------------

From: [EMAIL PROTECTED]
Subject: Re: New high-security 56-bit DES: Less-DES
Date: Fri, 26 Feb 1999 10:22:40 GMT

In article <[EMAIL PROTECTED]>,
  Bryan Olson <[EMAIL PROTECTED]> wrote:
>
> [EMAIL PROTECTED] wrote:
> >   Bryan Olson <[EMAIL PROTECTED]> wrote:
> > > [EMAIL PROTECTED] wrote:
> > > > I must again recall
> > > > that completely misleading "example" for DES that you wrote here and
never
> > > > recalled.
> > >
> > > It was you, not I, who said it was for DES.  I regret that the
> > > initial presentation confused a reader on that point, but later
> > > I specifically clarified that I was not assuming DES.
> >
> > This maneuver will not work, Bryan. No cipher system that obeys the laws of
> > arithmetic will ever satisfy that "example" of yours.  I suggest that your
> > dragging on this will just lend more support to my second affirmation that
> > you are fudging data here, which is entirely unacceptable:
>
> If you think no secrecy system could produce such an example, then I
> welcome you to re-follow-up my Jan 16'th post, in which I argued the
> opposite.  You snipped all mention of it in your reply.

That "example" also was not quantitative and did not lead *to the numbers*
you posted before. Remember, another poster also complained that it was
obviously wrong to quote such probabilities.

>
> [...]
> > > > > Again I must decline.  "Unicity distance" is a term of art in the
> > > > > discipline.
> > > >
> > > > :-) which I point out is incorrect.
> > >
> > > Terms of art are not correct or incorrect; they are simply
> > > words with meanings.
> > ...
> >
> > I can't help but recall Humpty-Dumpty, so I will stop reading your msg here.
>
> I don't think one's reading choices call for publicly posting.

Sorry... that is the rule. When I have to invoke Humpty-Dumpty then it is time
to stop reading -- if it is a technical matter, of course.

But you still did not answer my didactic question. I will pose it again:
"What is the distance of your hand?"

Note that this question is analogous to asking: "What is the unicity distance of
cipher C?"

To help further, I will ask another question: "What is the number of fingers
on your hand?" ... and its analogue: "What is the unicity of cipher C?"

Cheers,

Ed Gerck


------------------------------

From: [EMAIL PROTECTED] (Dominik Werder)
Subject: Re: IDEA test vectors?
Date: Sat, 27 Feb 1999 11:30:02 GMT

On Thu, 25 Feb 1999 23:39:11 -0800, [EMAIL PROTECTED] (Wei Dai)
wrote:

>In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] 
>says...
>> are there other vectors too? (URL?)
>
>Here is a bunch more:
[..]
hey, that should be enough for the first time, thank you!!!!
DoMiNik

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: True Randomness
Date: Sat, 27 Feb 1999 03:26:53 GMT

"R. Knauer" wrote:
> On Fri, 26 Feb 1999 10:54:40 GMT, Dave Knapp <[EMAIL PROTECTED]> wrote:
> >[on hidden Markov models and maximum-likelihood estimation]
> >Since these two concepts are quite fundamental to applied probability, I
> >would say that any book that does not mention them is pretty seriously
> >deficient.

At least, it's not a comprehensive treatise on modern applied
probability and statistics.

Maximum likelihood is a standard goal (there are others) for
estimation of model parameters:  determine the parameters that
maximize the likelihood of the observed training data.
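
A minimal worked example of that goal (mine, not the poster's): for a sequence
of coin flips under a Bernoulli model, the likelihood-maximizing parameter is
just the observed frequency:

```python
import math

def log_likelihood(p: float, heads: int, n: int) -> float:
    """Log-likelihood of a Bernoulli(p) model for n flips with `heads` heads."""
    return heads * math.log(p) + (n - heads) * math.log(1 - p)

heads, n = 7, 10
# Maximize over a grid; the maximum sits at the observed frequency
# heads/n, which is also what calculus gives for this model.
grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=lambda p: log_likelihood(p, heads, n))
print(p_hat)   # 0.7
```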

HMMs are very important in speech recognition and in gene and protein
sequence modeling, among other applications.

> Would you recommend a book that is accessible to the Informed Layman?

I don't know what an Informed Layman is, but if you have a good
foundation in mathematics, try Frederick Jelinek's "Statistical
Methods for Speech Recognition" (MIT Press, 1997).

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
