Cryptography-Digest Digest #300, Volume #9       Tue, 30 Mar 99 03:13:04 EST

Contents:
  Re: How do I determine if an encryption algorithm is good enough? (R. Knauer)
  Re: How do I determine if an encryption algorithm is good enough? (R. Knauer)
  What is fast enough? ([EMAIL PROTECTED])
  Re: What is fast enough? (Sundial Services)
  Re: New Encryption Algorithim (Jim Gillogly)
  Re: What did Gilbert Vernam look like? (David Hamer)
  Re: newbie question ("Kryspin Ziemski")
  Re: What is fast enough? ([EMAIL PROTECTED])
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)
  Wanted: free small DOS Encryption program (Milton Pomeroy)
  Re: newbie question (David A Molnar)
  Re: Tripple DES key length. (Patrick Juola)
  How to avoid double random numbers? (Henrik Zawischa)
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: How do I determine if an encryption algorithm is good enough?
Date: Tue, 30 Mar 1999 02:35:49 GMT
Reply-To: [EMAIL PROTECTED]

On 29 Mar 1999 14:21:41 -0500, [EMAIL PROTECTED] (Patrick Juola)
wrote:

>Penicillin, for example, was not due to "unreasonable men"; the
>discovery of penicillin was a fortuitous accident

I would be more inclined to attribute the discovery of penicillin to
an effort that can be characterized as "probably approximately
correct" induction.

Fleming (IIRC) tried many hypotheses and each one was refined by the
inductive process, much as Sherlock Holmes did to solve his mysteries.

I fully appreciate that monumental discoveries appear serendipitous,
but having been a scientist, I fully concur with Edison that discovery
is 99% perspiration.

>that happened to
>someone who had been spending his career in the search for antibacterial
>and antibiotic substances for medical purposes.

Which supports the claim above.

>What, the Zeitgeist fairy tickled their thought patterns?  I doubt it.
>No one needed calculus at the time.  More bluntly, I call "bullshit"
>on this assertion.

The record shows that Newton used an incredible amount of raw
intuition to arrive at his discoveries, as the book "Isaac Newton: The
Last Sorcerer"  by Michael White describes in great detail.

His concept of "action at a distance" was lifted from astrology and
alchemy.

Bob Knauer

"What right does Congress have to go around making laws just because
they deem it necessary?"
- Marion Barry, Mayor of Washington DC

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: How do I determine if an encryption algorithm is good enough?
Date: Tue, 30 Mar 1999 02:38:37 GMT
Reply-To: [EMAIL PROTECTED]

On 29 Mar 1999 15:27:31 -0500, [EMAIL PROTECTED] (Patrick Juola)
wrote:

>>>Extraordinary claims require extraordinary evidence.

>>Occam's Razor states otherwise. It is usually the less extraordinary
>>claims that requires the more extraordinary evidence.

>What?

>Add William of Occam to your suggested re-reading list.

Maybe you need to tell us what your definition of "extraordinary" is.

The Michelson-Morley experiment was a simple one, yet the result was
extraordinary. By contrast, the rather mundane exposition of
tomorrow's weather requires an extraordinary amount of effort.

Bob Knauer

"What right does Congress have to go around making laws just because
they deem it necessary?"
- Marion Barry, Mayor of Washington DC

------------------------------

From: [EMAIL PROTECTED]
Subject: What is fast enough?
Date: Tue, 30 Mar 1999 01:13:49 GMT

Jack Lloyd and I are currently working on a cipher together.  I was just
wondering (from the community's point of view) what are acceptable speeds?

Right now, in unoptimized C code, on a 233 MHz Cyrix MII, I get between
1.4MB/sec and 2.9MB/sec (32 rounds and 16 rounds respectively).

Isn't anything above 1MB/sec considered fast enough? I mean, my HD controller
only works at 4.5MB/sec anyway!
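
For reference, raw throughput numbers like these can be measured with a
simple timing harness. The sketch below uses a trivial stand-in "cipher"
(a single XOR, purely hypothetical) just to show the measurement; the real
cipher under test would replace `toy_encrypt`:

```python
import time

def toy_encrypt(block, key):
    # Hypothetical stand-in for a 64-bit block cipher:
    # a single XOR with the key. Substitute the cipher being benchmarked.
    return block ^ key

def throughput_mb_per_sec(n_blocks=1_000_000, key=0x0123456789ABCDEF):
    # Encrypt n_blocks 64-bit blocks and report MB/sec processed.
    start = time.perf_counter()
    block = 0
    for _ in range(n_blocks):
        block = toy_encrypt(block, key)
    elapsed = time.perf_counter() - start
    bytes_processed = n_blocks * 8  # 8 bytes per 64-bit block
    return bytes_processed / elapsed / 1e6

print(f"{throughput_mb_per_sec():.1f} MB/sec")
```

Note that interpreter overhead dominates a harness like this; for honest
numbers the loop belongs in the same language as the cipher (here, C).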

Thanks,
Tom

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

Date: Mon, 29 Mar 1999 18:51:44 -0700
From: Sundial Services <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: What is fast enough?

[EMAIL PROTECTED] wrote:
> 
> Jack Lloyd and I are currently working on a cipher together.  I was just
> wondering (from the communities point of view) what is acceptable speeds?
> 
> Right now, in the unoptimized C code, on a 233mhz Cyrix MII, I get between
> 1.4MB/sec and 2.9MB/sec (32 rounds and 16 rounds respectively).
> 
> Isn't anything above 1MB/sec considered fast enough? I mean my hd controller
> only works at 4.5MB/sec anyways!


Beauty is in the eye of the beholder.  If you were dealing with the
situation that Bruce-S was talking about in "Live from the AES
conference," about a system that was doing 1,100 transactions per
second, a particular algorithm might not be fast enough.

"Fast enough" is, of course, "fast enough for this application, while
producing acceptable [for this application] security [against the
threats this application is likely to encounter]."

------------------------------

From: Jim Gillogly <[EMAIL PROTECTED]>
Subject: Re: New Encryption Algorithim
Date: Mon, 29 Mar 1999 16:46:52 -0800
Reply-To: [EMAIL PROTECTED]

[EMAIL PROTECTED] wrote:
> 
> I've written a new encryption algorithim with a 64-bit key and a 64-bit
> block. (It's unnamed, any ideas?) I would like to submit it for public
> discussion.

Well, there's a missing semicolon.  The first S transformation (key sched?)
appears suspiciously regular: the top bits will usually be from K0 and
the bottom from K1, and K0-K1 is constant.  There are a lot of rounds,
leading to the suspicion that this number was picked at random rather
than for maximal efficiency given the desired security.  The key is too
short, and so is the block length.  You haven't given a decryption
function.

What do <you> like about it?  What analysis have you done?

-- 
        Jim Gillogly
        Sterday, 8 Astron S.R. 1999, 00:39
        12.19.6.1.2, 4 Ik 15 Cumku, Fourth Lord of Night

------------------------------

From: David Hamer <[EMAIL PROTECTED]>
Subject: Re: What did Gilbert Vernam look like?
Date: Mon, 29 Mar 1999 23:01:12 -0500
Reply-To: [EMAIL PROTECTED]

John Savard wrote:

> >I am trying to find an image file of Gilbert Vernam and/or Joseph
> >Mauborgne without any success. Has anyone ever run across a picture of
> >Vernam? If so could you refer me to the book/website?
> 
> I'm going to have to look at my copy of Kahn's _The Codebreakers_: I
> _thought_ there was a picture of Vernam in it.

You're correct....Vernam and Mauborgne both appear in the
plates between pages 588-589....!

DHH
-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
David Hamer                The Crypto Simulations Group
[EMAIL PROTECTED]      or     [EMAIL PROTECTED]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

------------------------------

From: "Kryspin Ziemski" <[EMAIL PROTECTED]>
Subject: Re: newbie question
Date: Mon, 29 Mar 1999 21:46:16 -0500

I was not specific enough in my question. Public-key cryptography works on
the principle that there is a mathematical relation between the public key
and the private key. If you knew the specific algorithm to convert from the
public key to the private key, then you could decrypt the message. Couldn't
you then analyze the code that decrypts the message and find the specific
parameters? (By this I mean: if the general algorithm were y = a*x + b, find
a and b - a simple example, but you get my meaning.) Am I missing something
there, or is this an actual problem for which there is a countermeasure?
What would stop me from just debugging a DLL and learning the public key of
the person that the message was sent to?
Sundial Services <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Kryspin Ziemski wrote:
> >
> > what's to stop someone from analyzing the byte level code of the
> > encrypting program to crack the encrypted message. This is a general
> > question but I would like it to be applied to such things as public-key
> > cryptography.
>
>
> Cryptography -assumes- that you know the encryption algorithm.  You
> might even have lots of ciphertext and some known or suspected
> plaintext!  Yet a "strong enough" algorithm must defeat your attack long
> enough (days, months, years) that you either give up or the information
> has become useless.
>
> You could be staring at the source-code to PGP, knowing precisely how
> the message(s) you are looking at were encrypted, able to dink around
> with the program itself to your heart's content ... and the idea is, you
> still can't figure out what the message says and/or you can't discover
> the key given that (say) you know what the message says.
>
> And so on.



------------------------------

From: [EMAIL PROTECTED]
Subject: Re: What is fast enough?
Date: 30 Mar 1999 01:41:11 GMT
Reply-To: [EMAIL PROTECTED]

[EMAIL PROTECTED] writes:
>Jack Lloyd and I are currently working on a cipher together.  I was just
>wondering (from the communities point of view) what is acceptable speeds?
>
>Right now, in the unoptimized C code, on a 233mhz Cyrix MII, I get between
>1.4MB/sec and 2.9MB/sec (32 rounds and 16 rounds respectively).
>
>Isn't anything above 1MB/sec considered fast enough? I mean my hd controller
>only works at 4.5MB/sec anyways!

100baseT is 12.5MB/sec.

-- 
Lamont Granquist ([EMAIL PROTECTED])
ICBM: 47 39'23"N 122 18'19"W

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Tue, 30 Mar 1999 03:04:17 GMT
Reply-To: [EMAIL PROTECTED]

On Tue, 30 Mar 1999 02:25:16 GMT, [EMAIL PROTECTED] (R. Knauer)
wrote:

>For example, let's say that we watch the ink particles diffuse 1
>million steps. The extremes are at +- 1 millon. Therefore the inner
>range we are considering (+- 5%) is at +- 10,000 from the origin,
>which is not all that small.

Typo: That should read +- 50,000, which is even larger than the
incorrect value of 10,000.

Bob Knauer

"What right does Congress have to go around making laws just because
they deem it necessary?"
- Marion Barry, Mayor of Washington DC

------------------------------

From: Milton Pomeroy <[EMAIL PROTECTED]>
Crossposted-To: comp.security.misc
Subject: Wanted: free small DOS Encryption program
Date: Tue, 30 Mar 1999 06:20:20 GMT

Wanted - a free DOS-based encryption program which is small, fast,
         strong and friendly

Explanation

I want recommendations of encryption software to store small amounts of
sensitive information (up to 30kbytes) for my own use - i.e. I encrypt it,
and I decrypt it.  Since I plan to carry the encrypted datafile and
encryption software on floppy disk and use it on various PCs (some of which
may not be owned by me), I plan to use it from DOS (don't want to load it on
PC, don't want any temporary decrypted data left on the PC's hard-disk).  The
PCs will be running DOS, Win95/8, or WinNT.  Typically, I'd run it from the
floppy in a DOS-Window.

The mandatory requirements therefore are:

  (1) runs from DOS (and DOS-prompt in a DOS-Window)

  (2) freeware/public domain

  (3) be accessible to someone (like me in Australia) who is outside USA
        (no export restrictions)

  (4) works in 450kBytes or less of RAM

  (5) already compiled i.e. an EXE or COM version is available
       (I don't want the uncertainty of my doing a poor compilation
       resulting in a poor security implementation)

  (6) EXE or COM file must be small - less than say 80kbytes
       (if it's large, like PGP for DOS at around 400kbytes, it takes
       over 10sec to load from floppy-drive)

  (7) fast execution - less than say 5 seconds to load from floppy
      and complete encryption/decryption of up to 30kBytes of data
      on a 486-66

  (8) can run from a floppy with any temporary files being stored on the
      floppy

  (9) strong (at least 80-bit-key strength)

 (10) user-ready incl enough documentation to be used directly without doing
        programming, compilation etc

 (11) algorithm has some pedigree - i.e. has widespread degree of respect
        in the crypto community

 (12) implementation (inc. compilation) should have some pedigree - i.e. has
           widespread degree of respect


------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: newbie question
Date: 30 Mar 1999 05:31:20 GMT

Kryspin Ziemski <[EMAIL PROTECTED]> wrote:
> i was not specific enough in my question
> public key cryptography works on the principle that there is a mathematical
> relation between the public key and private key
> if you knew the specific algorithm to convert from the public key to the
> private key then you could decrypt the message. couldn't you then analyze
> the code that decrypts the message and find the specific parameters
> ( by this i mean if the general algorithm was y = a*x + b find  a and b,
> simple example but you get my meaning ) am i missing something ther or is an

We design algorithms so that the amount of data which needs to be kept
secret is clearly identified. "data" here includes things like 'workings
of algorithm, parts of a secret key, third parties', and so on. The rest 
of the data is considered as if it's public. 
(warning - not standard terminology. I want something that includes both
  "algorithm details", "protocol details" and "key material." suggestions
  or pointers?)

Your question, if I've got it right, is "why can't we reverse-engineer a
piece of software and read the private key out of it, given that the
algorithm itself can't be inverted even though its workings are known?"
The answer is...well

        a) we clearly identify the secret data and then go to great lengths
        not to store it anywhere permanent.

        b) when we do have to deal with secret data, like to perform 
        decryption, we try to use it for as short a time as possible and
        in as difficult to observe a manner as possible. 
You may think these are difficult. You're right. See all the fun things
you can do to smart cards, or check out www.cryptography.com

So in a well-designed system, no DLL would contain the private key of the
user. The "a and b" of your example would be parameters fed to some general
encryption function in the DLL. The "a and b" would be kept by the user 
in some place considered secure, like their head, in a separate (possibly 
        encrypted) file, on a floppy, in their genetic code, etc. etc. 

Not all systems are well designed in this sense. Would anyone like to
give a definition of what it means to say a cryptosystem "sucks"?



> actual problem but there is a counter measure.
> what would stop me from just debugging a dll and knowing the public key of

Public keys don't give you any ability to decrypt. They allow you to encrypt.
You would need to find someone's private key to retrieve a message (assuming
        that no better way of attacking the system is available). 
Systems that keep a private key lying around in a DLL are not well designed.
If you encountered such a system, then, yes, this works.
Fortunately, with the popularity of cryptography these days, maybe after
a while we will see fewer such systems.
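
The asymmetry between the two keys can be illustrated with a toy RSA
example (a sketch with deliberately tiny, insecure numbers - real moduli
are hundreds of digits): anyone holding the public pair (n, e) can
encrypt, but decryption needs the private exponent d, and deriving d
from (n, e) means factoring n.

```python
# Toy RSA, hypothetical parameters far too small for real use.
p, q = 61, 53
n = p * q              # 3233: the public modulus
e = 17                 # the public exponent
# The private exponent d requires phi(n), i.e. knowing the factors of n:
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)    # modular inverse of e mod phi(n)

m = 42
c = pow(m, e, n)           # anyone with (n, e) can produce c...
assert pow(c, d, n) == m   # ...but only d recovers m
# For n = 3233 factoring is instant; for a 1024-bit modulus it is not.
```

So finding the public key in a DLL, as above, gains the attacker nothing
that wasn't already public.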

-David 


------------------------------

From: [EMAIL PROTECTED] (Patrick Juola)
Subject: Re: Tripple DES key length.
Date: 29 Mar 1999 15:54:21 -0500

In article <7dongn$gto$[EMAIL PROTECTED]>,
Mickey McInnis <[EMAIL PROTECTED]> wrote:
>In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] 
>(DJohn37050) writes:
>|> ...
>|> Because of the meet in the middle attack, a 3 DES key use of TDES has an
>|> effective strength of 112 bits to the known best attack..
>|>...
>
>I think it's a bit of an overstatement to refer to a meet-in-the-middle
>attack as if it is practical.  The storage of 2**56 trial runs, and/or
>searching through 2**56 trials to compare against makes
>it highly impractical to actually do a meet-in-the-middle attack.
>
>Or is there some implementation of a meet-in-the-middle attack
>against even 2-DES that doesn't involve (2**56)+ storage elements?
>That's 512 Million Gigabytes, if you store 64 bits per trial.

This is frighteningly close to practical, however.  

A farm of writable CD-Rs runs about $2/gig, retail.  Cut that
down to $1/gig, wholesale, and you're looking at only about 500M
dollars to store that many elements.  That's certainly inside
a government budget, and might even be doable in a large corporation.
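
The attack under discussion can be sketched on a toy "double encryption"
with 8-bit keys (a hypothetical stand-in, not DES; for real 2-DES the
table would hold the 2^56 entries whose storage is being costed above):

```python
def toy_encrypt(block, key):
    # Stand-in single cipher: 8-bit key, 8-bit block (not DES).
    return ((block ^ key) * 17) % 256

def toy_decrypt(block, key):
    return ((block * 241) % 256) ^ key   # 241 = 17^-1 mod 256

def meet_in_the_middle(pairs):
    """Recover (k1, k2) of a double encryption from known pt/ct pairs."""
    pt, ct = pairs[0]
    # Table of all "middle" values reachable forward from the plaintext:
    # 2^8 entries here, 2^56 for double-DES.
    table = {toy_encrypt(pt, k1): k1 for k1 in range(256)}
    # Work backward from the ciphertext and look for a meeting point.
    for k2 in range(256):
        mid = toy_decrypt(ct, k2)
        if mid in table:
            k1 = table[mid]
            # Weed out false positives with the remaining known pairs.
            if all(toy_encrypt(toy_encrypt(p, k1), k2) == c
                   for p, c in pairs[1:]):
                return k1, k2

k1, k2 = 0xA5, 0x3C
pairs = [(p, toy_encrypt(toy_encrypt(p, k1), k2)) for p in (7, 56, 200)]
print(meet_in_the_middle(pairs))  # roughly 2*256 operations, not 256**2
```

The time saving is the same at full scale - about 2*2^56 operations
instead of 2^112 - which is exactly why the storage cost is the
interesting number.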

        -kitten

------------------------------

Date: Tue, 30 Mar 1999 08:47:59 +0200
From: Henrik Zawischa <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: How to avoid double random numbers?

Hi there,

I'm rather new to this stuff, so this might be a really simple question
to answer. Nonetheless I'd like to know.

In Bruce Schneier's Applied Cryptography there is a chapter about
secure elections. On page 126 it says:
"Each message also contains a randomly generated identification number,
large enough to avoid duplicates with other voters"
What is large enough? Isn't the meaning of randomness that you can't
predict the numbers, i.e. you can't predict that you won't get doubles?
I mean, there might be just a tiny chance, negligible in other cases.
But if it happened and I found my ID number on the list of votes for the
opponent's party, I'd be rather displeased. I'd think that would spoil
the whole effort.
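
"Large enough" can be made precise with the birthday bound: with k voters
drawing IDs uniformly from a space of size N, the probability of any
duplicate is approximately 1 - exp(-k(k-1)/2N). A sketch (the voter count
is a hypothetical example):

```python
import math

def collision_probability(num_voters, id_bits):
    """Birthday-bound approximation of P(any two voters share an ID)."""
    n = 2 ** id_bits
    # -expm1(-x) computes 1 - exp(-x) accurately even for tiny x.
    return -math.expm1(-num_voters * (num_voters - 1) / (2 * n))

# 10 million voters, 40-bit IDs: a duplicate is essentially certain.
print(collision_probability(10_000_000, 40))   # close to 1.0
# 10 million voters, 128-bit IDs: the chance is astronomically small.
print(collision_probability(10_000_000, 128))  # about 1.5e-25
```

So the duplicate risk never reaches zero, but it can be driven far below
the chance of, say, a miscount - which is the sense in which Schneier's
"large enough" is meant.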

Thank's for any comment

Henrik Zawischa


------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Mon, 29 Mar 1999 15:17:19 GMT
Reply-To: [EMAIL PROTECTED]

On 29 Mar 1999 03:26:52 GMT, [EMAIL PROTECTED] (STL137) wrote:

>Everything you say is very accurate and concise,

It seems that you are one of the few who understands that.

>but your conclusion (in my opinion) isn't so accurate.

That's because I haven't put it in quantitative terms. I will try to
do that below, albeit in a very sketchy manner.

>We should use statistical tests on
>PRNGs and TRNGs, to make sure they're as good as we need them.

"Standard" statistical tests, those on very small samples compared to
the size of the ensemble, cannot be used to characterize the
randomness or non-randomness of a TRNG.

>Now, if you want to be really picky (I would be)

I do not believe it is being "picky" to realize that the purported methods
of characterizing a TRNG are faulty in a significant manner.

>we can run them on say, a quadrillion bits (or on a quadrillion 100bit keystreams).

Now you are getting closer to using statistical tests in a manner that
is effective, by increasing the size of your sample to that of the
ensemble.

BTW, we do seem to agree that such exhaustive testing will only point
out TRNGs that are not truly random. You cannot conclude from any
statistical test, even one based on such exhaustive testing, that a
TRNG is truly random. True randomness can only be tested for infinite
sequences.

But in order to disclose any non-randomness due to bit bias of the
ensemble, you must take a very large ensemble, much larger than even a
quadrillion (10^15) 100 bit keystreams. In order to do an ensemble
average for 100-bit sequences, you would need a sample size of order
2^100 = 10^30 sequences, or a quadrillion squared.

>The fact
>that our statistical conclusion can't be completely accurate (all
>statistics are like that) don't invalidate the results.

It does if the conclusions are incorrectly based on false notions.

>You just have to interpret the results with some caution.

I do not believe that one can hide behind the notion of probability
when it comes to assessing non-randomness of a TRNG.

>Running a test on 100
>bits? Sure, I'll be skeptical. However, I'd be inclined to trust a
>statistical test on a quadrillion bits. :-D

Unless you do the ensemble average, you are not doing the tests
correctly.

Consider a uniform Bernoulli process like the symmetric random walk
that occurs in one dimensional diffusion - say ink in water like the
example given earlier. As the number of steps n gets large, the number
of paths that the individual ink particles take becomes exponentially
large.

Physical intuition tells you that only a very small fraction of those
ink particles stay anywhere near the origin: the (Gaussian) spatial
distribution has a width of order sqrt(n), far wider than a single
step, though small compared to the maximum path length of n.

IOW, a very significant fraction of ink particles is "abnormal" in the
sense that they exhibit strong 1-bit bias. (Remember that the net
distance from the origin is determined by the excess of steps in one
direction, and that is a measure of the 1-bit bias). Therefore, even
though for every path to the right there is an equal path to the left
and the two cancel each other to give the mean value equal to zero as
expected from the law of large numbers, that does not mean that the
typical sequence is free from bias.

Most sequences are heavily biased (or they would be very near the
origin), and only when you consider the full-blown Ensemble Average do
the paths to the left cancel out the paths to the right to reveal the
bias of the ensemble, which is zero for a uniform random walk. If all
you do is sample a small number of sequences compared to the total
number in the ensemble, you cannot expect to characterize the behavior
of the entire ensemble correctly.
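
The point about typical sequences carrying bias is easy to check
numerically (a sketch; the exact figures vary with the RNG seed): over a
sample of symmetric random walks, the mean final displacement is near
zero, yet the typical magnitude of the displacement grows like sqrt(n) -
i.e. the typical individual sequence is measurably biased even though
the ensemble as a whole is not.

```python
import random

random.seed(1)
n_steps = 10_000   # length of each sequence (walk)
n_walks = 2_000    # sample drawn from the ensemble

finals = []
for _ in range(n_walks):
    # Each walk: +1/-1 steps; final position = excess of one direction,
    # i.e. a direct measure of the sequence's 1-bit bias.
    pos = sum(random.choice((-1, 1)) for _ in range(n_steps))
    finals.append(pos)

mean = sum(finals) / n_walks
mean_abs = sum(abs(x) for x in finals) / n_walks

print(f"mean displacement:   {mean:+.1f}")   # near 0
print(f"mean |displacement|: {mean_abs:.1f}")  # near sqrt(2n/pi), ~80
```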

The size of the ensemble for 100 steps (which is an unrealistically
small keystream for typical stream ciphers) was given above as of
order 10^30, which is far too large for statistical testing. Testing
on a very small fraction of that ensemble leads to erroneous results. 

When you consider typical keystreams of size 10^6 bits, the number of
sequences in the ensemble that you must test to characterize the TRNG
properly, is astronomically large - of order 2^{10^6}. I cannot even
convert that to a decimal base on my Texas Instruments scientific
calculator because it is so large.
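
For what it's worth, the number the calculator chokes on is easy to size
up in software: the decimal digit count of 2^(10^6) is
floor(10^6 * log10(2)) + 1.

```python
import math

# Number of decimal digits of 2**(10**6), without computing the number.
digits = math.floor(10**6 * math.log10(2)) + 1
print(digits)  # 301030 digits, i.e. 2^(10^6) is of order 10^301029
```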

BTW, people failed to spot an error in my earlier post where I had
stated:

+++++
Even for 10 bit sequences, where runs of 10 zeros is not all that
unlikely, emsemble averages to test for true randomness requires of
order 1 million sequences of 10 bits. And 10 bits is not very many
bits for a typical keystream in crypto.
+++++

That should have read:

Even for 20 bit sequences, where runs of 20 zeros is not all that
unlikely, ensemble averages to test for true randomness require of
order 1 million sequences of 20 bits. And 20 bits is not very many
bits for a typical keystream in crypto.

2^20 ~= 10^6.

Bob Knauer

"What right does Congress have to go around making laws just because
they deem it necessary?"
- Marion Barry, Mayor of Washington DC

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
