Cryptography-Digest Digest #875, Volume #8       Sun, 10 Jan 99 10:13:03 EST

Contents:
  Re: Twofish vs DES? ([EMAIL PROTECTED])
  Re: Twofish vs DES? ([EMAIL PROTECTED])
  Re: Birthday Attack calculations. ([EMAIL PROTECTED])
  Re: Practical True Random Number Generator ("hapticz")
  Re: On the Generation of Pseudo-OTP (fungus)
  Re: Birthday Attack calculations. ([EMAIL PROTECTED])
  Re: Practical True Random Number Generator ("Trevor Jackson, III")
  Re: Birthday Attack calculations. ("Trevor Jackson, III")
  Re: Practical True Random Number Generator (Bill)
  Re: On the Generation of Pseudo-OTP ("Trevor Jackson, III")
  Re: On the Generation of Pseudo-OTP (Bill)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Twofish vs DES?
Date: Sun, 10 Jan 1999 10:38:41 +0100

TERACytE wrote:
> 
> Which is better?  Twofish or DES?

DES is much slower than Twofish, and its key is smaller.

On the other hand there was not very much time to test Twofish - it's
strong against all attacks known to the developers, but there may be
unknown weaknesses.

At the moment I'd suggest using 3DES if speed is no problem, and
Blowfish or IDEA otherwise. In a few years Twofish may be ready to use.


Andreas Enterrottacher

[EMAIL PROTECTED]
[EMAIL PROTECTED]

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Twofish vs DES?
Date: Sun, 10 Jan 1999 11:43:01 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (JPeschel) wrote:
> >[EMAIL PROTECTED] writes, in part:
>
> >NEITHER
> >if you want something more modern than these weak
> >old ciphers
>
> This is nonsense, David.
>
> Joe
>
> __________________________________________
>
> Joe Peschel
> D.O.E. SysWorks
> http://members.aol.com/jpeschel/index.htm
> __________________________________________
>
>


  Joe, the question was a B.S. troll in the first
place; how could anyone take it as a real question?



http://cryptography.org/cgi-bin/crypto.cgi/Misc/scott19u.zip
http://members.xoom.com/ecil/index.htm

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Birthday Attack calculations.
Date: Sun, 10 Jan 1999 11:49:10 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] wrote:
> [EMAIL PROTECTED] wrote:
>
> >In article <[EMAIL PROTECTED]>,
> >  [EMAIL PROTECTED] wrote:
> >> [EMAIL PROTECTED] wrote:
> >
> > snip...
> >
> >
> >    Then run 100 cases after you're done with your low-level
> >testing. If you can't run 100 cases your method sucks.
> >If you get any collisions at all in the 100 cases then you
> >FUCKED up the CODE. But you MUST check the code for some
> >cases that you actually expect the code to run on.
> >  It reminds me of work when we bought code that was to run
> >on a DEC machine using VT100 terminals. THE CRAP didn't work;
> >we talked to the company on the phone several times. They claimed
> >it was developed on DEC machines using VT100 terminals. After
> >months, when one of their representatives came out to our site
> >after several visits, they brought one of their VT100 terminals and
> >the CRAP ran. But they had used a VT100 clone; their code
> >would not work on a real DEC VT100 terminal. I think the
> >company died; I sure hope so. But you have to run REAL TESTS
> >on the real machine you expect the code to be used on. Even if you're
> >a pompous asshole and arrogantly expect no problem.
> >
> There is not enough computing power on the planet  to do a full
> collision test on a 256 bit hash.  Do the math yourself.
> >
> >  You can expect all you want. You sound more like a manager than
> >a real programmer with much common sense.
>
> I think there is an insult in there somewhere, I guess that means I
> have arrived.
>

  Actually I am disappointed that someone else caught your mathematical
errors while I was just wasting my time telling you over and over
that a FULL collision test on your 256-bit hash was not needed.
But you do need to try a few cases just to verify that ZERO collisions
occur, but you're obviously too pigheaded to understand this point. If
this makes you think you have arrived then go ahead, you've arrived
just like Mr. H.

> Fred Van Andel
>

David Scott
P.S. I never did trust the stability of a Van compared to
a Car.
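A minimal sketch of the sanity check being argued for here - hash a batch of distinct inputs with a full-width hash and confirm zero collisions (SHA-256 and the input sizes are my stand-ins, not anything from the thread):

```python
# Run 100 cases and verify that ZERO collisions occur.  For a
# 256-bit hash, any collision among 100 distinct random inputs means
# the implementation is broken, not that you got unlucky: the odds
# of an honest collision at this scale are astronomically small.
import hashlib
import os

digests = set()
for _ in range(100):
    digests.add(hashlib.sha256(os.urandom(32)).digest())

# 100 distinct inputs must yield 100 distinct digests.
assert len(digests) == 100, "collision found: the hash code is broken"
print("no collisions in 100 cases")
```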


http://cryptography.org/cgi-bin/crypto.cgi/Misc/scott19u.zip
http://members.xoom.com/ecil/index.htm


------------------------------

From: "hapticz" <[EMAIL PROTECTED]>
Subject: Re: Practical True Random Number Generator
Date: Sun, 10 Jan 1999 08:17:56 -0500

random?? hmm, how's about a laser pointed into the sky, with the reflected
atmospheric refractions detected by a small photo-sensor? or point it at a
body of water, sampled over extended periods to build a composite database.

or maybe rather than random, perhaps a completely ordered world where
everyone is not trying to deceive each other?  ;-))

--
best regards
[EMAIL PROTECTED]





------------------------------

From: fungus <[EMAIL PROTECTED]>
Subject: Re: On the Generation of Pseudo-OTP
Date: Sun, 10 Jan 1999 14:25:18 +0100



"Trevor Jackson, III" wrote:
> 
> So use the RNG convention.  POTP for the fake and TOTP for the real
> thing. 

Whose convention is that?

A POTP is a "stream cipher".

-- 
<\___/>
/ O O \
\_____/  FTB.

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Birthday Attack calculations.
Date: Sun, 10 Jan 1999 11:53:44 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] wrote:
> "Trevor Jackson, III" <[EMAIL PROTECTED]> wrote:
>
> >> The whole point of testing on small hashes and extrapolating is that
> >> it is computationally impossible to do a birthday attack on a large hash.
> >> For a 256 bit hash you will need to create more than  2^128 hashes
> >> before the odds of a collision reach 50%.
> >>
> >> Fred Van Andel
> >
> >Something is wrong with this paragraph.  Either the attack is not a birthday
> >attack, or the odds of a collision are much higher.  An exhaustive search of
> >a 2^256 element state space has a 50% chance of a match against the target of
> >the search.  This is not a birthday attack.
> >
> >A birthday attack searches for any matching pair rather than a match against
> >a target.  Thus a birthday attack searching N states implies generating O(N)
> >states, but comparisons among N states means O(N^2) comparisons (assuming
> >the states are not generated in an order that allows comparison against
> >neighbors only).  Thus for N small the generation cost may dominate the
> >overall cost of the search but for even a 40 bit key the comparison cost
> >will dominate due to the 2^79 comparisons required.
> >
> >The odds of finding a matching pair among N of M states is the product over
> >0 to N-1 of the expression (M-i)/M.
> >
> >    odds = 1;
> >    for i=0 to N-1
> >        odds  = odds * (M-i)/M
> >
> >Thus a birthday attack can find a matching pair of elements much faster than
> >an exhaustive attack can find a match for a chosen target.
>
> Let's see if I understand you correctly. First you generate a pair of
> hashes and compare them to each other.  If they don't match then you
> throw them away and generate a new pair. Repeat until you find a
> match.
>
>

  I think you're wasting your time talking to Fred; he belongs back
in the Stone Age with his wife Betty and his friend Barney.
He does not want to use his brain.
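The birthday search under discussion can actually be run at a small size; a sketch using a hash truncated to 32 bits (the truncation, the input choice, and the use of SHA-256 are mine, for illustration):

```python
# Birthday-collision search on a 32-bit truncated hash.  Storing the
# samples in a dict avoids the O(N^2) pairwise comparisons: each new
# sample is checked against all earlier ones in O(1).  A collision is
# expected after roughly sqrt(2^32), i.e. around 2^16, samples.
import hashlib

seen = {}          # truncated digest -> input that produced it
i = 0
while True:
    h = hashlib.sha256(i.to_bytes(8, "big")).digest()[:4]
    if h in seen:
        break      # two distinct inputs, same truncated digest
    seen[h] = i
    i += 1
print(f"collision after {i + 1} samples: inputs {seen[h]} and {i}")
```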



http://cryptography.org/cgi-bin/crypto.cgi/Misc/scott19u.zip
http://members.xoom.com/ecil/index.htm


------------------------------

Date: Sun, 10 Jan 1999 08:36:36 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Practical True Random Number Generator

hapticz wrote:

> random?? hmm, hows about a laser pointed into the sky and reflected
> atmospheric refractions detected with small photo-sensor?

Atmospheric phenomena (weather and climate) are replete with regularities.
The time scale for sampling has to be very long (slow) if you want to
generate independent samples.

> or point it at a
> body of water
> sampled over periods of extended lengths to build a composite database.

Err, waves are BY DEFINITION periodic phenomena.

> or maybe rather than random, perhaps a completely ordered world where
> everyone is not trying to deceive each other?  ;-))

Just when is the due date on this project?


------------------------------

Date: Sun, 10 Jan 1999 08:31:51 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Birthday Attack calculations.

Fred Van Andel wrote:

> "Trevor Jackson, III" <[EMAIL PROTECTED]> wrote:
>
> >> The whole point of testing on small hashes and extrapolating is that
> >> it is computationally impossible to do a birthday attack on a large hash.
> >> For a 256 bit hash you will need to create more than  2^128 hashes
> >> before the odds of a collision reach 50%.
> >>
> >> Fred Van Andel
> >
> >Something is wrong with this paragraph.  Either the attack is not a birthday
> >attack, or the odds of a collision are much higher.  An exhaustive search of
> >a 2^256 element state space has a 50% chance of a match against the target of
> >the search.  This is not a birthday attack.
> >
> >A birthday attack searches for any matching pair rather than a match against
> >a target.  Thus a birthday attack searching N states implies generating O(N)
> >states, but comparisons among N states means O(N^2) comparisons (assuming
> >the states are not generated in an order that allows comparison against
> >neighbors only).  Thus for N small the generation cost may dominate the
> >overall cost of the search but for even a 40 bit key the comparison cost
> >will dominate due to the 2^79 comparisons required.
> >
> >The odds of finding a matching pair among N of M states is the product over
> >0 to N-1 of the expression (M-i)/M.
> >
> >    odds = 1;
> >    for i=0 to N-1
> >        odds  = odds * (M-i)/M
> >
> >Thus a birthday attack can find a matching pair of elements much faster than
> >an exhaustive attack can find a match for a chosen target.
>
> Let's see if I understand you correctly. First you generate a pair of
> hashes and compare them to each other.  If they don't match then you
> throw them away and generate a new pair. Repeat until you find a
> match.
>

Hardly.  Throwing away samples wastes the effort invested in creating them.  For
the purpose of the reply I assumed that each sample would be stored after
comparison with the previously stored samples.  Thus we need to store N samples,
and compare (N^2)/2 times.

> But by this method each pair only has one chance in N of being a
> collision and therefore would require N/2 total comparisons to reach
> the 50% level.  When we are talking large hashes this becomes a huge
> number.
>
> Another way is to generate all the hashes up front, sort them and then
> compare.  For example with a 64 bit hash you could generate your 1.2 *
> 2^32 hashes and save them to a file.  Sorting the file will take
> roughly 32 * 2^32 compares for a total of approx 2^37 operations.
> This is much more feasible than the 2^63 operations required when
> testing pairs of data.  You could run approximately 2^26 of the
> sorting tests in the same time as it takes to run one of the pairs
> test.
>
> Bear in mind that this method does not give you the average but rather
> how often you are successful at the 50% level.  This is a slightly
> different number.  If you want the true average you must generate more
> numbers and then determine afterwards where the match occurred.
>
> You refer to 2^79 compares required for a 40 bit hash.  This is the
> absolute worst case in which no collisions are generated even after
> all possible hashes have been created.  The odds of reaching the end
> of the hash without a collision are extremely remote. Even for a hash
> as small as 8 bits, the odds of generating all possible hashes without
> a collision are about 1 in 10^110.  For our 40 bit example most of the
> cases will cluster around 2^20 hashes, which corresponds to about 2^39
> compares. That is much more manageable.

You are still using the numbers for exhaustive search.  The birthday attack
numbers are much smaller.  As I don't have an arbitrary precision calculator at
hand I'll have to do some work to generate them accurately.  I'll try to
generate them and post them here.
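The quoted product formula is enough to generate those numbers for small state spaces; a sketch (the 16-bit toy size is my choice):

```python
# Probability of at least one collision among n samples drawn
# uniformly from a space of m values, via the quoted product:
#   P(no collision) = product over i of (m - i) / m
def collision_prob(n, m):
    no_collision = 1.0
    for i in range(n):
        no_collision *= (m - i) / m
    return 1.0 - no_collision

M = 2 ** 16                      # a toy 16-bit "hash" space
n = 1
while collision_prob(n, M) < 0.5:
    n += 1
print(n)                         # 302 for a 16-bit space
```

The 50% point lands near 1.18 * sqrt(M), which is the general birthday pattern: about 2^(k/2) samples for a k-bit hash, matching Fred's 2^128 figure for 256 bits.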


------------------------------

From: Bill <[EMAIL PROTECTED]>
Subject: Re: Practical True Random Number Generator
Date: Sun, 10 Jan 1999 06:17:33 -1000

Coin tossing can be a cheap way to get random numbers. In Santa Cruz,
California there are 2^6 homeless people who "will work for food". If
only we could harness the power of coin tossing, we could make the
Pacific Garden Mall a better place. Some of these unwashed burnouts sit
there for hours with coins in a paper cup, ready to endow the world of
cryptography with an inexhaustible supply of TRNs. The coins are tossed
with great skill and apathy.

What is needed is not microsensors to detect air molecules; simple TV
cameras and image-processing software on obsolete 486 PCs would be a
cheap solution. The bums toss 'em and the software counts 'em. One hand
does not wash the other: one hand gives coins, and the other hand tosses.

------------------------------

Date: Sun, 10 Jan 1999 09:32:43 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: On the Generation of Pseudo-OTP

R. Knauer wrote:

> On Sat, 09 Jan 1999 16:01:04 -0500, "Trevor Jackson, III"
> <[EMAIL PROTECTED]> wrote:
>
> >So use the RNG convention.  POTP for the fake and TOTP for the real
> >thing.  We acept the fact that there are good PRNGs and bad PRNGs.
> >Similarly we should expect there to be more or less good POTPs.  It's a
> >terminology issue not a fundamental issue.
>
> I do not care what terminology is employed as long as is not
> misleading. In the case of the term "pseudo-OTP", I consider it very
> misleading because it implies that there are good OTPs and not so good
> OTPs.

Actually, I think this is erroneous.  The name OneTimePad specifies a
single-use key.  The best possible key is one in which each key element (char
or bit) is independent of the others.  But having typists pound on keyboards,
as the Russians did, does not produce independent key elements.  They had a
True-OTP until they started reusing keys, but their key quality was nowhere
near perfect.

Thus I believe that a true OTP is one with a single-use key.  The quality of
the key does not turn a single-use key system into an inferior stream cipher.
It is still a true OTP.

> There is only one possible OTP cryptosystem that deserves that name,
> which is referred to simply as the OTP cryptosystem because there is
> no other kind.  All pretenders to the OTP are called Stream Ciphers.
> The term "pseudo-OTP" is an oxymoron.
>
> >No.  From the first principles of information theory, information --
> >entropy -- is identical to unpredictability.
>
> You must be careful not to confuse unpredictability in crypto with
> obscurity.
>
> For example, according to Greg Chaitin (op. cit.), a sequence that is
> irreducible in size algorithmically is random in his theory. That is,
> there is no way to decide formally what each of the bits is supposed
> to be.
>
> That seems to imply that if I take some plaintext and compress it with
> the best compression algorithm available, that it is therefore
> unpredictable. But we know better, because all that one needs to do is
> uncompress it and the plaintext is exposed.
>
> In fact it should be possible using text compression to achieve 1 bit
> of entropy per bit of compressed data. But that hardly means the text
> is unpredictable. High entropy may be a necessary condition, but it
> appears not to be sufficient.

It is not that simple.  The key phrases you used were "all one needs to do is
uncompress it" and "it should be possible to achieve 1 bit of entropy per bit
of compressed data."  Inside a compressed archive there exist all kinds of
structure to guide the decompression process.  If you strip out only the
compressed text you will not be able to uncompress it without some loss of
fidelity.

I'm not sure 100% entropy density is possible in a lossless system.  Note that
a hash is a lossy system, so it is easily possible to get 100% entropy density.
In fact the statement that there is irreducible structure to text implies that
100% entropy density is literally impossible.

Now which way do you want to have it?

> >A really good TRNG has 1 bit
> >of entropy per bit of data.
>
> But is that the same as saying that a TRNG is capable of outputting
> every possible sequence of a given length equiprobably? A hash
> algorithm can increase entropy to 1 bit in principle, yet it is not
> totally secure for crypto.

Here you have to provide an example.  The claim that a hash contains structure
is identical to the claim that the entropy density is less than 100%.  Can you
show any system with 100% entropy density that is not crypto-secure?

I believe that you cannot.  100% entropy density means that the bits are
unrelated.  Literally unpredictable.  I suspect that the criticism you got in
previous discussions, which I have not reviewed, was based on the claim that
you had failed to reach 100% entropy density, rather than the claim that 100%
entropy density may be inadequate.

> If you can say the same about a stream cipher, then it qualifies as a
> suitable generator for OTPs. But can you say that about stream
> ciphers? Not in general you can't.
>
> That is the whole purpose of this thread - to explore alternatives to
> the TRNG. I am curious to know if a "digit extraction generator" would
> qualify.
>
> > But that is based on very careful sampling of
> >the physical process.  Even quantum events indicate correlations.
>
> Not all do.
>
> Some kinds of radioactive decay are completely uncorrelated. They are
> the result of "spontaneous emission" caused by "vacuum fluctuations",
> the latter which are not real external events. The same is true of
> spectral emission from the excited state of an atom.

Well, we have no idea whether there are any correlations.  This is a far cry
from the claim that we can prove that there are none.

> >Some of which appear to violate the speed of light rule.
>
> Those are not *real* - they are "virtual" processes. Real external
> world processes must obey special relativity.

It would be nice to force the universe to match our theories, but it just ain't
so.  There is a class of dual-emission events in which two high-energy
particles are emitted in exactly opposite directions.  The properties of the
particles are related.  If you measure one particle its wave function
collapses, and so does the other particle's!  The interesting thing about this
is that the collapse of the second particle's wave function occurs at the same
time as that of the first particle, no matter how far apart they are.  This
appears to violate the speed-limit laws.

The technical term for such correlation among particles is entanglement.
Entangled particles have correlated behavior.  Yet entanglement is created
every time a pair of particles interact.  So (I speculate) in theory the entire
cosmos may consist of heavily entangled particles.  Thus there is room for an
arbitrarily large amount of correlation at the quantum level.

We just can't prove it or disprove it.  Yet.

> There is an area where such an apparent violation appears to be
> involved with non-local phenomena, such as in the experiments of Alain
> Aspect. But that is still quite controversial as it is mixed in with
> Bell's Theorem and Hidden Variables.

I do not recognize these references, but that is just my ignorance.

> >Now text also has entropy.  Raw text has less than one bit of entropy per
> >bit of data, but there is entropy -- unpredictability -- available in the
> >text.  If you choose a poor sampling mechanism or over-sample the data
> >you'll get less entropy per bit than the desired 1:1.  But there is no
> >question that entropy is available.
>
> The patterns in text are sufficient to break book ciphers trivially.

Raw text sure.  But not in text that has been hashed to 100% entropy density.
By definition, the bits of the hash are independent.

> >Thus, at the fundamental level there is no difference.  Any differences
> >are part of the respective sampling methods not the source of the entropy.
>
> I do not see what you are saying. You seem to be saying that there is
> no fundamental difference between an OTP and a book cipher. Yet there
> is most definitely a fundamental difference - the OTP is provably
> secure and the book cipher is trivially breakable.

A perfect OTP used correctly is provably not subject to analysis.  This is a
much weaker condition than true provable security.

An imperfect OTP (with lousy key quality such as the Russians generated) is
still an OTP.  It is theoretically breakable.

An incorrectly used OTP (key recycling) is certainly breakable.

> >Please amplify the nature of the error in light of the comments above re
> >terminology and fundamentals.
>
> I have tried many times. Please refer to my comments above.
>
> >> We already discussed/debated that whole matter a year ago and
> >> concluded that a text cipher, no matter how you buggered it, is still
> >> fundamentally different from an OTP generated by a TRNG.
>
> >Wrong,
>
> Then we wait with great anticipation for you to show us otherwise. But
> I warn you that this road is littered with the corpses of many failed
> attempts over the years.
>
> >If this conclusion is correct then we have an important addition to the
> >science of information theory.  What are the units by which we measure the
> >amount of magical correlation in a text stream?  Do newspaper text, poetry,
> >source code, and TV scripts all share the same degree of this special
> >correlation?
>
> Please clarify what you mean here. You seem to be saying that text
> does not have any correlations, which is patently untrue. Perhaps some
> examples are in order.

Of course natural text has correlations.  Predictable bits.  Less than 100%
entropy density.  However, raw natural text does contain information.
Entropy.  Unpredictable bits.  It is certainly possible to sample text to
extract it and use it as a key for an OTP or other cipher.  The only hard part
is judging how much entropy exists in the original text.

If the original text was generated by a person picking a button on the keyboard
and leaning on it for a random period of time, we probably have a source of very
diffuse entropy.  An entire sample would have fewer than 10 bits in it, a few
for the key and a few for the repeat count.  If the text was sampled from the
front pages of the NY Times, we probably have closer to the classic 1.2 bits
per character.

If you can specify the required theoretical density enhancement I believe I can
specify a theoretical procedure to realize it.
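Those per-character figures can be approximated directly; a sketch that estimates entropy from single-character frequencies (a unigram count is only an upper bound on the true entropy rate, since it ignores the correlations between characters discussed above):

```python
# Shannon entropy in bits per character, estimated from
# single-character frequencies.  English prose scores around 4
# bits/char by this measure; the classic ~1.2 bits/char figure is
# lower because it also discounts inter-character correlations.
import math
from collections import Counter

def entropy_per_char(text):
    counts = Counter(text)
    total = len(text)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

print(entropy_per_char("aaaaaaaa"))  # 0.0: fully predictable
print(entropy_per_char("abcdabcd"))  # 2.0: four equiprobable symbols
```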

> >This stuff that is inseparable from text reminds me of the magical fluid
> >that was part of all matter and that created heat when rubbed
> >(philosgin?).  Does it have any other special properties?
>
> Phlogiston, IIRC.
>
> Ironically, as happens all the time in science, such alchemy had a
> grain of truth in it. On the way to inventing thermodynamics there was
> a theory of "chryos" (a mystical substance which produced cold), that
> led to some mathematics which were later incorporated into
> thermodynamics.
>
> BTW, did you know that Isaac Newton practiced official alchemy? He
> spent inordinate amounts of his time in a fully equipped "laboratory"
> working on alchemy, trying to find the philosophers stone and all. His
> theory of gravity that he concocted to explain planetary motion
> incorporated a concept straight out of the mysticism of alchemy -
> namely, "action at a distance".
>
> Before Newton, there was no such concept in science, only alchemy. In
> fact there was no such concept of the "infinitesimal"  which is
> central to calculus. There were furious debates over such a concept
> and one was not allowed to speak of something that vanished for fear
> of excommunication by the Catholic Church. There is a book on all this
> called "Newton - The Last Sorcerer".

Well, Newton had a hand in those debates.  A nefarious one.  He and Leibniz
both claimed to have invented the fundamental concepts of calculus.
While Leibniz had precedence (as seen from history; at the time comm delays
provided enough skew to cloud the issue), Newton had a bully pulpit to
strengthen his claim.

It appears Newton wrote a series of articles regarding the attribution issue in
the journal of the Royal Society, of which he was the head.  The cute trick
was that he submitted the articles under a fake name.  Then he wrote reviews of
the articles and submitted them under a second name.  Then he wrote letters to
the editor about the reviews under still more names.

He won the PR war.  Later in life he stated that he greatly enjoyed "breaking
Leibniz's heart".  Feet of clay...

> Contemporary scientists will be considered the sorcerers of our era
> when viewed by scientists a few centuries in the future. I am certain
> they will consider our quantum and relativity theories as just so much
> silly mysticism.
>
> Bob Knauer
>
> "We hold that each man is the best judge of his own interest."
> --John Adams




------------------------------

From: Bill <[EMAIL PROTECTED]>
Subject: Re: On the Generation of Pseudo-OTP
Date: Sun, 10 Jan 1999 06:29:39 -1000

R. Knauer wrote:
> That seems to imply that if I take some plaintext and compress it with
> the best compression algorithm available, that it is therefore
> unpredictable. But we know better, because all that one needs to do is
> uncompress it and the plaintext is exposed.
> 
> In fact it should be possible using text compression to achieve 1 bit
> of entropy per bit of compressed data. But that hardly means the text
> is unpredictable. 

Looking at my copy of "Elements of Information Theory" by Cover and
Thomas, Figure 5.1 shows a Venn diagram of sets of compression
categories: all codes, non-singular codes, uniquely decodable codes, and
instantaneous codes. Huffman compression uses instantaneous codes, so that
a long string of 1s and 0s can be parsed to recover each symbol code
without any separator codes being needed. This leads to non-random-
looking codes like

e=0
a=10
b=110
r=1110
q=11110
x=111110

These are chosen so that frequently used letters have short codes. So when
you talk about "the best compression algorithm available", Huffman coding is
clearly not a good choice as a source of a POTP carrier. Which compression
algorithm would you recommend?
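A minimal Huffman construction shows how such instantaneous codes arise; the symbol weights below are made up for illustration, chosen so the output has exactly the shape of the code list above:

```python
# Build a Huffman code with a min-heap of (weight, tiebreak, codebook)
# entries.  The result is prefix-free ("instantaneous"): no code is a
# prefix of another, so a bit string parses without separators.
import heapq
from itertools import count

def huffman(freqs):
    tiebreak = count()  # keeps ties from comparing the dict payloads
    heap = [(w, next(tiebreak), {sym: ""}) for sym, w in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, next(tiebreak), merged))
    return heap[0][2]

# Made-up weights: the most frequent symbol gets the shortest code.
codes = huffman({"e": 50, "a": 20, "b": 15, "r": 10, "q": 5})
print(codes)  # e -> 0, a -> 10, b -> 110, plus 4-bit codes for r, q
```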

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
