Cryptography-Digest Digest #133, Volume #13      Fri, 10 Nov 00 05:13:01 EST

Contents:
  Re: Hardware RNGs (Terry Ritter)
  Re: Hardware RNGs (David Schwartz)
  Re: Hardware RNGs (David Schwartz)
  Re: hacker...beware ("madrat")
  Re: MY BANANA REPUBLIC (Runu Knips)
  Re: About blowfish... (Runu Knips)
  Re: Rijndael question (Paul Crowley)
  Re: Announcement: One Time Pad Encryption - 0.9.3 - freeware (Richard Heathfield)
  Re: Q: Computations in a Galois Field (Mok-Kong Shen)
  Re: On obtaining randomness (Mok-Kong Shen)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Hardware RNGs
Date: Fri, 10 Nov 2000 05:04:09 GMT


On Thu, 09 Nov 2000 19:57:18 -0500, in
<[EMAIL PROTECTED]>, in sci.crypt Steve Portly
<[EMAIL PROTECTED]> wrote:

>Terry Ritter wrote:
>
>> On Thu, 09 Nov 2000 15:00:34 -0500, in
>> <[EMAIL PROTECTED]>, in sci.crypt Steve Portly
>> <[EMAIL PROTECTED]> wrote:
>>
>> >David Schwartz wrote:
>> >
>> >> Steve Portly wrote:
>> >>
>> >> > For applications that are network intensive, timing packets would be a
>> >> > better alternative than timing interrupts.  Network jitter is over 100
>> >> > times greater than system jitter, so the laws of physics give you a
>> >> > natural firewall.  "One cycle count" is easily lost to signal rise times
>> >> > even inside your system case.  I doubt anyone would be able to monitor
>> >> > TS intervals from a distance of more than a few feet.  This is sci.crypt,
>> >> > so a detailed explanation of system jitter would probably be off topic.
>> >>
>> >>         These are measuring the same thing. So it's not an alternative.
>> >>
>> >>         DS
>> >
>> >An assembly language call to int 13 takes a different amount of time than
>> >a packet arrival.  The key is to find the minimum time period that will
>> >always produce at least one bit of entropy.  Since 1995, CPU frequency
>> >wander and system jitter have become a source of entropy.
>> >
>> >http://www.ednmag.com/ednmag/reg/1995/070695/graphs/14dfcfga.htm
>> >
>> >With my crude analysis I found that it takes about 40 microseconds to get
>> >a bit of entropy.
>>
>> I for one would like to see the details of that analysis.
>>
>> Nobody denies that crystal oscillator noise-jitter occurs.  But I deny
>> that it can be detected in software on a conventional system.
>>
>> There is another form of "jitter" which is just the expected
>> relationship of a signal of one frequency sampling a signal of another
>> frequency.  That occurs independent of quantum events, and has no
>> continuing randomness at all.
>>
>> >My window of error could be anywhere from 10 to 100 microseconds depending
>> >on the speed, type of system, and entropy rollup you use.  I tested on
>> >Pentium 90, 233, and 350 MHz platforms with good results (a little slower
>> >on the 90).
>>
>> First of all, of course, we have to measure things that our Opponents
>> cannot measure from outside the security shield.  If the network is
>> open, they get to measure packet times just like we do.
>>
>> Next, a real measurement of absolute time generally depends upon
>> hardware timers which are set up to clear, enable, disable, and be
>> read.  Simply sampling the "current time" in software is not the same
>> thing at all, and is not enough.
>>
>> In particular, if we sit in a software loop, polling the indication of
>> interest, a whole lot of other things are going on.  The process we
>> are running is swapped in and out; interrupts are occurring; memory
>> refresh is occurring.  And, in general, when these things occur, we
>> are not really polling the state any more -- we are doing something
>> else.  And while these other things may complicate the numbers, they
>> are generally deterministic, not fundamentally random.
>>
>> ---
>> Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
>> Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM
>
>It all boils down to how long it takes you to roll up enough entropy to
>satisfy your needs.

First of all, an exponential wait will kill you.  To collect the next
bit of the difference between oscillator frequencies, you must wait
twice as long as you did for the last bit.  Just how long do you think
this can remain a reasonable source?
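
As a rough illustration (the 10 MHz reference here is just an assumed
example), a toy C program makes the exponential cost concrete: resolving
each further bit of the frequency ratio by naive cycle counting takes
about twice as long as the bit before.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double f_ref = 10e6;                  /* assumed 10 MHz reference */
        for (int bit = 1; bit <= 32; bit++) {
            double cycles  = ldexp(1.0, bit); /* ~2^bit cycles per bit    */
            double seconds = cycles / f_ref;
            printf("bit %2d: ~%.0f cycles, %9.3f s\n", bit, cycles, seconds);
        }
        return 0;
    }

By bit 32, the count alone takes over seven minutes on that reference.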

And then, the sequence you get is pretty much the same as it was the
last time you did it from the start.  The equipment has not changed,
the oscillators have not changed, their heating has not changed;
exactly what different thing about this can you expect to make the
result different?  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: Hardware RNGs
Date: Thu, 09 Nov 2000 21:40:40 -0800


Terry Ritter wrote:
> >Terry Ritter wrote:
> >>
> >> On Thu, 09 Nov 2000 15:34:56 -0800, in
> >> <[EMAIL PROTECTED]>, in sci.crypt David Schwartz
> >> <[EMAIL PROTECTED]> wrote:
> >>
> >> >Terry Ritter wrote:
> >> >
> >> >> There is another form of "jitter" which is just the expected
> >> >> relationship of a signal of one frequency sampling a signal of another
> >> >> frequency.  That occurs independent of quantum events, and has no
> >> >> continuing randomness at all.
> >> >
> >> >       That is not true.
> >>
> >> Your statement is false.
> >
> >       I stand by my statement, and I'll provide an example.
> >
> >       You have two digital clocks, one at about 10 MHz and one at about
> >20 MHz. They are uncorrelated, but we'll assume that their precise
> >frequency never ever changes.
> 
> Two signals of fixed frequency are inherently correlated, and we can
> use that if we start measuring them at the same time:  Something which
> happens at cycle 10,000,307 on the 10 MHz signal also happens at
> cycle 20,000,614 on the 20 MHz signal.  Nor do the frequencies have
> to be in an integer relationship.

        Why 20,000,614? Why not 20,000,613 or 20,000,615? Remember, the
frequencies are not precisely 10 MHz and 20 MHz. They are just perfectly
constant.
 
> >       You let the slower clock run for exactly 100 cycles and count the
> >cycles of the faster clock. You get 203. Now you let the slower clock
> >run for 10,000 cycles and count the cycles of the faster clock. You get
> >20,301. You keep repeating, using more and more cycles. When have you
> >exhausted the entropy?
> >
> >       So how can you say it has "no continuing randomness at all"?
> 
> For one thing, the next time you do it again from the start, you get
> just about the same result.  That does not sound very entropic to me.

        just about the same != the same

        You are falling into the fallacious chain of reasoning that entropy
requires that all results be equally probable. A messed-up room has tons
of entropy; the arrangement where everything is in its place is just very
improbable.

        Consider 70 particles each randomly placed in a room. The situation
where 35 particles are on the left and 35 on the right is far more
probable than the case where all 70 are on the left. That doesn't mean
there isn't entropy in the positioning.

        Yes, repeat the process and you will get "just about the same" result.
The entropy is in the "just about". If the two frequencies are
independent, odds are they'll never line up exactly the same way twice.
(In other words, if you start a trial at time T and you start another
trial at any other time, the two trial results will eventually differ).
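
        As a toy simulation of the counting experiment above (the ratio
here is invented, standing in for the true, unknown one), each longer
window reveals more digits, though each digit costs more counting time:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double ratio = 2.0301047271;  /* pretend true, unknown ratio */
        for (long n = 100; n <= 10000000; n *= 10) {
            long fast = (long)floor(ratio * (double)n);
            printf("%8ld slow cycles -> %9ld fast cycles (estimate %.10f)\n",
                   n, fast, (double)fast / n);
        }
        return 0;
    }

At n = 100 this prints 203; at n = 10,000 it prints 20,301, matching the
figures above, and every factor of ten in patience buys one more digit.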
 
> Clearly, your example is no example of randomness.  The statement you
> stand by is wrong.

        I'm not sure if you are being deliberately dense or really are dense. I
can't imagine how I can be any clearer or more obvious than I've been.

> >> >Ideally, the two frequencies are real numbers and
> >> >their exact ratio contains an unlimited amount of randomness.
> >>
> >> Conditionally true but impractical, since each additional bit of this
> >> ratio takes twice as long to detect.  Moreover, the ratio is
> >> approximately the same for every new use, so the same values will come
> >> tumbling out, which is hardly fundamental randomness.
> >
> >       I accepted a very strong restriction to demonstrate that a particular
> >weak point could be made even with it. You then demonstrate to me what
> >that restriction restricts. I know that. I didn't accept the restriction
> >as true, I simply said that even with this incredibly strong
> >restriction, your point (absence of continuing randomness) is _still_
> >wrong.
> 
> I have no idea what strong restriction you are talking about.

        The restriction being that the two frequencies are perfectly constant.

> Harvesting the difference between oscillator frequencies by simple
> comparison inherently requires exponential time.  If that is what you
> have "accepted," you would seem to have little other choice, except of
> course deception and dissembling.

        So, it requires exponential time, so what? Are you finally admitting
that the entropy is there and can be measured to any desired level of
accuracy?
 
> You cannot measure the variation that exists in crystal oscillators with
> software in a multitasking operating system, or in the presence of
> hardware interrupts, because the computer system itself will have far
> more variation than the oscillators.  That variation is, however,
> deterministic, and not fundamentally random.

        THAT MAKES NO DIFFERENCE. You can add any amount of deterministic data
you want to random data, and the data is still random. Suppose I have a
stream of truly random integers and I add a stream of predictable
integers to them; the final results are as unpredictable as the
unpredictable integers. All the predictable stuff does is add
predictable amounts of delay to the unpredictable amounts of delay.
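
        A minimal sketch of that claim (rand() merely stands in for a true
noise source here): an observer who knows the predictable part exactly can
subtract it out, and is then left guessing the noise with nothing gained.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned predictable = 0;
        for (int i = 0; i < 8; i++) {
            predictable += 1000;                /* known to everyone     */
            unsigned noise  = (unsigned)rand(); /* the unpredictable bit */
            unsigned sample = predictable + noise;
            /* subtracting 'predictable' recovers 'noise' -- and no more */
            printf("%u\n", sample);
        }
        return 0;
    }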

        Now, I've said this same thing at least four times now. And it's so
simple anyone can understand it. So your refusal no longer seems to be
any sort of legitimate difference of opinion or understanding. So I'm
starting to seriously question your motives.
 
> >> >Each time
> >> >you compare them digitally, it is an independent event that gives you a
> >> >better estimate of the real ratio. Thus each sample contains additional
> >> >entropy, but less and less.
> >>
> >> Even "independent events" may well be correlated.  For example,
> >> consider the situation with wrist watches:  Surely, every watch keeps
> >> a different time.  Yet if we ask "when will Bob's watch show 4PM," our
> >> best bet is that it will be very close to when our watch shows 4PM.
> >> So even though watches are not -- and *can* not -- be synchronized in
> >> an absolute sense, they are indeed *correlated* with other
> >> time-keepers.  Which, of course, is the whole point.
> >
> >       Irrelevant. Correlation decreases entropy but doesn't remove it. See
> >the example in my first paragraph.
> 
> Your example was wrong there, and is wrong here.

        You've yet to show how.
 
> One unknown frequency can be tested against another, but harvesting
> this requires exponential time.  We must continually invest twice as
> much time as the previous bit to get the next one.  And then we find
> the "random" sequence to be pretty much the same as the one we got the
> last time we did this.

        You are contradicting yourself. First you say that we do in fact keep
getting entropy, then you say we don't.
 
> >> A very similar situation occurs with independent crystal oscillators,
> >> with the exception that different frequencies will be involved.  But
> >> approximately what those frequencies should be will be known, and
> >> every bit we get out (in exponential time) further resolves the actual
> >> exact relationship, a relationship fixed by the particular devices in
> >> that equipment.
> >
> >       Right. So we keep getting more and more entropy out, even if the
> >frequencies _never_ change. Of course, in real life, the frequencies do
> >change, so we get _even_more_ entropy out.
> 
> Nope.  That is impractical because there is a continual exponential
> increase in the effort required to measure the difference ever more
> finely.

        Of course this is impractical; that's because we started with
impractical assumptions. I am not saying that you can design practical
systems from impractical assumptions. I'm saying that even with your
ridiculous assumption that the frequencies never change, you can still
get out as much entropy as you need. In other words, there is tons of
entropy there.
 
> Oscillator frequencies may drift, but they may drift in pretty much
> the same way they did the last time the equipment was turned on.  If
> that is your idea of "entropy," I would say that your vision is rather
> limited.

        You throw around words like "pretty much". What you don't see
(deliberately, I know) is that anything that can only be described in
statistical or approximate terms _is_random_ to the extent that it can
only be described in statistical or approximate terms.

        DS

------------------------------

From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: Hardware RNGs
Date: Thu, 09 Nov 2000 21:44:27 -0800


Terry Ritter wrote:

> First of all, an exponential wait will kill you.  To collect the next
> bit of the difference between oscillator frequencies, you must wait
> twice as long as you did for the last bit.  Just how long do you think
> this can remain a reasonable source?

        4096 bits of entropy is adequate for pretty much any purpose I can
imagine. In any event, in most real world cases, you get entropy as you
continue to work, so you really only have a problem with an entropy
shortage during startup.
 
> And then, the sequence you get is pretty much the same as it was the
> last time you did it from the start.  The equipment has not changed,
> the oscillators have not changed, their heating has not changed;
> exactly what different thing about this can you expect to make the
> result different?

        Their heating has changed. Two fans never spin up the same way twice.
The temperature in a room is never the same twice. The distribution of
heat over the surface of the crystal is never the same twice. As you
admit, it's "pretty much the same". The fact that someone can only
guess "pretty much" what it is is where the entropy lies.

        DS

------------------------------

From: "madrat" <[EMAIL PROTECTED]>
Crossposted-To: 
alt.lang.basic,alt.permaculture,alt.surfing,alt.surfing.europe.uk,aus.computers.linux,comp.os.linux.setup
Subject: Re: hacker...beware
Date: Fri, 10 Nov 2000 05:55:36 -0800

Sorry about that Mr hoarde...........................
I was using "you" as an alias whilst roaming around the countryside the
other day...........;)

madrat
X X X



------------------------------

Date: Fri, 10 Nov 2000 09:35:58 +0100
From: Runu Knips <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto
Subject: Re: MY BANANA REPUBLIC

"SCOTT19U.ZIP_GUY" wrote:
>   For those of you not in the US let me explain.
> The Clinton machine has decided to make GORE president.
> Many elections in my country are rigged. Chicago, Daley's
> city, is famous for having rigged elections. They used to
> have a saying: in Chicago the dead not only vote, they vote
> many times.
>  Apparently they didn't stuff enough ballots in Florida.
> They obviously added or exchanged more ballots in the last
> recount. But you can bet your sweet ass that the longer it
> takes to recount, the more the democrats will perfect the
> stuffing, until Gore wins. Cheating is a way of government
> in my country. But we have the balls to tell everyone else
> how to run an election. By the way, the democrats designed
> the ballot they're bitching about.
>  The next recount, if necessary, will be designed to give
> it to GORE.

If the elections in Florida had been rigged, the result
would be clear. Either that, or the people who manipulated
it are very incompetent losers.

And if the US had a reasonable election system, Gore
would be president now anyway.

Just my $0.02 of logic ...

------------------------------

Date: Fri, 10 Nov 2000 09:42:38 +0100
From: Runu Knips <[EMAIL PROTECTED]>
Subject: Re: About blowfish...

"Cory C. Albrecht" wrote:
> I'm looking at the source for SSLeay (0.9), and the blowfish algorithm
> specifically. I think I understand what it's doing, except for one part...
> 
> Depending on what one defines BF_ROUNDS to, you can get either 16- or
> 20-round encryption, and the struct for the keys is defined thus:
> 
>     typedef struct bf_key_st {
>         BF_LONG P[BF_ROUNDS+2];
>         BF_LONG S[4*256];
>     } BF_KEY;
> 
> My problem comes about in BF_set_key() when the initial key bf_init is
> copied into the initially empty key provided by the user - bf_init.P is
> only 18 long, seemingly made only for 16-round Blowfish. Anything more
> than 16 (+ 2) will get written into key.S, and then get overwritten
> later on when key.S is filled.
> 
> Can I get a 20-round version of bf_init from somewhere? Or is there a
> reason why it has too few? Why? (Feel free to imagine a little kid going
> "why! why! why! why!"... :-)

Blowfish is defined for 16 rounds.

Also, Blowfish is already extremely secure at 16 rounds, so why use 20?
AFAIK Blowfish would already be perfectly secure with 8 rounds.

Besides, using 20 instead of 16 rounds would IMHO be too small a step to
gain noticeably more security. If I doubted in any way that 16 were
enough, I would use at least 24.

If you really care about high security, better use Serpent, because
Blowfish has only 64-bit blocks.
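
For concreteness, a sketch of the layout behind the question (illustrative
only, not the SSLeay source): an R-round Blowfish needs R + 2 P-array
words, all of which -- like the S-boxes -- are initialized from the
hexadecimal expansion of pi.  The published tables stop at the standard
18 + 4*256 words, so a 20-round build would need the next four words of
pi for P[18..21].

    #define BF_ROUNDS 20
    typedef unsigned int BF_LONG;

    typedef struct bf_key_st {
        BF_LONG P[BF_ROUNDS + 2];  /* one word per round + 2 final XORs */
        BF_LONG S[4 * 256];        /* four 8x32-bit S-boxes             */
    } BF_KEY;                      /* P[18..21] lack initializers       */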

------------------------------

From: Paul Crowley <[EMAIL PROTECTED]>
Subject: Re: Rijndael question
Date: Fri, 10 Nov 2000 09:00:08 GMT

David Hopwood wrote:
> Alternatively you could use CFB mode, for example (see Applied Cryptography,
> 2nd edition or Handbook of Applied Cryptography), in which case only the
> encryption direction of the block cipher is needed.

Better yet, use CTR mode, which will almost certainly be made a standard
mode.  See

http://csrc.nist.gov/encryption/aes/modes/lipmaa-ctr.pdf

If you're designing your own protocol, of course, there are *many* other
security issues to consider, like message integrity.
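
For flavour, a minimal CTR sketch (block_encrypt here is a hypothetical
16-byte-block primitive, not a real API; a real design also needs a nonce
unique per message, and integrity protection on top):

    #include <stddef.h>

    /* assumed external block cipher, encryption direction only */
    extern void block_encrypt(const void *key,
                              const unsigned char in[16],
                              unsigned char out[16]);

    void ctr_xcrypt(const void *key, unsigned char ctr[16],
                    unsigned char *buf, size_t len)
    {
        unsigned char ks[16];
        while (len > 0) {
            block_encrypt(key, ctr, ks);   /* keystream = E_K(counter)   */
            size_t n = len < 16 ? len : 16;
            for (size_t i = 0; i < n; i++)
                buf[i] ^= ks[i];           /* same op encrypts, decrypts */
            buf += n;
            len -= n;
            for (int i = 15; i >= 0; i--)  /* big-endian increment       */
                if (++ctr[i] != 0)
                    break;
        }
    }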
-- 
  __
\/ o\ [EMAIL PROTECTED]
/\__/ http://www.cluefactory.org.uk/paul/

------------------------------

Date: Fri, 10 Nov 2000 09:18:50 +0000
From: Richard Heathfield <[EMAIL PROTECTED]>
Subject: Re: Announcement: One Time Pad Encryption - 0.9.3 - freeware

Tom St Denis wrote:
> 
> In article <[EMAIL PROTECTED]>,
>   Richard Heathfield <[EMAIL PROTECTED]> wrote:
> > Larry Kilgallen wrote:
> > >
> > > In article <8ue3fo$iun$[EMAIL PROTECTED]>, Tom St Denis
> > > <[EMAIL PROTECTED]> writes:
> > >
> > > > I would bet "secret agent behind enemy lines" would rather carry a
> > > > smart card with the cipher + 128-bit key embedded in it than a
> > > > computer + MASS storage device for the OT pad...
> >
> > Smart cards can be stolen, so this is insufficient security (although
> > it might be possible to add extra safeguards to make this idea work).
> 
> I would argue that your stealing of a card in my pocket is a bit less
> trivial than a remote online attack.  So I doubt that's a serious
> threat with smart cards.

Sorry, I didn't make it clear enough that I was talking specifically
about the "secret agent behind enemy lines" scenario. Are you sure
you're reading enough thriller novels? ;-)

<snip>
> >
> > A One Time Tape, however, might have its advantages. High storage
> > capacity, reasonably fast delivery of bits, and you could have the
> > tape marked with little yellow strips* across it, so that you could
> > cut the tape at a marked boundary, discard (sorry! DESTROY...) the
> > used part, re-splice the tape, and you're ready for the next message.
> >
> > My understanding is that the original implementation of the One Time
> > Pad algorithm did in fact use a tape, but I could be wrong about that.
> 
> The problem with an OTP is: how do we store the tape securely in the
> first place?  By your argument I could have just copied (instead of
> stolen) the tape and used it to decrypt all your messages covertly.

If you can get a copy of the tape without my knowing, then you can use
it to decrypt all *future* messages, yes. But not messages that were
transmitted before the copy.

The same weakness exists for actual pieces-of-paper-glued-together One
Time Pads, of course.
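
The mechanics under discussion, in miniature (a sketch only): the pad is
XORed onto the message, the same call decrypts, and the used section of
pad must be destroyed, never reused.

    #include <stddef.h>
    #include <string.h>

    void otp_xor(unsigned char *msg, unsigned char *pad, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            msg[i] ^= pad[i];    /* encrypt and decrypt are identical   */
        memset(pad, 0, n);       /* "cut off and destroy" the used tape */
    }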


-- 
Richard Heathfield
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
K&R Answers: http://users.powernet.co.uk/eton/kandr2/index.html

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Q: Computations in a Galois Field
Date: Fri, 10 Nov 2000 10:54:46 +0100



Paul Crowley wrote:
> 
> Mok-Kong Shen wrote:
> > > GF(2)^m is the space of vectors of bits.  For example, Rijndael mostly
> > > treats byte values as representing values from GF(2^8), but the affine
> > > transformation in the S-box can (AFAIK) only be sensibly defined in
> > > GF(2)^8 - i.e. treating the byte simply as a vector of bits and doing a
> > > matrix multiply followed by a vector addition.
> >
> > The diffusion property of Rijndael's substitution is, I
> > suppose, mainly dependent on the 1/x transformation, which
> > is done in GF(2^8) and which was the object of my original
> > question. As noted by others in a previous thread, the
> > affine transformation can seemingly be replaced by similar
> > ones without adverse effects. It would be nice if someone
> > could say something definite about these points and give
> > the corresponding explanations.
> 
> The purpose of the affine transformation is to make sure that the
> algebraic representation of the whole S-box is complex, to defeat
> interpolation attacks.  It has also been chosen so that the S-box has no
> fixed points (S(a) = a) and no "opposite fixed points" (S(a) = ~a).
> Section 7.2 of the Rijndael paper goes into this.

It seems, as I mentioned, that there are a number of
affine transformations that are just as good as the one
chosen in Rijndael. Could you please say something about
that? Further, I should appreciate knowing whether there
are other transformations in GF(2^8) that are just as
good as x -> 1/x.
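
For concreteness, the x -> 1/x map in question can be computed as x^254
in GF(2^8) with the Rijndael modulus x^8 + x^4 + x^3 + x + 1 (0x11B); a
minimal sketch, not taken from any reference implementation:

    static unsigned char gf_mul(unsigned char a, unsigned char b)
    {
        unsigned char p = 0;
        while (b) {
            if (b & 1)
                p ^= a;                       /* add a if bit of b set */
            b >>= 1;
            a = (unsigned char)((a << 1) ^ ((a & 0x80) ? 0x1B : 0));
        }
        return p;
    }

    static unsigned char gf_inv(unsigned char x)  /* 0 maps to 0 */
    {
        unsigned char r = x;
        for (int i = 0; i < 6; i++)   /* square-and-multiply: x^254 */
            r = gf_mul(gf_mul(r, r), x);
        return gf_mul(r, r);
    }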

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: On obtaining randomness
Date: Fri, 10 Nov 2000 10:54:39 +0100



Matt Timmermans wrote:
> 
> "Mok-Kong Shen" <[EMAIL PROTECTED]> wrote:
> > But I wanted simply to say that, on the assumption
> > that keystrokes of monkeys are truly random, the writings
> > of humans, since these could eventually be reproduced by
> > monkeys, are not the 'exact' opposite of randomness (i.e.
> > 'totally' deterministic).
> 
> On the assumption that the keystrokes of monkeys are random, there is _no_
> finite sequence of letters that could not, eventually, be reproduced by
> monkeys.  So what is special about books?

Books (and most other digitized material today) are easily
available, selectable through ISBNs etc., and generally cheap,
while the productions of the near relatives of humans are not.
Note that books contain hardly anything like structures that
can be concisely described by mathematics and thus be exploited
via mathematical means. The creativity of humans stems from the
complex network of the brain, in which the neurons work according
to certain physical laws. So one could say that in the end the
source of randomness concerned here comes from quantum
unpredictability, I guess.

> 
> > Actually this fact is entirely trivial. One buys books because
> > one wants to get something whose contents one doesn't
> > know for sure.
> 
> This leads to a more interesting argument -- the randomness and entropy of
> finite sequences is completely subjective.

For crypto applications, I am of the personal opinion
that randomness is at least relative to the opponent's
capability. If one, for example, shuffles an ordered
sequence such that the opponent, within his specific
capability, can't find any pattern in it to exploit, then
that sequence is random as far as he is concerned. Of
course, estimating the opponent's capability cannot be
entirely objective and is often very subjective.

M. K. Shen

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
