Cryptography-Digest Digest #126, Volume #13       Wed, 8 Nov 00 20:13:01 EST

Contents:
  Re: Regarding asymmetric encryption algorithms (Sundial Services)
  Re: Hardware RNGs (Steve Portly)
  Re: Hardware RNGs (David Schwartz)
  Re: Hardware RNGs (David Schwartz)
  Re: Regarding asymmetric encryption algorithms ([EMAIL PROTECTED])

----------------------------------------------------------------------------

Date: Wed, 08 Nov 2000 17:33:13 -0700
From: Sundial Services <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Regarding asymmetric encryption algorithms

You should plan to use a well-known algorithm, such as Rijndael, DES,
Twofish, or something like that -- with a well-understood cryptographic
system that you can get in source-code form.  Any of these algorithms
should produce the level of protection that you seek.

The time required to encrypt the data will not be a factor... not with a
500 MHz Pentium!  "Piling on" algorithm after algorithm, obtusity after
obscurity, won't add any more practical security; it will simply
increase the number of keys that your program has to manage, and is
therefore likely to mis-manage.

The most important factors in determining the actual security of your
data are "human" factors.  Key-management, selection of keys, handling
and storage of keys, are all far more important than the algorithm you
select... given that there are so many excellent algorithms out there to
be selected.

Almost no one breaks -through- a cryptosystem anyway.  They break
-around- it.



> I am not very familiar with encryption beyond a basic understanding of
> methods, algorithms, and terminology.  I am seeking a solid asymmetric
> encryption algorithm for the purpose of encoding small amounts of
> information ( generally less than 512, but sometimes up to 1024 octets ).
> The only necessity is a VERY high level of security: an established
> algorithm that has been attacked many times and has left the data
> sufficiently encrypted.  Least important is speed, beyond a reasonable
> level.  By reasonable, I mean a maximum of 15 minutes to encrypt 512
> octets on a roughly 500 MHz Pentium-class computer.  Of course, I
> understand that it's difficult to come up with something that is so
> specific, but as I said, that is the least important criterion.  Ease
> of implementation would also be desirable, but hey, I know you can't have
> everything!
> 
> Thanks for taking the time to read and answer this question.
> [B.R]

------------------------------

From: Steve Portly <[EMAIL PROTECTED]>
Subject: Re: Hardware RNGs
Date: Wed, 08 Nov 2000 19:23:27 -0500



David Hopwood wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
>
> Tim Tyler wrote:
> > David Hopwood <[EMAIL PROTECTED]> wrote:
> > : [EMAIL PROTECTED] wrote:
> > :> This reveals non-perfect entropy [...]
> >
> > : Perfect entropy was not claimed.
> >
> > AR:"Hashing does not increase entropy, whether one pass or multiple."
> >
> > PC:"No, of course not.  However, at least it doesn't *reduce* entropy."
>
> That's a misquote of what Paul Crowley wrote.
>
> Paul Crowley:
> > Alan Rouse:
> > > Hashing does not increase entropy, whether one pass or multiple.
> >
> > No, of course not.  However, at least it doesn't *reduce* entropy
> [context restored]
> > until you already have enough for your state to be unguessable,
> > unlike many preprocessing techniques suggested for entropy sources.
> > It's also much harder to get wrong. [...]
>
> Note "until you already have enough for your state to be unguessable."
> ~159 bits is unguessable.
>
> - --
> David Hopwood <[EMAIL PROTECTED]>
>
> Home page & PGP public key: http://www.users.zetnet.co.uk/hopwood/
> RSA 2048-bit; fingerprint 71 8E A6 23 0E D3 4C E5  0F 69 8C D4 FA 66 15 01
> Nothing in this message is intended to be legally binding. If I revoke a
> public key but refuse to specify why, it is because the private key has been
> seized under the Regulation of Investigatory Powers Act; see www.fipr.org/rip
>
> -----BEGIN PGP SIGNATURE-----
> Version: 2.6.3i
> Charset: noconv
>
> iQEVAwUBOgi8gzkCAxeYt5gVAQGWyQgAjHrn60E5Yey5W/v6jACdJRS1cMsx7a2r
> gA32Ydm/IVBz5YpG1H7SSMF8ZJk3ZuKqz91sIia2pAn/vswuXaaxVBqQaFKHXBXF
> GE+ecR0frstpszY0NyV32wa0nbZ+AqPXj2oCODKVCsYrQD4gbsUuWbwC5AfGyLk5
> a74jHwkEQitl4ZszVg2CtG8SLEgYcQqUiv8OX51O/Vj2RdM0O9s6Uezg64GdUpML
> 6XU9LZjERfIrSEjQB77517lF/to+dTxl4YQJnjq/epjLAZt7TffP59VuV6Q9S+uZ
> WvHYwE8mJRwrJgECo3eAz2I3nZ77vaUgy7K1NJhqD7oQyqvven06EQ==
> =Liim
> -----END PGP SIGNATURE-----

Reading the Pentium time stamp counter adds so little overhead that you can
sample it many times without bogging down your system.  I found that it takes
only about 40 microseconds to produce one bit of satisfactory entropy (your
mileage may vary).  I used a crude, home-grown frequency analyzer to check
where the distribution went flat.  My sample was a gigabyte, and every way I
tried to slice it I got a more even distribution than the reported results
from hardware cards.  If you are a programmer it is definitely worth
experimenting with.
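
If you want to try something similar without writing assembly, here is a
rough sketch in Python (not the program I used; time.perf_counter_ns()
stands in for reading the raw time stamp counter, and whether its low bits
carry any real entropy depends entirely on your hardware and OS):

# Rough sketch: sample a high-resolution timer, keep the low bit of each
# successive delta, pack bits into bytes, and check how flat the byte
# distribution looks. perf_counter_ns() stands in for RDTSC; results are
# platform-dependent and this is NOT a substitute for a real entropy test.
import time
from collections import Counter

def timer_bits(n_bits):
    bits = []
    last = time.perf_counter_ns()
    while len(bits) < n_bits:
        now = time.perf_counter_ns()
        bits.append((now - last) & 1)   # low bit of the interval
        last = now
    return bits

def pack(bits):
    return bytes(
        sum(b << i for i, b in enumerate(bits[j:j + 8]))
        for j in range(0, len(bits) - 7, 8)
    )

if __name__ == "__main__":
    data = pack(timer_bits(80_000))     # 10,000 bytes
    counts = Counter(data)
    expected = len(data) / 256
    # Crude flatness check: chi-squared statistic over byte values.
    chi2 = sum((counts.get(v, 0) - expected) ** 2 / expected for v in range(256))
    print(f"bytes: {len(data)}, distinct values: {len(counts)}, chi^2: {chi2:.1f}")

For roughly 10,000 bytes, a chi-squared value in the low hundreds is what a
flat distribution would give; a value in the thousands means the raw bits
need whitening before use.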



------------------------------

From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: Hardware RNGs
Date: Wed, 08 Nov 2000 16:41:21 -0800


[EMAIL PROTECTED] wrote:
> 
> > > Ummm, no. That will apparently happen within a single program, but if
> > > you would try a small experiment, and run a small program that reads
> > > from and writes to the hard drive continually, and a separate
> > > interactive program, you will see that this is simply not the case.
> >
> > Nonsense, it _is_ the case.
> 
> Then obviously we're using hardware of a different caliber. I am using a
> modern setup with DMA controllers. That means, quite simply, that the
> processor can go and do its own thing while, in this case, the hard
> disk controller reads from memory without affecting the speed of the
> CPU. The disk will not affect the speed of the program executing.

        I guess you didn't read my reply. When the CPU accesses the data
that was read, the caches will get blown out. This will cause the measured
code to run slower, causing the elapsed TSC counts to differ.
 
> >
> > > The
> > > program that reads/writes will be forced to wait for the hard drive, but
> > > only because the operating system offers a synchronous view of the
> > > system. The second application will not see the system the same way;
> > > the second program will see a system without the disk delay.
> >
> > Nonsense again. The other program will sometimes be interrupted for
> > disk I/O and sometimes not be, depending upon exactly when the disk I/O
> > completes. When the disk I/O completes, not only will the TSC be
> > advanced by the time to do the I/O but the code will run slower due to
> > the caches being blown out by the disk I/O code.
> 
> They will only be interrupted when an active I/O write occurs, and that
> is just the write to memory for the DMA controller to take care of.
> So when does the disk controller ever deal with the CPU? For a few bus
> cycles, when the CPU sends a signal to the controller with information
> on what to transfer where.

        All it takes is one cycle to flip the TSC bit.
 
> > > What you
> > > are perceiving as CPU delay is only an artifact of performing the
> > > measurements on a synchronous-appearing system, using the same process.
> > > If you separate them, you will notice that the read/write overhead is
> > > very, very small, and will not give you anything even remotely like
> > > good randomness.
> >
> > *sigh* Again, you make no sense. We aren't after "good" randomness. We
> > are just after sufficient randomness. A thousand bits where one in a
> > hundred of them is a '1' is a good entropy source if properly fed
> > through a cryptographically strong hash function.
> 
> No, it is a source of distillable randomness, for which a perfect hash
> function will work very well. A cryptographic hash function does not
> necessarily have this behavior; a cryptographic hash function's main
> property is that, given the hash, you cannot determine a value that
> generates that hash. A cryptographic hash also leaks, compared to a
> perfect distillation function, in that, as was also pointed out in this
> thread, a 160-bit hash function does not offer a full 160 bits of
> distillation room; it offers some lesser space.

        Any algorithm that converts more than 160 bits to 160 bits will lose
some entropy. There is no such thing as a "perfect distillation
function". Any function that maps a greater number of bits to a lesser
number of bits must have some cases where two different inputs produce
the same output.
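
        The loss is easy to see in miniature. A toy computation (Python,
with the output shrunk from 160 bits to 8 bits so the whole input space can
be enumerated; the choice of SHA-256 here is arbitrary) gives an output
entropy just under the 8-bit maximum:

# Toy illustration of the pigeonhole loss: feed every 12-bit input
# (uniformly likely) through a hash truncated to 8 bits and compute the
# Shannon entropy of the output. It comes out slightly below 8 bits,
# because collisions are unavoidable when many inputs map to fewer outputs.
import hashlib
import math
from collections import Counter

IN_BITS, OUT_BITS = 12, 8

counts = Counter()
for x in range(2 ** IN_BITS):
    digest = hashlib.sha256(x.to_bytes(2, "big")).digest()
    counts[digest[0] & ((1 << OUT_BITS) - 1)] += 1

total = 2 ** IN_BITS
entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
print(f"output entropy: {entropy:.3f} bits (maximum would be {OUT_BITS})")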
 
> > Right, this will give you some randomness for precisely the reason I
> > said it would, which is that some things a computer does are
> > unpredictable, such as disk I/O.
> 
> Disk I/O is one of the most predictable. For the disk I/O you are
> taking a few megabits/second measuring the time it takes to 1/66.667
> millionth of a second, and you are calling it random, after that has
> again been sampled at 100 MHz, (again at 200 MHz) and again at 500 MHz.
> Remembering that these are all within tolerances of the ideal value,
> you will get apparently random information, but no more so than having
> a user tap a button, that button signal is sampled, the sample sampled
> and the sample sample sampled, and compared in timing to the original.

        I don't understand what you're saying here. Are you saying it's
random or it's not?

> In fact it's the same thing, you are claiming that there is randomness
> involved here, I am telling you that where there is randomness it is
> not from when the user presses the button, it is from the oscillators
> in the middle.

        I agree. Nevertheless, there is randomness.
 
> > Did you even read the thread? Or not understand it? Or what? Of
> > _course_ it fails DIEHARD; it's not unbiased. That doesn't mean it's not
> > random!
> 
> Perhaps you need to read more carefully, the values I tested were
> distilled over the course of 3 days, not the few moments you seem to
> believe they were. This was not a test to see if the quick result was
> random, this was a test to see if there was enough randomness involved
> at all to do much of anything. Since I distilled over 3 days, and still
> repeatedly failed DIEHARD, it's a safe bet that this is far too slow to
> even count as marginal entropy.

        Define "distilled". When you say you "failed DIEHARD", do you have an
estimate for how much entropy you believe is in the data? For example,
have you developed a compression algorithm that can reproduce all the
raw data from fewer than, say, 1Kb of input?
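
        A crude upper bound is easy to get with a general-purpose
compressor. Here is a sketch (assuming the raw samples live in a
hypothetical file raw.bin; the compressed size only bounds the entropy from
above and says nothing about unpredictability):

# Crude entropy upper bound: if zlib can represent the raw samples in k
# bytes, the data cannot contain more than roughly 8*k bits of entropy.
# The converse does not hold: incompressible data may still be predictable.
import sys
import zlib

path = sys.argv[1] if len(sys.argv) > 1 else "raw.bin"   # hypothetical file name
with open(path, "rb") as f:
    raw = f.read()

packed = zlib.compress(raw, level=9)
print(f"raw: {len(raw)} bytes, compressed: {len(packed)} bytes "
      f"(upper bound ~{8 * len(packed)} bits of entropy)")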
 
> > A precision of 66 MHz would allow one-66th of a part per million to be
> > detectable in a second.
> 
> And comparing that to your "billionth of a second", that's not a very
> useful number, is it?

        It's perfectly serviceable.
 
> > > Most use PCI, that is either
> > > 33 or 66 MHz, then it will hit the system bus, where it will be either
> > > 66, 100, 133, or 150 MHz, and will likely be forced to the common
> > > granularity. After this it gets put in the CPU cache (again with
> > > granularity loss), then it is read by the CPU (again with granularity
> > > loss, this time by forcing it to match the CPU clock).
> >
> > Please explain to me how measuring something clocked at 66 MHz by a
> > 500 MHz clock results in a granularity loss.

> Because it is 66.667 MHz, being sampled by 500.000 MHz, they will very
> rarely coincide, making the sampling rate artificially drop.

        How can I put this bluntly -- that is just total bunk.

> We don't
> notice it normally because the synchronicity that forcibly becomes
> involved in taking 10 MHz, 100 MHz, or 1 GHz, forcing it to 66.667 MHz,
> and referring that to 150.000 MHz, sampling that at 500 MHz, is very
> destructive for the exact timing. Going from 10 to 66.667 MHz means
> that only 1/66667 clocks will coincide; all others must be delayed.

        Yes, and the amount of the delay will be unpredictable but bounded.
Since the bound of the delay is less than one clock at the source, the
delay does not cause us to lose information.

> Going to the next level, leaving the PCI bus, only 1/150000 will
> coincide, all others must wait. Sampling that again at 500 MHz, 4/5
> must wait, again losing granularity on your measurements. I haven't
> even begun to count the cache delays, the resistance of the wires,
> clock variance, etc, which will make it increasingly difficult to get
> an accurate measurement. Now, how exact did you want to make this
> measurement?

        You are not making any sense. I can't think of any good way to make it
clear how patently absurd this is. Perhaps someone else can do better.

        Let me try it one way that should help from a common sense standpoint.
You want to get the best possible measure of how much steak you have, to
an accuracy of one pound. Which will give you a more accurate reading, a
digital scale that reads in integer pounds or a digital scale that reads
in integer grams?

> > > If the system
> > > has an EV6 bus (or another advanced bus protocol) you can add at least
> > > one more granularity killer into that, all before you can measure it.
> >
> > These are not granularity killers at all. Going from a slower clock to
> > a bus at a faster clock loses nothing.
> 
> Only if they are perfect multiples; going from 200 MHz to 950 MHz, you
> will lose because some signals will be delayed.

        No, you will lose nothing. The delay of the signals is always much less
than the resolution of the original reading, so there is no loss.
 
> > > Computers are designed to eliminate all the randomness in their
> > > behavior that is possible; pretending it is another way does not
> > > change anything.
> >
> > That's a nonsensical claim.
 
> So it doesn't make sense to you that designing something that will fail
> randomly, randomly read the wrong location (or the right one for that
> matter), and generate random timing is better than designing something
> where the exact behavior at each instant is known? Somehow I hope they
> don't think like you; otherwise all the answers that modern chips give
> on arithmetic would make FDIV look minuscule.

        *sigh* You are just being ridiculous. Random behavior is not the same
as random results. Computers are generally programmed so that the
randomness in their behavior only affects the results where this is
desired. But believe me, I can easily write programs whose behavior you
could never predict. It makes no sense to force any more order than you
actually need.

        Since nobody is expected to care about the precise clock cycle in
which the reception of a network packet generates an interrupt, no effort
is made to make this predictable.

        If what you say is true, the oscillators that drive our buses should
be oven-compensated. But they're not. Why do you think that is? Hint: the
increase in predictability of operation doesn't justify the cost.
 
> > Actually, the reverse seems to be true. The more sophisticated drives
> > are more likely to have independently clocked agents, such as a cache
> > management processor. Cheap IDE drives, however, seem to be fully
> > synchronous to the bus clock. Nevertheless, there's entropy in the
> > variation in the rotational speed of the disk as measured by that
> > clock.
> > However, how much entropy there is seems to be in dispute.
> 
> Again the claim that the better/more expensive it is, the worse it's
> made?

        Huh? I don't understand how you drew that conclusion. Are you
disputing my claim that cheaper IDE drives are less likely to have
independent clocks than more expensive drives? Have you looked at a
handful of drives?

> > > Unfortunately taking the clock drift
> > > between the HD cycle and the system cycle won't generate entropy, it
> > > will only express the entropy of another influence, most likely
> > > temperature, which is now very carefully controlled on virtually all
> > > systems. That is not to say that there isn't any entropy available
> > > there, just that you'll have to work very hard to get enough of it to
> > > be useful.
> >
> > The temperature is not carefully controlled enough to restrict
> > oscillator drift. It still drifts by a few parts per billion, which is
> > enough to be measured. If you can take active steps to measure it, you
> > can grab about 4 good bits of entropy per second.
> 
> You can only measure it with the assuredness of the closest clock
> split, and in modern systems there are a fairly large number of them.
> You may be able to gather entropy using timings in these systems, but
> the amount of it is going to be astronomically small.

        When you have two independent clocks, as you do with the bus clock
and the network card clock, the amount of entropy between their rising and
falling edges is massive. There is a legitimate question about how much of
this entropy you can actually gather.

        Even if the two clock frequencies are constant, but real values, you
gain more and more information about these real values (and hence more
entropy) the more samples you take. By the time you've mined all the
entropy in those two real values (and successive samples become more and
more predictable) they've drifted due to temperature, giving you two new
real values whose ratio contains new entropy.
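
        One cheap way to play with this from user space is to compare two
clocks the operating system already exposes. Here is a sketch; whether
time.monotonic_ns() and time.perf_counter_ns() are really driven by
independent oscillators, and therefore whether any of this is real entropy,
depends entirely on the platform:

# Sketch: sample two OS clocks that may be driven by different hardware
# sources and keep the low bit of the difference between their increments.
# Treat the output as raw, biased material to be hashed, not as entropy.
import time

def clock_ratio_bits(n):
    bits = []
    m0, p0 = time.monotonic_ns(), time.perf_counter_ns()
    while len(bits) < n:
        m1, p1 = time.monotonic_ns(), time.perf_counter_ns()
        bits.append(((m1 - m0) - (p1 - p0)) & 1)  # jitter between the two clocks
        m0, p0 = m1, p1
    return bits

if __name__ == "__main__":
    sample = clock_ratio_bits(1000)
    print(f"ones: {sum(sample)} / {len(sample)}")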

        DS

------------------------------

From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: Hardware RNGs
Date: Wed, 08 Nov 2000 16:48:01 -0800


Steve Portly wrote:

> Reading the Pentium time stamp counter adds so little overhead that you can
> sample it many times without bogging down your system.  I found that it takes
> only about 40 microseconds to produce one bit of satisfactory entropy (your
> mileage may vary).  I used a crude, home-grown frequency analyzer to check
> where the distribution went flat.  My sample was a gigabyte, and every way I
> tried to slice it I got a more even distribution than the reported results
> from hardware cards.  If you are a programmer it is definitely worth
> experimenting with.

        The problem is, unless you can come up with a satisfactory
theoretical explanation of the source of this randomness, all you can say
is that the samples appear random and that you have one bit of apparent
entropy. It may be that the data appears random, but is actually
predictable based upon sufficient knowledge of the initial conditions
(what's in the caches) and the set of rules that govern their behavior
(wait states, frequency ratios, cache sizes and associativity, and so on).

        More entropy cannot come out than is going in. So it's important to
identify the sources coming in and quantify how much entropy they're
providing. Even that's not enough, because there's the question of how
much less entropy came out than went in.

        However, from a practical standpoint, there appear to be massive
amounts of entropy in this data, far more than can be theoretically
justified. And there doesn't seem to be any way to predict it. However,
that may just mean that an attacker needs to be more sophisticated. See,
for example, how relatively incompressible the output of
http://youknow.youwant.to/~djls/entropy.c seems to be.

        DS

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Regarding asymmetric encryption algorithms
Date: Thu, 09 Nov 2000 00:51:57 GMT

Well, I think we can help you out on that. First, though: are you sure
you can tolerate waiting that long (15 minutes!!!!!) for encryption?

If you can, then the solution is very simple. Use RSA; it has been
continually attacked for a couple of decades now, without any results
at the level I'm suggesting. What I'm suggesting is this:
This needs to be done once, at most once a year
{
p, q, e different primes of 513 BYTES each (with e not dividing
(p-1)(q-1), so that d exists)
compute d from d*e = 1 mod (p-1)(q-1)
}
For each message, pad the message to 1024 bytes using fairly random
digits; with this monster you're not going to have to be concerned
about attacks on the system. The result is a message M of 1024 bytes.
Compute:
X = M^e mod pq
This is very time consuming.
For decryption compute
M = X^d mod pq
which is again very time consuming.
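
To make the recipe concrete, here is a rough sketch in Python (not
production code: the primality test and the random padding are simplistic
stand-ins for what a real implementation would do, and at the full 513-byte
size the key generation will take a long time in pure Python):

# Sketch of the scheme above: 513-byte primes p, q, e; d = e^-1 mod (p-1)(q-1);
# messages padded to 1024 bytes with random bytes; X = M^e mod pq.
# Shrink PRIME_BYTES for a quick test (MSG_BYTES scales with it automatically).
import secrets

PRIME_BYTES = 513
MSG_BYTES = 2 * PRIME_BYTES - 2      # 1024 when PRIME_BYTES is 513

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(nbytes):
    bits = nbytes * 8
    while True:
        cand = secrets.randbits(bits) | (1 << (bits - 1)) | 1  # full length, odd
        if is_probable_prime(cand):
            return cand

def keygen():
    p = random_prime(PRIME_BYTES)
    q = random_prime(PRIME_BYTES)
    phi = (p - 1) * (q - 1)
    while True:
        e = random_prime(PRIME_BYTES)
        if e not in (p, q) and phi % e != 0:   # prime e, so gcd(e, phi) == 1
            break
    d = pow(e, -1, phi)                        # modular inverse (Python 3.8+)
    return (p * q, e), (p * q, d)

def encrypt(pub, message: bytes) -> int:
    n, e = pub
    assert len(message) <= MSG_BYTES
    padding = secrets.token_bytes(MSG_BYTES - len(message))  # "fairly random digits"
    m = int.from_bytes(message + padding, "big")
    return pow(m, e, n)

def decrypt(priv, x: int, msg_len: int) -> bytes:
    # The message length is passed out of band here; a real scheme would
    # encode it in the padding.
    n, d = priv
    m = pow(x, d, n)
    return m.to_bytes(MSG_BYTES, "big")[:msg_len]

At full size, the two modular exponentiations are roughly the cheap part;
almost all of the time goes into finding the three primes.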

I will come right out and state very flatly that I would expect this
arrangement to be impervious to everyone, everything, and any
combination of massive money and astounding brilliance for a very long
time. Currently the record for factoring (which is what we believe would
be required to break RSA of this size) is 64 bytes. This would be many
orders of magnitude stronger, and would be equivalent to 200+ bit
perfect symmetric encryption. The current believed-secure level is 90
bits, so it would be 2^100+ times as strong.

If this isn't secure enough for you, you could go with very large ECC
fields for encryption, but I'm not as familiar with the strength
supplied by this (comments welcome). I can, however, say that currently
ECC is easily secure at 200 bits, and you'd again be looking at 8192,
making it secure effectively for eternity. It would, however, be
significantly slower.

If you can't have enormous encryption/decryption times, there are other
options that are likely to be as secure. Just to give you an estimate
of the times you're looking at: encryption/decryption takes less than 1
second at 4096 bits, so you'd probably be looking at under 5 seconds for
encryption/decryption under RSA.
                  Joe


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
