Cryptography-Digest Digest #874, Volume #10       Sun, 9 Jan 00 23:13:01 EST

Contents:
  Re: Intel 810 chipset Random Number Generator (Bradley Yearwood)
  Re: compression & encryption (Kenneth Almquist)
  RSA Math Software ("Bret Johnson")
  Re: How about this for a "randomly" generated bitstream? (David Kessner)
  CodeCracker game ("Zuldare")
  Re: RSA Math Software ("Jeff Moser")
  Re: Intel 810 chipset Random Number Generator (Vernon Schryver)
  DH parameters ("JH")

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Bradley Yearwood)
Subject: Re: Intel 810 chipset Random Number Generator
Date: 9 Jan 2000 15:08:47 -0800

In article <859sqo$5go$[EMAIL PROTECTED]>,  <[EMAIL PROTECTED]> wrote:
>
>Apparently there is no reliable way of detecting the RNG.
>You read 1 bit from a pre-defined memory location to see if
>it is there, and read from other locations to get the random
>data. You are on your own to figure out whether the data is
>really random enough that it is likely to be coming from
>the RNG.

Yes, but it is common sense to run some statistical tests on any
claimed hardware random source before actually using it, and perhaps
also concurrently with use.  An advantage of having this source integrated
onto a chip is that it seems less likely to fail than the components of and
connections to some external device.
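
As a rough sketch of the kind of checks I mean (Python, with
read_rng_bytes() standing in for however the raw bytes are actually
pulled off the device, and the thresholds chosen loosely for
illustration):

    def monobit_fraction(data):
        # Fraction of 1 bits; should be very close to 0.5 for random data.
        ones = sum(bin(b).count("1") for b in data)
        return ones / (len(data) * 8)

    def chi_square_bytes(data):
        # Chi-square statistic over byte values (255 degrees of freedom);
        # values far outside roughly 180..340 are suspicious.
        counts = [0] * 256
        for b in data:
            counts[b] += 1
        expected = len(data) / 256
        return sum((c - expected) ** 2 / expected for c in counts)

    sample = read_rng_bytes(1 << 20)    # 1 MiB of raw device output
    assert abs(monobit_fraction(sample) - 0.5) < 0.001
    assert 180 < chi_square_bytes(sample) < 340

Checks like these only catch gross failures -- a stuck or heavily
biased source -- and prove nothing about cryptographic quality, but
they cost almost nothing to run concurrently with use.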

>It warns about multi-threaded apps using the RNG, but
>nothing about different apps using the RNG. Apparently
>bad things can happen if 2 apps try to use it at once.
>
>This looks pretty brain-damaged to me.

Not really brain damaged.  This problem is intrinsic to any concurrent
single-server multiple-consumers situation.  You need an interlock and/or
queueing to serialize access.

The hardware guys did OK in describing how to get to the raw registers.  Their
warning against simplistic multi-threaded use was unnecessary to anyone
familiar with realtime software or OSs.  These days, so much seems driven
by avoidance of potential liability, and so little by common sense, that the
presence of the warning is understandable.


------------------------------

From: Kenneth Almquist <[EMAIL PROTECTED]>
Subject: Re: compression & encryption
Date: 10 Jan 2000 00:51:38 GMT

Kenneth Almquist <[EMAIL PROTECTED]> wrote:
> Tim Tyler <[EMAIL PROTECTED]> wrote:
>: Compressing using a nonbijective algorithm and then encrypting twice
>: may be faster than compressing using a bijective algorithm and then
>: encrypting once.
>
> Multiple encryption with independent algorithms and bijective
> compression produce different types of security benefit.

The security benefit in both cases is that the attacker has less
information about the input to the encryption algorithm.  (In the
case of double encryption, I am referring to the input to the
second encryption.)

>: That is because some of the compression algorithms
>: which provide the best combination of speed and compression ratio
>: (such as the one used by gzip) are not bijective.
>
> Bijective compression has only just been invented.  The current
> situation /should/ eventually reverse - since making a compression
> system bijective demonstrably makes optimal use of the range of the
> compressor.

The algorithms with the best compression ratios are non-bijective only
because of redundancies between the contents of the compressor output
and the length of the compressor output.  Since the length N of the
compressor output can be represented in log_2(N) bits, this redundancy
wastes at *most* log_2(N) bits.

>: I believe that except in special circumstances, it is a mistake to
>: assume that an opponent will not be able to mount a known plaintext
>: attack. [...]
>
> Well yes - when analysing security, consider worst-case scenarios.
>
>: For this reason, I believe that the apparent improvement
>: in security offered by bijective compression is an illusion.

> There is no sign of logical argument, though.  Compression reduces
> the effectiveness of partial known plaintexts.  It makes little
> difference to *complete* known plaintexts.  However, the former
> are very common.  Defending against them is likely to be genuinely
> worthwhile, if you do not know for certain what attackers can do
> with any known plaintext you give them.

This is true of all types of compression.  I was addressing the
question of whether bijective compression offers more security than
non-bijective compression.

>: If your encryption algorithm is vulnerable to a known plaintext
>: attack, then the chances are that a determined attacker will obtain
>: a known plaintext and break your cipher, so it is not likely to matter
>: whether you use bijective compression.
>
> A known plaintext breaks a message key, not a cypher.  If the attacker
> faces a subsequent message, with a partial known plaintext, certain 
> types of compression can make a *big* difference to the attacker's
> ability to exploit this.

Here you appear to be assuming that each message is encrypted with a
different key, which raises the question of how the message keys are
distributed.  If the key distribution channel is protected only
through encryption, then an attacker who breaks a single message key
is in a position to mount a known plaintext attack against the key
used to encrypt the message keys.  But if different encryption
algorithms are used for message encryption and key distribution, the
use of separate algorithms provides some redundancy against attack.

To return to your point:  I agree that the compression algorithm
does make a difference.  I assume by partial plaintext you mean that
the attacker knows a section of the plaintext, but not what comes
before or after it.  Scott's web site describes a bijective compression
algorithm which uses a predefined mapping of strings to symbols.  If
this is the compression algorithm used, then an attacker mounting a
known plaintext attack on the encryption algorithm generates a list
of possible compressions of the known section of plaintext.  There
will be several possibilities because the compressed form of the
first few bytes of the known plaintext may depend on the preceding,
unknown text.  (The symbol which encodes the last few characters of
the text preceding the known plaintext might also encode a few
characters of the known plaintext.)  Now the attacker needs to
perform multiple known plaintext attacks, both because he does not
know exactly where in the cyphertext the encrypted section of known
plaintext occurs, and because the known portion of the plaintext
could result in several different outputs from the compressor.

Switching to a different compression algorithm--an adaptive algorithm
which alters its compression tables based on previous input--would
make this type of attack much harder.  Another approach is to run a
mixing pass after compression.  Take the Square encryption algorithm
and run it in CFB mode over the input, using an IV of zero.  This
ensures that every bit of the output depends on every bit that
precedes the corresponding bit of the input.  Since Square operates
on 128 bit blocks, it transforms the section of known plaintext in
one of 2^128 ways.  This makes the attack described in the preceding
paragraph useless, even if the attacker is somehow able to determine
the encryption key used with Square.

I use Square in the above construct because it's fast compared to
compression algorithms or compared to more heavily analyzed encryption
algorithms such as triple-DES or IDEA.  So for a relatively small
increase in processing time, you can get diffusion via a published
algorithm which has been studied by professional cryptographers,
rather than relying on the diffusion provided by a compression algorithm
whose cryptographic properties were probably not even investigated by
the original designers.
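
To make the construction concrete, here is a sketch in Python.  It
substitutes AES for Square only because packaged implementations of
AES (also a 128 bit block cipher) are easier to come by, and it
treats the mixing key as fixed and public, which the argument above
allows (the attack stays blocked even if the attacker knows the key):

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    MIX_KEY = bytes(16)      # fixed, non-secret mixing key (illustrative)
    ZERO_IV = bytes(16)      # IV of zero, as described above

    def mix(compressed):
        # CFB over the compressor output: each output block depends on
        # every preceding input block.
        enc = Cipher(algorithms.AES(MIX_KEY), modes.CFB(ZERO_IV)).encryptor()
        return enc.update(compressed) + enc.finalize()

    def unmix(mixed):
        dec = Cipher(algorithms.AES(MIX_KEY), modes.CFB(ZERO_IV)).decryptor()
        return dec.update(mixed) + dec.finalize()

The pass is trivially reversible by anyone, so it adds no secrecy on
its own; its job is only to diffuse the known section of plaintext
across the compressor output before the real encryption is applied.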

Note that whether the compression algorithm is bijective is irrelevant
to this part of the discussion.

>: Sure, it's *possible* for bijective compression to stop an attack on
>: a cryptosystem, just as it's possible that prefixing the encrypted text
>: with an obscene message will stop an attacker with delicate sensibilities.
>: My point is that "possible" is not the same as "probable."
>
> Unfortunately, you present no clear argument about the relative costs.
>
> Yes, you should consider the expenditure on relevant components of
> your system before deploying it.  Compression is a good idea before
> encryption for so many reasons, that it is quite likely you will be
> employing it in some form anyway - the bijective compression enthusiasts
> ask only that it be done properly.

I mentioned the algorithm used by gzip, which (at least a few years
ago when I last looked at compression algorithms) was very attractive
in terms of throughput vs. compression ratio.  It is not bijective,
and is not readily modified to make it bijective.  If you eliminate it
from consideration for that reason, then you are left with alternatives
which compress less effectively, are significantly slower, or both.

Now consider a user who is short on disk space and compresses his
files using (what else?) gzip, and later transfers one of these files
to another system.  The file transfer program may use bijective
compression, but the file has already been compressed by a non-
bijective compression algorithm, introducing information into the file
that could help an attacker.  You could address this problem by making
the file transfer program check for gzip-ed files and undo the gzip
compression before transmission, but some of the questions that a
developer would have to consider are:

 1)  If we uncompress the entire file and store it to disk before
     transmission, what happens if the user doesn't have enough disk
     space to store the uncompressed file?

 2)  If, to address the above question, we run the gunzip program
     in parallel with the file transfer, does this introduce a covert
     timing channel?  (The reason that the user has to run a file
     transfer program in the first place, rather than just using
      a distributed file system to access his files wherever they
     may be, is that he wants to be able to run untrusted programs
     such as gzip, which might leak information through the timing
     of their file accesses.)

 3)  The receiving system has to know whether or not to perform gzip
     compression on the received file.  A single bit would suffice
     to indicate this, but more likely we would allocate a byte in
     order to allow additional compression programs to be supported
     in the future.  Might this byte actually help an attacker more
     than the information added to the file by gzip?

 4)  What happens if the user transfers a data file which by chance
      begins with the same magic number as gzip files, even though
      the file was not produced by gzip?

This should at least begin to point to the costs involved in ensuring
that compression is "done properly" in a system when the system
designer doesn't have control over the data to be transmitted.
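
For what it is worth, the detection itself is the easy part: gzip
output begins with the two magic bytes 0x1f 0x8b, so the check in
question 4 amounts to (Python sketch):

    GZIP_MAGIC = b"\x1f\x8b"

    def looks_like_gzip(path):
        # Any file that merely happens to start with these two bytes
        # will also pass -- the false positive raised in question 4.
        with open(path, "rb") as f:
            return f.read(2) == GZIP_MAGIC

A more robust test is to attempt the decompression and fall back to
treating the file as opaque data if it fails, but that only trades
one set of corner cases for another.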

> Designers of cryptosystems designed to ensure privacy of mail
> messages should cover all possible approaches to the best of
> their ability.
>
> Neglecting something as fundamental as depriving cryptanalysts of
> their raw material in the form of regularities in the plaintext by
> using good compression would probably be verging on the severely
> negligent.

Compression deprives the attacker of regularities in the plaintext in
two ways.  First, it eliminates some of the regularities.  That's
"some," not "all."  As I recall, the entropy of Engish text has been
estimated to be somewhere between two and three bits per character.
Standard compression algorithms miss a lot of this regularity.
Furthermore, even if you were to invent a compression algorithm that
actually eliminated all of the regularities from English text, you
would then face the problem that the system's users might decide to
start sending each other MIME messages or HTML messages, and the
compressed messages would again contain regularities.  In short, in a
system where users are allowed to encrypt whatever they want to, you
cannot eliminate regularities from the output of the compression algorithm, or
even make any guarantees about the quantity of such regularities.

Second, compression makes the remaining regularities difficult to use.
This can be observed, for example, by compressing a file twice, first
with a relatively poor compression algorithm and then with a better
one.  The first compression algorithm will probably obscure the
regularities that it doesn't remove, so that the second compression
algorithm won't be able to find them.
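
A quick way to see this effect (Python; the file name is only a
placeholder):

    import bz2, zlib

    data = open("sample.txt", "rb").read()
    once = bz2.compress(data)                      # good compressor on the raw text
    twice = bz2.compress(zlib.compress(data, 1))   # weak pass first, then the good one
    print(len(data), len(once), len(twice))        # 'twice' typically ends up larger

The weak first pass removes little redundancy outright, but it
scrambles what remains enough that the stronger compressor can no
longer find it.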

The first of these strengthens security, but the question is how much.
This is hard to answer because we don't know how much regularity is in
the original input, how much regularity the compression will remove,
or how much regularity the attacker actually needs to mount a
successful attack.

Whether the second increases security depends on whether the remaining
regularities are actually obscured effectively, and I think it is hard
to be confident that that is the case because compression algorithms
are not designed to be cryptographically secure, and have not been
subject to the sort of scrutiny by cryptographers that published cyphers
have.  Furthermore, it is hard to even specify what it means for a
compression algorithm to obscure regularities in a cryptographically
secure manner, since any compression can be efficiently reversed.

In contrast, block encryption algorithms are expressly designed to
(among other things) obscure regularity.  Thus I am much more willing
to trust a suitably chosen block encryption algorithm to obscure all
regularities than I am willing to count on a compression algorithm
to either remove enough regularity or to obscure the regularity that
it does not remove.

Sorry I couldn't seem to make this more concise.
                                Kenneth Almquist

------------------------------

From: "Bret Johnson" <[EMAIL PROTECTED]>
Subject: RSA Math Software
Date: Mon, 10 Jan 2000 01:14:25 GMT

Hello, I am new to this group and to the subject.
After reading Simon Singh's _The Code Book_ I was trying to set up MS Excel
to follow his description of how RSA works.  I have found that Excel does
not have all the functions I need to play with this.

Could you recommend cheap or free software that would do this?

Thanks.





------------------------------

From: David Kessner <[EMAIL PROTECTED]>
Subject: Re: How about this for a "randomly" generated bitstream?
Date: Fri, 07 Jan 2000 16:43:53 -0700
Reply-To: [EMAIL PROTECTED]

> In article <853ocm$[EMAIL PROTECTED]>, Guy Macon wrote:
> >One problem:  an audio recording that is accurate to the LSB sounds
> >considerably worse than one with 1 LSB of random noise added.  Because
> >of this, it is common to add 1 LSB of random noise.  In the old days
> >this was usually resistor noise, but nowadays it is cheaper to write a
> >pseudorandom generator program on the DSP chip in your mixing board.
> >I doubt that the pseudorandom generator programs are at all suitable
> >for crypto.  (At last!  My membership in the Audio Engineering Society
> >finally paid off!)

Ummm...  I assume that you are talking about audio dithering.  If
this is the case, then there should be a little clarification for
those non-audio people here.

You don't add 1 LSB of random noise, you add "a small amount of
noise that is <1 LSB".   Further, this is only done when taking
audio from N bits down to N-M bits (I.E., from 24 bit audio to
20 bit audio).  And in this case, the LSB is the last bit in the
N-M bit data, not the original N bit data.

To put this in practical terms, let's say that we're
converting 24 bit audio into 16 bit audio.  And, for the
sake of discussion I will ignore what to do with the sign
bit(s).

When converting from 24 to 16 bit audio, we take the original
24 bit sample and add an 8-bit random number.  Remember that
the "LSB" is the 16th bit (from the left), so our 8-bit random
number is <1 LSB.  This produces our new sample.  This is
repeated for every sample, but with a different random number
(obviously).
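
In code, the step just described looks roughly like this (Python;
sign handling is ignored as above, and the random module merely
stands in for the dedicated generators discussed below):

    import random

    def dither_24_to_16(sample_24):
        # Add an 8-bit random value below the new LSB, then keep the
        # top 16 of the 24 bits.
        noisy = min(sample_24 + random.getrandbits(8), 0xFFFFFF)  # clip at full scale
        return noisy >> 8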

This has the effect of masking the quantization noise.  Under
some conditions the quantization noise would sound very non-
random.  But with this method of dithering the quantization noise
is basically white noise and thus less objectionable to our
ears.

This has a very different effect than "taking a 16 bit audio
stream and adding a random 1-bit stream to it": the result is
much more subtle and much less noisy.

The random numbers used in this process need to be very
random.  The human ear has an uncanny ability to pick out
bad noise sources.  Typically, multiple LFSR PRNGs are used
with bit-lengths of >64 bits.  Each PRNG will generate one bit
of that 8-bit number.  
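
A sketch of that arrangement (Python; the 64-bit tap set below, bits
64/63/61/60, is a commonly listed maximal-length choice used purely
for illustration, and the seeds are placeholders):

    class LFSR:
        def __init__(self, seed, taps=0xD800000000000000, width=64):
            self.state = (seed & ((1 << width) - 1)) or 1   # never all zero
            self.taps = taps

        def next_bit(self):
            out = self.state & 1
            self.state >>= 1
            if out:
                self.state ^= self.taps
            return out

    lfsrs = [LFSR(seed) for seed in range(1, 9)]   # one generator per output bit

    def dither_byte():
        return sum(gen.next_bit() << i for i, gen in enumerate(lfsrs))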

The resulting 8-bit number is then sometimes run through a 
digital filter that cuts out some of the frequency content 
around 1 KHz-- where the ear is the most sensitive-- and
boosts the energy at the high and low frequencies where the
ear is the least sensitive.  There are professional audio 
products on the market that use a single DSP for only generating
these random numbers!

I should stress that this is only done when converting from one
word size to a lower word size.  It is not done when going to a
higher number of bits or when the word size stays the same.

What this means for the crypto guy is unclear.  All in all, I
would not use an audio source for crypto uses, but this is just
_MY_ personal bias.

David Kessner
[EMAIL PROTECTED]

------------------------------

From: "Zuldare" <[EMAIL PROTECTED]>
Subject: CodeCracker game
Date: Mon, 10 Jan 2000 03:24:41 GMT

            Forgive the non-technical post, but I am looking for a simple
Windows game that is a couple of years old, called CodeCracker.  It is
freeware, a color code game kind of like Mastermind.
                I have gone to dozens (truly dozens and dozens) of
"download" sites and get the "This page cannot be displayed" error from
everywhere.  Does anyone have this game or know a valid download site?
Thanks (I do not need the PalmPilot version).



Zuldare





------------------------------

From: "Jeff Moser" <[EMAIL PROTECTED]>
Subject: Re: RSA Math Software
Date: Sun, 9 Jan 2000 22:30:43 -0500

> Hello, I am new to this group and to the subject.
> After reading Simon Singh's _The Code Book_ I was trying to set up MS Excel
> to follow his description of how RSA works.  I have found that Excel does
> not have all the functions I need to play with this.
>
> Could you recommend a cheap or free software that would do this?
>
MuPAD (http://www.mupad.de) has all that you need and more.  There is a
free version of it, too.

--

Jeff



------------------------------

From: [EMAIL PROTECTED] (Vernon Schryver)
Subject: Re: Intel 810 chipset Random Number Generator
Date: 9 Jan 2000 20:40:26 -0700

In article <[EMAIL PROTECTED]>,
Roger Schlafly <[EMAIL PROTECTED]> wrote:

>> As far as the hardware can tell, there is no consistent difference between
>> a "multi-threaded app" and an operating system running "2 apps."
>
>So why does the Intel manual suggest a way to use the RNG in
>a multi-threaded app, but not give a clue about how to get it to work
>for 2 apps?

Why?--probably because the Intel technical writers know and understand
what I wrote, that as far as the hardware is concerned, there is no
relevant difference between "a multi-threaded app" and "2 apps", and that
the industry standard term for the situation is "multi-threaded" or perhaps
"multiprocessing," but not "2 apps."

Intel data sheets have for decades been written for people who don't
understand much about software.  The best way to communicate the issue to
such people is to use the simplest concepts that absolutely everyone should
understand.  With the popularity of WIN32, more people understand concurrency
problems in terms of a single, multi-threaded process than in terms of
multiple processes as seen by an operating system.  Anyone smart enough
to deal with the operating system issues (i.e. "2 apps") immediately sees
what Intel is referring to.

In other words, carping about the differences between "2 apps" and
"multi-threaded app" is too smart by half, and suggests the critic himself
does not understand the issue.


By the way, someone else wrote that such concurrency hassles are inevitable
with a random number device.  That's wrong.  If the device could deliver
random bits as fast as they could be asked for by the CPU(s), then no
locks, semaphores, barriers, queues, mutexes, etc. would be needed.  Each
thread of execution (whether a process, kernel thread, user-mode thread,
or some other abstraction) could blithely grab random bits.   Of course,
delivering random bits as fast as input or memory read instructions can
ask for them is not a small "if."
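
Absent that kind of bandwidth, the mediation is nothing exotic; it is
conceptually just a lock around the device reads (Python sketch, with
read_raw_rng_byte() a stand-in for the actual register access):

    import threading

    _rng_lock = threading.Lock()

    def get_random_bytes(n):
        # Every consumer comes through here instead of touching the
        # device registers directly.
        out = bytearray()
        with _rng_lock:
            while len(out) < n:
                out.append(read_raw_rng_byte())
        return bytes(out)

In practice that lock belongs in the driver or in something like
/dev/random, not in each application.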


>>The right
>> way to use this random chip is as one source of randomness for something
>> like /dev/random, which would not only fetch and distill randomness from
>> the chip and elsewhere, but mediate between uses of randomness, and not
>> only for user-mode applications but within the operating system, such as
>> for initial TCP sequence numbers.  (See RFC 1948).
>
>That is your opinion, but it does not appear to be what Intel had in mind.

How much would you like to bet that the Intel employees who designed the
facility expected that the operating system would mediate between the
application wanting random bits and the various concurrent threads of
execution that need random bits?  I'm confident that they understand (and
understood) the issue well enough, even if their main design target was
software from Redmond.  The odds are better than even that some of them
have heard of /dev/random and thought that their design should be used
within it.


Vernon Schryver    [EMAIL PROTECTED]

------------------------------

From: "JH" <[EMAIL PROTECTED]>
Subject: DH parameters
Date: Mon, 10 Jan 2000 04:04:56 GMT

I am trying to implement the Diffie-Hellman key exchange and have a
question about the size of the various numbers used.  Most of the documents
I have looked at give an example using very small numbers and then make a
general statement that for real security you would use much larger numbers,
but they don't specify the actual sizes of each of the values.  It looks
like you can keep a small generator such as 3 or 7 and I would imagine the
prime would be 1024 or 2048 bits for high security.  But I haven't been able
to tell if your secret key must be about the same size as the prime or if
it only needs to be around 128 bits.  
Will the following values give me roughly the same strength as a 128 bit
key for Blowfish?
In "a = g^x mod P"
generator     : g = 7
public prime : size of  (P) = 256 bytes = 2048 bits
private key   : size of (x) = 16 bytes = 128 bits
or does x need to be 256 bytes also?
Are there any available tables showing the equivalent strengths of various
key sizes for DH vs. Blowfish, IDEA, etc.?
TIA, JH
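
For concreteness, the computation I am asking about looks like this
in Python (P would be a vetted 2048-bit prime such as a published
group; x_bits is the quantity in question):

    import secrets

    def dh_public_value(g, P, x_bits):
        # Pick a random private exponent of x_bits bits and return
        # (x, g^x mod P).  Whether x_bits = 128 is enough, or whether
        # it must be comparable to the size of P, is my question.
        x = secrets.randbits(x_bits) | (1 << (x_bits - 1))   # force full bit length
        return x, pow(g, x, P)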

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
