Cryptography-Digest Digest #880, Volume #10 Mon, 10 Jan 00 19:13:01 EST
Contents:
Re: "1:1 adaptive huffman compression" doesn't work (SCOTT19U.ZIP_GUY)
Re: Questions about message digest functions ([EMAIL PROTECTED])
Re: compression & encryption (SCOTT19U.ZIP_GUY)
Re: "1:1 adaptive huffman compression" doesn't work ("Douglas A. Gwyn")
Re: Is there a sci.crypt FAQ? ("Douglas A. Gwyn")
Re: AES & satellite example ("Trevor Jackson, III")
Re: Simple Encryption ... (Paul Koning)
Re: Intel 810 chipset Random Number Generator (Paul Koning)
Re: Intel 810 chipset Random Number Generator ("Trevor Jackson, III")
Re: Intel 810 chipset Random Number Generator ("Trevor Jackson, III")
Re: "1:1 adaptive huffman compression" doesn't work ("Gary")
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: "1:1 adaptive huffman compression" doesn't work
Date: Mon, 10 Jan 2000 22:34:48 GMT
In article <[EMAIL PROTECTED]>, Mok-Kong Shen <[EMAIL PROTECTED]>
wrote:
>Tim Tyler wrote:
>>
>
>> One idea behind such a scheme is essentially that if the EOF occurs while
>> the decompressor is in the middle of a symbol, it *knows* that this can
>> only happen if the compressor chopped off a tail of zeros. This tail of
>> zeros can be unambiguously reconstructed *provided* the file does not end
>> with any all-zero Huffman symbols - and this case can be avoided fairly
>> simply.
>
>Excuse me, if I am arguing based on wrong knowledge (I haven't followed
>the stuff for quite a while and perhaps have forgotten a lot). What
>if the analyst decrypts with a wrong key which produces a file that
>has at the end a sufficiently long sequence of zeros?
>
>M. K. Shen
Go ahead: test h2com and h2unc with a file of zeroes.
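For concreteness, here is a toy Python sketch of the trailing-zero scheme Tim
describes. The code table is hypothetical, not Scott's actual h2com tables:
the packer chops trailing zero bits, and the unpacker, caught mid-symbol at
EOF, regrows zeros until a codeword completes. This is unambiguous provided
the message does not end with an all-zero codeword ('b' below).

# Toy prefix code; 'b' = '00' is the one all-zero codeword, so a message
# must not end with 'b' for the reconstruction to be unambiguous.
CODE = {'a': '1', 'b': '00', 'c': '010'}
DECODE = {v: k for k, v in CODE.items()}
MAXLEN = max(len(v) for v in CODE.values())

def pack(text):
    bits = ''.join(CODE[ch] for ch in text)
    return bits.rstrip('0')               # chop the tail of zeros

def unpack(bits):
    out, cur = [], ''
    for bit in bits:
        cur += bit
        if cur in DECODE:
            out.append(DECODE[cur]); cur = ''
    while cur and len(cur) < MAXLEN:      # EOF mid-symbol: regrow zeros
        cur += '0'
        if cur in DECODE:
            out.append(DECODE[cur]); cur = ''
    return ''.join(out)

assert unpack(pack('acbc')) == 'acbc'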
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm
Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm
Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm
**NOTE EMAIL address is for SPAMMERS***
I leave you with this final thought from President Bill Clinton:
"The road to tyranny, we must never forget, begins with the destruction of the
truth."
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Questions about message digest functions
Date: Mon, 10 Jan 2000 22:23:46 GMT
Tim Tyler wrote:
>
> [is building a secure, one-way pseudo-random permutation possible?]
> To quote from Section 18.12 of Schneier's "Applied Cryptography":
>
> ``It is possible to use a public-key encryption algorithm in a block
> chaining mode as a one-way hash function. If you then throw away the
> private key, breaking the hash would be as difficult as reading the
> message without the private key.''
>
> This construction appears to work, and when block
> size, hash size and message size are all equal, it
> provides a secure bijective one-way hash.
Now look at how the time to break it compares with
hashes based on PRF's.
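As a toy illustration of the quoted construction, here is a Python sketch
with an absurdly small RSA modulus. The numbers are hypothetical, chosen only
so the chaining is visible; a real instance needs a full-size modulus and
careful block formatting.

# CBC-style chaining with a public RSA operation; the private key is
# simply never generated here, matching "throw away the private key".
N = 3233            # toy modulus (61 * 59) -- illustration only
E = 17              # public exponent

def pk_chain_hash(blocks, iv=0):
    h = iv
    for b in blocks:
        h = pow(b ^ h, E, N)   # c_i = E_pub(p_i XOR c_{i-1})
        # (toy sizes: strictly one should keep b ^ h below N)
    return h

print(pk_chain_hash([42, 1000, 77]))   # the final block is the hash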
--Bryan
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: compression & encryption
Date: Mon, 10 Jan 2000 22:53:30 GMT
In article <85baeq$h7o$[EMAIL PROTECTED]>, Kenneth Almquist <[EMAIL PROTECTED]>
wrote:
>Kenneth Almquist <[EMAIL PROTECTED]> wrote:
>> Tim Tyler <[EMAIL PROTECTED]> wrote:
>>: Compressing using a nonbijective algorithm and then encrypting twice
>>: may be faster than compressing using a bijective algorithm and then
>>: encrypting once.
>>
>> Multiple encryption with independent algorithms and bijective
>> compression produce different types of security benefit.
>
>The security benefit in both cases is that the attacker has less
>information about the input to the encryption algorithm. (In the
>case of double encryption, I am referring to the input to the
>second encryption.)
>
>>: That is because some of the compression algorithms
>>: which provide the best combination of speed and compression ratio
>>: (such as the one used by gzip) are not bijective.
>>
>> Bijective compression has only just been invented. The current
>> situation /should/ eventually reverse - since making a compression
>> system bijective demonstrably makes optimal use of the range of the
>> compressor.
>
>The algorithms with the best compression ratios are non-bijective only
>because of redundancies between the contents of the compressor output
>and the length of the compressor output. Since the length of the
>compressor output can be represented in log_2(N) bits, this redundancy
>wastes at *most* log_2(N) bits.
No, the above is not the reason, since it is possible to map bijectively from
the bit streams that could come from a compressor to any multiple of bytes.
Take any finite pattern of ones and zeros, including the NULL pattern of
nothing; then add a one bit to mark EOF, since only zeros come after it. Then
you can map this type of string bijectively to any byte-multiple type of file
you want.
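The padding map Scott is describing can be written down in a few lines of
Python. This is a sketch of the standard append-a-one-then-zero-fill trick,
not a copy of his code:

def bits_to_bytes(bits):
    bits += '1'                            # the one bit that marks EOF
    bits += '0' * (-len(bits) % 8)         # only zeros after it
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

def bytes_to_bits(data):
    bits = ''.join(format(b, '08b') for b in data)
    return bits.rstrip('0')[:-1]           # strip the zero tail, then the 1

# round-trips for every finite bit pattern, the NULL pattern included
assert bytes_to_bits(bits_to_bytes('10110')) == '10110'
assert bytes_to_bits(bits_to_bytes('')) == ''

(One caveat: an all-zero byte file has no marker bit to strip, so it sits
outside the image of this particular map.)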
>
>>: I believe that except in special circumstances, it is a mistake to
>>: assume that an opponent will not be able to mount a known plaintext
>>: attack. [...]
>>
>> Well yes - when analysing security, consider worst-case scenarios.
>>
>>: For this reason, I believe that the apparent improvement
>>: in security offered by bijective compression is an illusion.
>
>> There is no sign of logical argument, though. Compression reduces
>> the effectiveness of partial known plaintexts. It makes little
>> difference to *complete* known plaintexts. However, the former
>> are very common. Defending against them is likely to be genuinely
>> worthwhile, if you do not know for certain what attackers can do
>> with any known plaintext you give them.
>
>This is true of all types of compression. I was addressing the
>question of whether bijective compression offers more security than
>non-bijective compression.
The problem is that with bijective compression you make what was
a known-plaintext attack into a ciphertext-only attack, which is far worse
for the attacker, since the other kind of compressors add information.
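A concrete example of that added information: gzip output always begins with
the magic bytes 1f 8b, so an attacker trying keys can reject almost every
wrong one on sight. The rejection routine below is hypothetical; the magic
bytes are real.

GZIP_MAGIC = b'\x1f\x8b'

def reject_wrong_key(candidate_plaintext):
    """True if this trial decryption cannot be a gzip stream."""
    return candidate_plaintext[:2] != GZIP_MAGIC

assert reject_wrong_key(b'random-looking garbage')
assert not reject_wrong_key(b'\x1f\x8b\x08 rest of stream')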
>
>>: If your encryption algorithm is vulnerable to a known plaintext
>>: attack, then the chances are that a determined attacker will obtain
>>: a known plaintext an break your cipher, so it is not likely to matter
>>: whether you use bijective encryption.
>>
>> A known plaintext breaks a message key, not a cypher. If the attacker
>> faces a subsequent message, with a partial known plaintext, certain
>> types of compression can make a *big* difference to the attacker's
>> ability to exploit this.
>
>Here you appear to be assuming that each message is encrypted with a
>different key, which raises the question of how the message keys are
>distributed. If the key distribution channel is protected only
>through encryption, then an attacker who breaks a single message key
>is in a position to mount a known plaintext attack against the key
>used to encrypt the message keys. But if different encryption
>algorithms are used for message encryption and key distribution, the
>use of separate algorithms provides some redundancy against attack.
>
>To return to your point: I agree that the compression algorithm
>does make a difference. I assume by partial plaintext you mean that
>the attacker knows a section of the plaintext, but not what comes
>before or after it. Scott's web site describes a bijective compression
>algorithm which uses a predefined mapping of strings to symbols. If
NO, I suggest using the ADAPTIVE kind, like h2com.exe, for the
compression. The only time static Huffman is used is for very special cases
where the tables are known in advance, for converting to different bases.
And if you look at my conditional adaptive compression, it is basically used
for when you know what the set of symbols is. But it is still adaptive.
>this is the compression algorithm used, then an attacker mounting a
>known plaintext attack on the encryption algorithm generates a list of
>possible compressions of the known section of plaintext. There will
>be several possibilities because the first several bytes of the
>known plaintext might not be included. (The symbol which encodes
>the last few characters of the text preceding the known plaintext
>might also encode a few characters of the known plaintext.) Now the
>attacker needs to perform multiple known plaintext attacks, both
>because he does not know exactly where in the cyphertext the encrypted
>section of known plaintext occurs, and because the known portion of
>the plaintext could result in several different outputs from the
>compressor.
Wrong. As I stated above, I use the adaptive kind.
>
>Switching to a different compression algorithm--an adaptive algorithm
>which alters its compression tables based on previous input--would
>make this type of attack much harder. Another approach is to run a
...
Sorry, I cut this, but my connection keeps dying so I had to chop it.
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm
Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm
Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm
**NOTE EMAIL address is for SPAMMERS***
I leave you with this final thought from President Bill Clinton:
"The road to tyranny, we must never forget, begins with the destruction of the
truth."
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: "1:1 adaptive huffman compression" doesn't work
Date: Mon, 10 Jan 2000 22:06:59 GMT
Gary wrote:
> I've always wondered if 100% 1-1 compression is impossible for all domains
> and ranges.
What does that mean?
It is a fact that any lossless encoding that compresses *some* data must
also expand some other data. Among other things, this implies that
D.Scott's one-on-one compression necessarily does not operate between
*fixed* finite domain and range, but only between "open-ended" infinite
sets. If we let |S| denote the maximum number of bits needed to encode
members of the set S, then DS1-1:D->R, |R| > |D|; let R' = set of all
|R|-bit values, DS1-1(inverse):R'->D', |D'| >= |R'|, so |D'| > |D|;
i.e. the size of the space being
compressed/decompressed/recompressed/...
continually expands.
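The first sentence above is just a pigeonhole count, easy to check in Python:

# 2**n inputs of exactly n bits, but only 2**n - 1 outputs of fewer
# than n bits: some input cannot shrink, and injectivity then forces
# some other input to expand.
n = 16
inputs = 2 ** n
shorter_outputs = sum(2 ** k for k in range(n))   # 1 + 2 + ... + 2**(n-1)
assert shorter_outputs == inputs - 1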
> Given that compressed data has two streams 1) non redundant data and 2)
> encoded redundant data.
What are you talking about? The output of a compression algorithm is
*all* encoded. Some encodings may embed uncompressed patterns in the
(distributed) dictionary, but other schemes don't.
> Take any random data that when looked at as compressed data has redundancy
> in the non redundant stream. Surely when this is decompressed and then
> recompressed it will be more compressed than the original.
> This implies that only by using redundancy as an indication of redundancy
> can 1-1 compression be achieved.
Nope.
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Is there a sci.crypt FAQ?
Date: Mon, 10 Jan 2000 22:16:57 GMT
Volker Hetzer wrote:
> [EMAIL PROTECTED] wrote:
> > I'm looking for a fairly technical introduction to
> > hillclimbing/genetic algorithms useful in
> > cryptanalysis.
> You are unlikely to find anything there.
> Encryption algorithms are designed not to be solvable that way.
> (I.e. their solution space is supposed to consist of a plane
> with a single raised dot in it.)
Exactly right. Fancy search algorithms like simulated annealing,
genetic, neural net, dynamic programming, etc. rely on there being
a measure of "nearness" to the solution in parameter space that
can be used to guide the evolution of the process. If there is no
"gradient" of the terrain, there can be no direction for improvement.
------------------------------
Date: Mon, 10 Jan 2000 18:06:01 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: AES & satellite example
John Savard wrote:
> [EMAIL PROTECTED] (DJohn37050) wrote, in part:
>
> >I am writing a paper for 3rd AES and remember someone discussing in sci.crypt
> >that a reason for having multiple AES winners was situations where hardware
> >was used but could not be updated, as in satellites. Does anyone
> >remember who said that?
> >I want to give proper attribution.
>
> Well, there was a post in the thread "Re: 'Risks of Relying on
> Cryptography'" where Terry Ritter noted that a satellite should have
> multiple ciphers on board, so that if one was found to be vulnerable,
> one of the others could be used immediately.
>
> This was a reply to a post by Paul Crowley, where he suggested that in
> the unlikely event of such a thing happening, the satellite's software
> could be updated from the ground.
>
> There was no specific reference to the AES process in that post,
> however.
>
> In part, he said:
>
> >In the satellite example, adding fixes to the crypto may be possible,
> >if there are a limited number of ground stations communicating with
> >the satellite.
>
> >But when a cipher is to be used by many, to change it is to introduce
> >incompatibilities, delays, and vast costs. And during the change-over
> >delay we have the poor situation of no acceptable cipher at all.
>
> >To avoid this, any cipher system designed for mass use should be built
> >so that alternate ciphers can be used. And to reassure ourselves that
> >this feature remains available, it should be used regularly.
>
> John Savard (jsavard<at>ecn<dot>ab<dot>ca)
> http://www.ecn.ab.ca/~jsavard/crypto.htm
Please do not fail to note the weaknesses inherent in a system whose cryptologic
capabilities can be updated remotely. I suspect the cost of securing the update
capability would be larger than the cost of installing multiple ciphers in the first
place.
In the update case, the need to replace a cipher implies that there may be no
secure way to do so. After all, the cipher being replaced is insecure.
In the disable case (multiple ciphers supported; one found weak and disabled)
we can assume that the remaining ciphers are still secure, and disable the
deprecated cipher under their protection.
------------------------------
From: Paul Koning <[EMAIL PROTECTED]>
Subject: Re: Simple Encryption ...
Date: Mon, 10 Jan 2000 17:37:46 -0500
Derek Potter wrote:
>
> There are times when it would be cool (Ok, so
> you now know this is going to be one of *those*
> questions!) to be able to encrypt something not
> so much to stop it being read by third parties
> but to prove to the *recipient* that you have
> some information but you aren't prepared to
> divulge it *yet*.
That's easy.
Put the description in a text file, and sign it
with a "detached signature" (as PGP calls it).
Publish the signature. Keep the original
file secret.
At some later time you can publish the text file,
and anyone can then verify that the previously
published signature is valid for that file.
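In Python this recipe might look like the sketch below, using the third-party
"cryptography" package in place of PGP (with PGP itself, gpg --detach-sign
and gpg --verify do the same job). The message contents here are
hypothetical.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b'my prediction, kept secret for now'

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

sig = key.sign(message, pss, hashes.SHA256())   # publish sig now,
                                                # keep the message secret
# later, after the message is revealed, anyone can check:
key.public_key().verify(sig, message, pss, hashes.SHA256())  # raises if bad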
paul
------------------------------
From: Paul Koning <[EMAIL PROTECTED]>
Subject: Re: Intel 810 chipset Random Number Generator
Date: Mon, 10 Jan 2000 17:33:55 -0500
"Trevor Jackson, III" wrote:
> ...
> All true. All irrelevant.
>
> A trivial alternative to this design is one in which the race condition
> is designed out of existence. The software polling should be able to
> access both the "data ready" status bit and the data sample in a single
> (read) bus transaction. Sensible computer architects have been doing
> this for decades. Intel doesn't seem to care about the quality of their
> architecture.
I have some sympathy for that last sentence but the rest is clearly
not valid. At least not in any architecture I know, and I know about
eight of them.
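For reference, the race-free layout Trevor has in mind would read roughly
like this Python sketch (the register layout and the read_word() device
accessor are hypothetical):

DATA_READY = 1 << 31              # bit 31: sample valid

def poll_rng(read_word):
    while True:
        word = read_word()        # one bus read returns flag plus data
        if word & DATA_READY:
            return word & 0xFF    # low byte carries the sample

fake_device = lambda: DATA_READY | 0x5A   # stand-in for the real register
print(hex(poll_rng(fake_device)))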
> ...
> The alternative, letting each app have safe, direct access to the
> hardware, makes it a bit less probable that an error in the manager, or
> a compromised manager, will compromise security.
You seem to have a DOS oriented view of system design: the notion
that applications will have direct access to the hardware.
Such notions don't fit the world of real operating systems (such
as Unix, or VMS, or OS/360). There, apps have no access to hardware,
but only to system services which abstract the hardware and provide
sharing of resources.
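On Unix that abstraction is concrete: the kernel owns the entropy sources and
an app just reads the pool. A minimal sketch:

# /dev/urandom is the kernel's abstraction of its entropy sources;
# no application ever touches the hardware directly.
with open('/dev/urandom', 'rb') as rng:
    sample = rng.read(16)         # sixteen bytes from the kernel pool
print(sample.hex())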
> >It warns about multi-threaded apps using the RNG, but
> >nothing about different apps using the RNG. Apparently
> >bad things can happen if 2 apps try to use it at once.
> >
> >This looks pretty brain-damaged to me.
This makes no sense, because there is no fundamental difference between
threads and processes. This is rather obvious in Linux, where a single
system service is used to create either (the difference is merely a flag
that controls whether memory is shared or copied).
paul
------------------------------
Date: Mon, 10 Jan 2000 18:10:59 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Intel 810 chipset Random Number Generator
[EMAIL PROTECTED] wrote:
> In article <[EMAIL PROTECTED]>,
> "Trevor Jackson, III" <[EMAIL PROTECTED]> wrote:
> > Because the example for a multi-threaded app works when applied at
> > the OS level. To the OS the applications look like threads. The fact
> > that some share address spaces (are threads within a single app)
> > doesn't change the model for the OS. If the OS provides a semaphore
> > for controlling access to the RNG, any number of apps can safely
> > share the resource.
>
> Ok, fine, but there are a zillion other ways to generate random
> numbers at the OS level. I am not planning on going into the
> OS business, and I don't know anyone who is. Intel's literature
> talks about making random numbers available to apps. Microsoft
> may not be interested in implementing it. It would have been
> nice if Intel had supplied something more convenient. As it is,
> reading the microphone is more convenient.
But those other ways can be duplicated in real time and, while random, your numbers
will not be unpredictable. Please note this point carefully. THERE IS NO TEST,
STATISTICAL OR NOT, FOR UNPREDICTABILITY.
Sorry for the shouting, but there's an incredible amount of blur between
compressibility, independence (randomness), and unpredictability.
Thus the value of an opaque RNG is real, whereas the value of user interactions, disk
head jitter, and mic hiss, which are subject to eavesdropping, is negligible.
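Trevor's earlier point about a single OS-level semaphore serializing all
clients can be sketched in Python with an in-process lock; a real manager
would use an OS semaphore so separate processes are covered too. The
read_sample() callable stands in for the actual device access and is
hypothetical.

import threading

_rng_lock = threading.Lock()

def get_random_byte(read_sample):
    with _rng_lock:               # every client of the manager queues here
        return read_sample()

print(get_random_byte(lambda: 0x42))   # stand-in device read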
------------------------------
Date: Mon, 10 Jan 2000 18:22:21 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Intel 810 chipset Random Number Generator
Paul Koning wrote:
> "Trevor Jackson, III" wrote:
> > ...
> > All true. All irrelevant.
> >
> > A trivial alternative to this design is one in which the race
> > condition is designed out of existence. The software polling should
> > be able to access both the "data ready" status bit and the data
> > sample in a single (read) bus transaction. Sensible computer
> > architects have been doing this for decades. Intel doesn't seem to
> > care about the quality of their architecture.
>
> I have some sympathy for that last sentence but the rest is clearly
> not valid. At least not in any architecture I know, and I know about
> eight of them.
Good, then we're about even.
> > ...
> > The alternative, letting each app have safe, direct access to the
> > hardware, makes it a bit less probable that an error in the manager,
> > or a compromised manager, will compromise security.
>
> You seem to have a DOS oriented view of system design: the notion
> that applications will have direct access to the hardware.
Hardly. There is nothing particularly _bad_ about direct access to hardware. It
simply requires the correct permissions, gated by the OS, and enforced by the memory
manager or equivalent.
>
>
> Such notions don't fit the world of real operating systems (such
> as Unix, or VMS, or OS/360). There, apps have no access to hardware,
> but only to system services which abstract the hardware and provide
> sharing of resources.
Of course they do. I'm not that familiar with OS/360 internals, but I'm certain
there's a legitimate way to provide such a capability. It's trivial under Unix and
VMS. Perhaps you've spent more time in the virtual box than you have outside of it.
>
>
> > >It warns about multi-threaded apps using the RNG, but
> > >nothing about different apps using the RNG. Apparently
> > >bad things can happen if 2 apps try to use it at once.
> > >
> > >This looks pretty brain-damaged to me.
>
> This makes no sense, because there is no fundamental difference between
> threads and processes. This is rather obvious in Linux, where a single
> system service is used to create either (the difference is merely a flag
> that controls whether memory is shared or copied).
Agreed. See parallel threads.
------------------------------
From: "Gary" <[EMAIL PROTECTED]>
Subject: Re: "1:1 adaptive huffman compression" doesn't work
Date: Mon, 10 Jan 2000 23:58:01 -0000
Douglas A. Gwyn wrote in message <[EMAIL PROTECTED]>...
>Gary wrote:
>> I've always wondered if 100% 1-1 compression is impossible for all
>> domains and ranges.
>
>What does that mean?
It means most of the time 1-1 seems to work.
>> Given that compressed data has two streams 1) non redundant data and 2)
>> encoded redundant data.
>
>What are you talking about? The output of a compression algorithm is
>*all* encoded. Some encodings may embed uncompressed patterns in the
>(distributed) dictionary, but other schemes don't.
All compressed data contains references to uncompressed data, e.g. in RLE
there are runs of non-redundant data (prefixed by a code) followed by a code
indicating a run of the previous non-redundant data output.
In tree-based systems, strings from the stream are added to dictionaries
(created on the fly).
Non-redundant original information forms the basis of all lossless
compression algorithms.
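The literal-run / repeat-run split described above looks like this in a toy
RLE encoder (the token format is hypothetical):

def rle_encode(data):
    out, i = [], 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        out.append((0, data[i]))      # the non-redundant byte itself
        if run > 1:
            out.append((1, run - 1))  # encoded redundancy: repeat count
        i += run
    return out

print(rle_encode(b'aaabcc'))   # [(0, 97), (1, 2), (0, 98), (0, 99), (1, 1)]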
>
>> Take any random data that when looked at as compressed data has
>> redundancy in the non redundant stream. Surely when this is decompressed
>> and then
>> recompressed it will be more compressed than the original.
>> This implies that only by using redundancy as an indication of
>> redundancy can 1-1 compression be achieved.
>
>Nope.
Why nope?
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************