Cryptography-Digest Digest #100, Volume #13 Sun, 5 Nov 00 01:13:01 EST
Contents:
Re: hardware RNG's (Tim Tyler)
Re: hardware RNG's (David Schwartz)
Re: BENNY AND THE MTB? (Tim Tyler)
Re: Is OPT the only encryption system that can be proved secure? (Benjamin Goldberg)
Re: Hardware RNGs (Benjamin Goldberg)
Re: End to end encryption in GSM (Benjamin Goldberg)
Re: sqrt correlations (Benjamin Goldberg)
Re: Give it up? (Benjamin Goldberg)
Re: BENNY AND THE MTB? (Tim Tyler)
Re: hardware RNG's (Tim Tyler)
Re: BENNY AND THE MTB? (SCOTT19U.ZIP_GUY)
Re: Randomness from key presses and other user interaction (Tim Tyler)
Re: Hardware RNGs (Tim Tyler)
Re: Give it up? (SCOTT19U.ZIP_GUY)
----------------------------------------------------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: hardware RNG's
Reply-To: [EMAIL PROTECTED]
Date: Sun, 5 Nov 2000 02:53:06 GMT
David Schwartz <[EMAIL PROTECTED]> wrote:
: Terry Ritter wrote:
:> There is something wrong with this logic! If various signals in the
:> area do affect the noise RNG, then our device is no longer based
:> solely on quantum effects. That is very dangerous because it means
:> that one of the other effects which it uses might be controlled,
:> perhaps fairly easily. I claim that what we want to do is to isolate
:> the noise signal from every other reasonable effect.
: This is probably the fundamental source of my disagreement with you.
: There is absolutely no need to isolate the noise source from all other
: possible sources, provided the noise is still there. [...]
True - but it's a lot harder to figure out how much "real" noise you
have got if it's mixed in with a lot of "fake" noise - which might
eventually turn out to be rather deterministic.
"Provided the noise is still there" might be a bit of an act of faith - if
you can't actually see or measure the noise any more because it's swamped
with possibly-pseudo-random garbage.
A major reason for trying to isolate a noise source is so that you can
examine its properties.
--
__________ Lotus Artificial Life http://alife.co.uk/ [EMAIL PROTECTED]
|im |yler The Mandala Centre http://mandala.co.uk/ ILOVEYOU.
------------------------------
From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: hardware RNG's
Date: Sat, 04 Nov 2000 19:33:58 -0800
Tim Tyler wrote:
>
> David Schwartz <[EMAIL PROTECTED]> wrote:
> : Terry Ritter wrote:
>
> :> There is something wrong with this logic! If various signals in the
> :> area do affect the noise RNG, then our device is no longer based
> :> solely on quantum effects. That is very dangerous because it means
> :> that one of the other effects which it uses might be controlled,
> :> perhaps fairly easily. I claim that what we want to do is to isolate
> :> the noise signal from every other reasonable effect.
>
> : This is probably the fundamental source of my disagreement with you.
> : There is absolutely no need to isolate the noise source from all other
> : possible sources, provided the noise is still there. [...]
>
> True - but it's a lot harder to figure out how much "real" noise you
> have got if it's mixed in with a lot of "fake" noise - which might
> eventually turn out to be rather deterministic.
Right. You can't see it in the final data, so you have to analyze the
parameters that make the data up. In other words, you determine that the
randomness is there from theoretical analysis, not from looking at the
data.
> "Provided the noise is still there" might be a bit of an act of faith - if
> you can't actually see or measure the noise any more because it's swamped
> with possibly-pseudo-random garbage.
*sigh* You missed my point. It doesn't _matter_ if it's swamped with
pseudo-random garbage. 99% of the data can be garbage. So long as the
noise is in there somewhere, you are set.
> A major reason for trying to isolate a noise source is so that you can
> examine its properties.
Right, that's why the theoretical argument is important. Just looking
at the data and saying "it looks random" is not really good enough.
The fact is, uncompensated crystal oscillators do in fact drift
unpredictably. Equally important, the frequency multipliers used in
motherboards to generate the FSB frequency drift unpredictably as well.
The multiplier in the CPU that multiplies the FSB frequency into the
core frequency, however, does not.
So the theoretical argument for, for example, sampling keystrokes, is
that the keyboard controller and the microcontroller in the keyboard
have independent uncompensated crystal oscillators. The offset between
those two oscillators, even at one particular instant, has large amounts
of entropy because it is the ratio of two real numbers. Sampling the
TSC when a keystroke is hit - after the event is noticed by the
microcontroller in the keyboard and then by the keyboard controller -
captures this entropy.
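A minimal sketch of that sampling idea in Python, using
time.perf_counter_ns() as a stand-in for reading the TSC (the pool
construction and names here are illustrative assumptions, not any
particular OS's scheme):

    import hashlib
    import time

    pool = hashlib.sha1()  # entropy pool, mixed by hashing

    def on_keystroke(scancode):
        # Sample a high-resolution counter (stand-in for the x86 TSC)
        # the instant the keystroke event reaches the host.
        t = time.perf_counter_ns()
        # Mix in the timestamp and the (mostly deterministic) scancode;
        # mixing deterministic data does not remove the entropy carried
        # by the unpredictable low-order timing bits.
        pool.update(t.to_bytes(8, "little"))
        pool.update(bytes([scancode & 0xFF]))

    def extract_bits():
        # Distill the pool down to a fixed-size output.
        return pool.digest()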
Yes, it's mixed with a lot of deterministic stuff. So what? All the
mixing and addition in the world won't remove the randomness.
DS
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: BENNY AND THE MTB?
Reply-To: [EMAIL PROTECTED]
Date: Sun, 5 Nov 2000 04:04:45 GMT
Bryan Olson <[EMAIL PROTECTED]> wrote:
: Tim Tyler wrote
:> Bryan Olson <[EMAIL PROTECTED]> wrote:
:> : Tim Tyler wrote
:> :> [EMAIL PROTECTED] wrote:
:> :> : The argument is fairly simple. If you chop all but 8-bits of
:> :> : a Rijndael block, the decryption is one of 2^120 possibilities. [...]
:> :>
:> :> [...] your premise is wrong - Matt does *not* "chop all but 8 bits"
:> :> from a Rijndael block.
:>
:> : Though in the case in Dave's story, that is what the program did.
:>
:> I don't /think/ so [...]
: If the program outputs a one byte message, then it did in
: fact take just the first byte of the Rijndael block.
That's completely different from "chop[ping] all but 8-bits [from]
a Rijndael block" such that "the decryption is one of 2^120 possibilities".
What you are now talking about has nothing to do with what Joseph Ashwood
was talking about.
It's not even clear to me that your statement is true. There are
*many* mappings between blocks and 8-bit granular files which would not
fit your description. Do you know which map is actually being employed?
:> It may be that David told about the decryptors /expecting/
:> the program to behave in the way you mention - after
:> hearing it used Rijndael.
: He definitely stated the ciphertext was one byte. That's
: a 1 in 2^120 shot to happen the way the story tells it.
: If you don't believe me [...]
I believe you - but that wasn't my objection in the first place.
--
__________ Lotus Artificial Life http://alife.co.uk/ [EMAIL PROTECTED]
|im |yler The Mandala Centre http://mandala.co.uk/ Niagra falls.
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Is OPT the only encryption system that can be proved secure?
Date: Sun, 05 Nov 2000 04:33:09 GMT
Douglas A. Gwyn wrote:
>
> "SCOTT19U.ZIP_GUY" wrote:
> > Doug, I'm glad you thought of it, but SCOTT19U needs to be able
> > to go through the file in two different directions, so I am not
> > sure the stream idea is useful, because I have to travel up the
> > stream and down the stream unless I reverse the file each time -
> > unless there is some tricky way to start at the back of a file and
> > write the back end first.
>
> I was working from an indication that the entire message would
> be stored in memory, in which case the C array has the right
> properties. Unless you require that for the reverse pass the
> 19-bit alignment needs to be against the end of file (padding
> at the front), which makes a difference for 18/19 of all files.
> Since for 1/19 of all files the alignment is the same either
> way, you cannot reasonably be relying on it for security, so
> I'd suggest using beginning-of-file alignment for both pass
> directions.
>
> The alternative of not buffering the file internally forces
> the program to try to read "backwards", i.e. use fseek() to
> backspace through the whole file, which will be horribly slow
> unless large buffers are used. Anyway, the backward-reading
> functionality should likewise be isolated into a separate
> module that specializes in just that and nothing else, then
> it can be used with functions similar to the ones I posted to
> accomplish a 19-bit chunking in the reverse direction. The
> essential idea, regardless of exactly what kind of I/O is being
> done, is to use a module that performs native-byte to 19-bit
> chunking (and the reverse transformation) independently of how
> the resulting data is to be used.
Another possibility is to use some sort of memory-mapped I/O (see the
sketch after this list).
Pros:
1) It is effectively the same as having the entire file in a big buffer,
which gives a speed improvement when working in the backwards direction.
2) It may be possible to *easily* make the program such that it never
goes into the swap file (because the original file is the swap
destination).
3) You may be able to do in-place encryption.
Cons:
1) Many systems don't have it.
2) Different systems which have it often have different interfaces.
3) If the program is halted while doing in-place encryption, you're
stuck with a partially encrypted file, and there is little you can
do with it.
4) There's no place to stick an IV with in-place encryption.
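A minimal sketch of the in-place case with Python's mmap module; the
XOR step is a placeholder for a real cipher, and "data.bin" is a
hypothetical file name:

    import mmap

    # The file itself is the backing store, so pages need not go to the
    # swap file (pro 2), backwards access is cheap (pro 1), and the
    # transform happens in place (pro 3).
    with open("data.bin", "r+b") as f:
        with mmap.mmap(f.fileno(), 0) as m:
            # Backwards pass: walk from the last byte to the first.
            for i in range(len(m) - 1, -1, -1):
                m[i] ^= 0x5A   # placeholder in-place transform
            m.flush()

Python's module papers over the interface differences of con 2, but
cons 3 and 4 still apply.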
--
"Mulder, do you remember when I was missing -- that time that you
*still* insist I was being held aboard a UFO?"
"How could I forget?"
"Well, I'm beginning to wonder if maybe I wouldn't have been
better off staying abo-- I mean, wherever it was that I was
being held." [from an untitled spamfic by [EMAIL PROTECTED]]
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Hardware RNGs
Date: Sun, 05 Nov 2000 04:33:12 GMT
Alan Rouse wrote:
>
> Hashing does not increase entropy, whether one pass or multiple.
>
> If the input comes from a population of x equally probable values,
> there will be exactly x equally probable outputs from a strong hash
> function. If the probabilities of the possible inputs are not equal,
> then the entropy is reduced. (An attacker can attack the most
> probable inputs first. The expected number of trials to find the
> actual input value would be less than half of x).
>
> If the number of repetitions of the hash function varies randomly then
> that would add a bit or two of entropy... but if the number of
> repetitions is deterministic then it adds no entropy.
True in a trivial sense, but part of the problem is that we rarely have
equiprobable values. We may have a hardware RNG that is biased, or
correlated, or both.
Suppose that our hardware RNG is not correlated, but is biased; it
outputs 1s 1/8th of the time, and 0s 7/8ths of the time. This means
about 0.54 bits of entropy per bit of output. How do we get one bit of
randomness? It's not possible to directly take 1.84 bits of generator
output to get 1 bit of randomness; what we need to do is something like
take 295 bits of generator output and use SHA to get a 160-bit hash. The
amount of entropy in 295 bits of generator output should be 160.35 bits,
and since hashing doesn't decrease the amount of entropy, all 2**160
hash outputs should be nearly equiprobable, and we can take *that* one
bit at a time, and expect that each bit of hash contains one bit of
randomness.
In real life we can't measure the bias this accurately, and the outputs
are correlated as well, which further decreases the entropy per bit.
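To make the arithmetic concrete, here is a minimal sketch in Python,
with SHA-1 as the hash and random.random() simulating the biased
generator (both are illustrative choices):

    import hashlib
    import math
    import random

    def binary_entropy(p):
        # Shannon entropy, in bits, of a bit that is 1 with probability p.
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    P_ONE = 1 / 8
    H = binary_entropy(P_ONE)     # ~0.5436 bits per generator bit
    N = math.ceil(160 / H)        # 295 bits for 160 bits of entropy

    def biased_bits(n):
        # Simulated uncorrelated-but-biased hardware RNG; each biased
        # bit is stored in its own byte for simplicity.
        return bytes(1 if random.random() < P_ONE else 0
                     for _ in range(n))

    def whitened_block():
        # Hash 295 biased bits down to 160 nearly uniform bits.
        return hashlib.sha1(biased_bits(N)).digest()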
--
"Mulder, do you remember when I was missing -- that time that you
*still* insist I was being held aboard a UFO?"
"How could I forget?"
"Well, I'm beginning to wonder if maybe I wouldn't have been
better off staying abo-- I mean, wherever it was that I was
being held." [from an untitled spamfic by [EMAIL PROTECTED]]
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Crossposted-To: alt.cellular.gsm
Subject: Re: End to end encryption in GSM
Date: Sun, 05 Nov 2000 04:33:14 GMT
matt weber wrote:
[snip]
> The commercial product's encryption capability will be severely
> constrained by US export law. However at the end of the day you have
> to ask what you are attempting to protect against.
> It is like 40 bit versus 128 bit encryption. In theory 128 bit
> encryption is much more secure. The problem is when you think about
> who would care, you realize the people who can easily defeat 40 bit
> encryption, are also going to be able to defeat 128 bit encryption.
Not applicable. US export laws of the time said that you could only
have 40 bits of key. Although the encryption used had a key that was
at least 40 bits long, it only took something like 25 bits of searching
to decrypt. For an analogy, consider zip encryption; you can have up
to 8 letters of password (I think), but it can be broken with a known
plaintext attack using significantly less than 2**64 work, even ignoring
a dictionary attack.
--
"Mulder, do you remember when I was missing -- that time that you
*still* insist I was being held aboard a UFO?"
"How could I forget?"
"Well, I'm beginning to wonder if maybe I wouldn't have been
better off staying abo-- I mean, wherever it was that I was
being held." [from an untitled spamfic by [EMAIL PROTECTED]]
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: sqrt correlations
Date: Sun, 05 Nov 2000 04:33:17 GMT
Douglas A. Gwyn wrote:
>
> Benjamin Goldberg wrote:
> > Could someone tell me where (on the web) I could find a paper, or
> > papers, describing how correlations in the digits of a square root
> > can allow an attacker to learn the original number?
>
> Any digit sequence is the square root of *some* number, so there are
> no "correlations" in general. Also, given the square root, simply
> squaring it gives the original number. I guess your concern must be
> for digits starting somewhere "in the middle".
What I was thinking of was taking the square root of a 10-digit decimal
number, and outputting the digits to the right of the decimal point of
the sqrt.
> Consider the following:
>   1.00030002 squared =
>   1 + .00000009 + .0000000000000004     the easy ones
>   + .0006 + .00000004 + .000000000012   interactions
> It is evident that the digits do not affect anything in the original
> number to the left of their position. What they affect to the right
Which is (I think) another way of saying that each digit in the original
affects all digits to the right of that position in the square root.
> depends on how far to the right of the decimal (or binary) point the
> current position is. One might think of setting up a system of
> equations for the next yay many digits (reflecting that in the
> original number the sum of all contributions is 0, for integer), but
> note that there are an unknown (large) number of contributions from
> past (unknown) digits, at unknown locations (due to not knowing the
> current position). I don't know of any mathematics that can harness
> this degree of variability. On the other hand, if the domain from
In other words, given only a portion of the keystream, we can't predict
what values come before or after it.
> which the original number was selected is small enough, we can simply
> search it for a sufficiently long match to a stretch of the square
> root digits among the square roots of all possible original values
> (carried out to as many places as might have been used in the
> cryptosystem). In practice there would have to be some indicator for
10 decimal digits is about 33 bits of entropy. This is searchable by
computer, but not by hand. Changing the key to 20 digits solves this.
> the starting digit position (right of the decimal point), which would
> be visible to the cryptanalyst as well as to the intended recipient.
Actually, since the seed (the original number) is taken to be an
integer, and is thus entirely to the left of the decimal point, I was
thinking of simply throwing away the integer portion of the sqrt, and
using all of the fractional portion.
For a pencil and paper cipher, we would further confuse the stream by
decimating it -- each plaintext letter is written either as two base-6
digits (a-z + 0-9 == 36 symbols) or as two base-5 digits (a-z/v or a-z/j
is 25 symbols). For a stream of base-6 digits, simply discard values
over 5. For a stream of base-5 digits, either discard values over 4, or
subtract 5 from values over 4 (essentially discarding 1 bit per digit).
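A minimal sketch of this keystream with Python's decimal module; the
function names and the 200-digit default are illustrative, and it
assumes the seed is not a perfect square:

    from decimal import Decimal, getcontext

    def sqrt_keystream(seed, ndigits=200):
        # Fractional decimal digits of sqrt(seed), used as keystream.
        getcontext().prec = ndigits + len(str(seed))
        frac = str(Decimal(seed).sqrt()).split(".")[1]
        return [int(d) for d in frac]

    def to_base5(stream):
        # Reduce decimal digits to base-5 by subtracting 5 from values
        # over 4; digits 0-9 map evenly onto 0-4, at a cost of one bit
        # per digit.
        return [d - 5 if d > 4 else d for d in stream]

    ks = to_base5(sqrt_keystream(1234567890))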
--
"Mulder, do you remember when I was missing -- that time that you
*still* insist I was being held aboard a UFO?"
"How could I forget?"
"Well, I'm beginning to wonder if maybe I wouldn't have been
better off staying abo-- I mean, wherever it was that I was
being held." [from an untitled spamfic by [EMAIL PROTECTED]]
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Give it up?
Date: Sun, 05 Nov 2000 04:33:20 GMT
SCOTT19U.ZIP_GUY wrote:
> The funny thing is that it's easy to fix such schemes so that the
> compressed output will always be the same size or smaller. The fix is
> not rocket science.
Anyone who claims that it's easy [or even that it's possible] to make
compressed output always be the same size or smaller than the input,
clearly does not understand the counting argument.
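The count itself takes one loop to check - a toy verification in
Python of the pigeonhole step behind the argument:

    # There are 2**n inputs of exactly n bits, but only 2**n - 1
    # distinct outputs of fewer than n bits, so a lossless (injective)
    # compressor cannot shrink every n-bit input.
    for n in range(1, 8):
        inputs = 2 ** n
        shorter_outputs = sum(2 ** k for k in range(n))  # == 2**n - 1
        assert shorter_outputs == inputs - 1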
> Most can see many of the obvious header errors as being a mistake
> but most fail to see beyond that.
If one is compressing and encrypting in one program, one might choose to
have headers, so the type of file can be identified, but not encrypt
them. That is, if I made a program that compressed then encrypted, I
might have the output file format always start with "B.G." [yay ego!]
but the encryption would start with the character after that, so that
although there's stuff in the file that's known, there isn't encrypted
stuff in the file whose corresponding plaintext is known or partially
known.
--
"Mulder, do you remember when I was missing -- that time that you
*still* insist I was being held aboard a UFO?"
"How could I forget?"
"Well, I'm beginning to wonder if maybe I wouldn't have been
better off staying abo-- I mean, wherever it was that I was
being held." [from an untitled spamfic by [EMAIL PROTECTED]]
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: BENNY AND THE MTB?
Reply-To: [EMAIL PROTECTED]
Date: Sun, 5 Nov 2000 04:21:53 GMT
Matt Timmermans <[EMAIL PROTECTED]> wrote:
: Because the final mapping (trivial decoding) from FOstreams to byte-granular
: files is bijective, the encryption can perform any reversible operation on
: the FOstream without changing the bijective nature of the entire process,
: including operations that change the number of significant bits.
I think this is what I didn't grasp. It's obvious, really ;-)
One remaining issue seems to be that you can't be encrypting 0x00000000 -
so one cyphertext will go missing (unless you are careful). Also what
happens if you encrypt to 0x00000000 (not a FO file)?
Perhaps you get around that by subtracting one before encrypting, and then
adding one afterwards again ;-)
--
__________ Lotus Artificial Life http://alife.co.uk/ [EMAIL PROTECTED]
|im |yler The Mandala Centre http://mandala.co.uk/ ILOVEYOU.
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: hardware RNG's
Reply-To: [EMAIL PROTECTED]
Date: Sun, 5 Nov 2000 05:00:07 GMT
David Schwartz <[EMAIL PROTECTED]> wrote:
: Tim Tyler wrote:
:> David Schwartz <[EMAIL PROTECTED]> wrote:
:> : Terry Ritter wrote:
:> :> There is something wrong with this logic! If various signals in the
:> :> area do affect the noise RNG, then our device is no longer based
:> :> solely on quantum effects. That is very dangerous because it means
:> :> that one of the other effects which it uses might be controlled,
:> :> perhaps fairly easily. I claim that what we want to do is to isolate
:> :> the noise signal from every other reasonable effect.
:>
:> : This is probably the fundamental source of my disagreement with you.
:> : There is absolutely no need to isolate the noise source from all other
:> : possible sources, provided the noise is still there. [...]
:>
:> True - but it's a lot harder to figure out how much "real" noise you
:> have got if it's mixed in with a lot of "fake" noise - which might
:> eventually turn out to be rather deterministic.
: Right. You can't see it in the final data, so you have to analyze the
: parameters that make the data up. In other words, you determine that the
: randomness is there from theoretical analysis, not from looking at the
: data.
That doesn't sound as good as using a theoretical analysis *and* looking
at the data.
:> "Provided the noise is still there" might be a bit of an act of faith - if
:> you can't actually see or measure the noise any more because it's swamped
:> with possibly-pseudo-random garbage.
: *sigh* You missed my point. It doesn't _matter_ if it's swamped with
: pseudo-random garbage. 99% of the data can be garbage. So long as the
: noise is in there somewhere, you are set.
Are you sure that it was not you who missed my point? I was talking
mainly about *measuring* the amount of entropy - not *using*
whatever entropy is present.
: The fact is, uncompensated crystal oscillators do in fact drift
: unpredictably. Equally important, the frequency multipliers used in
: motherboards to generate the FSB frequency drift unpredictably as well.
This is not my field. However, are you sure that the drift is
unpredictable - and does not reflect environmental conditions
(such as temperature) which might be measured or controlled?
Certainly I for one would like to be able to eliminate such influences,
and test the result.
: So the theoretical argument for, for example, sampling keystrokes, is
: that the keyboard controller and the microcontroller in the keyboard
: have independent uncompensated crystal oscillators. [...]
Note that this is not the *usual* argument for sampling keystrokes.
There the entropy is assumed to come from the user - rather than crystal
oscillators.
: Yes, it's mixed with a lot of deterministic stuff. So what? All the
: mixing and addition in the world won't remove the randomness.
To my mind any issue in this area is not with using what you have - but
with finding out what you have got.
--
__________ Lotus Artificial Life http://alife.co.uk/ [EMAIL PROTECTED]
|im |yler The Mandala Centre http://mandala.co.uk/ ILOVEYOU.
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: BENNY AND THE MTB?
Date: 5 Nov 2000 05:02:33 GMT
[EMAIL PROTECTED] (Tim Tyler) wrote in <[EMAIL PROTECTED]>:
>Matt Timmermans <[EMAIL PROTECTED]> wrote:
>
>: Because the final mapping (trivial decoding) from FOstreams to
>: byte-granular files is bijective, the encryption can perform any
>: reversible operation on the FOstream without changing the bijective
>: nature of the entire process, including operations that change the
>: number of significant bits.
>
>I think this is what I didn't grasp. It's obvious, really ;-)
>
>One remaining issue seems to be that you can't be encrypting 0x00000000
>- so one cyphertext will go missing (unless you are careful). Also what
>happens if you encrypt to 0x00000000 (not a FO file)?
Actually he can be encrypting that. You are confusing normal
8-bit files with finitely odd files. If one had a file of all zeros
then when you convert it to a FO file it has a 100...000 forever at
the end. You can also encrypt with the last block all zeroes since the
100...000 forever would be after it.
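A toy encoding in this spirit, in Python - just the append-a-1 idea
described above, not Matt's actual byte-to-FOstream bijection:

    def bytes_to_fo(data):
        # Append a single 1 bit after the file's bits; the infinite
        # run of 0s that follows is implicit, so the result is
        # finitely odd (its last explicit bit is a 1).
        bits = "".join(format(b, "08b") for b in data)
        return bits + "1"

    def fo_to_bytes(bits):
        # Inverse: drop the last 1 bit and everything after it.
        bits = bits[:bits.rindex("1")]
        return bytes(int(bits[i:i + 8], 2)
                     for i in range(0, len(bits), 8))

    # An all-zero file maps to 00...001(000... forever), so it is
    # representable and encryptable like any other file.
    assert fo_to_bytes(bytes_to_fo(b"\x00\x00")) == b"\x00\x00"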
>
>Perhaps you get around that by subtracting one before encrypting, and
>then adding one afterwards again ;-)
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
Scott famous encryption website **now all allowed**
http://members.xoom.com/ecil/index.htm
Scott LATEST UPDATED source for scott*u.zip
http://radiusnet.net/crypto/ then look for
sub directory scott after pressing CRYPTO
Scott famous Compression Page
http://members.xoom.com/ecil/compress.htm
**NOTE EMAIL address is for SPAMERS***
I leave you with this final thought from President Bill Clinton:
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Randomness from key presses and other user interaction
Reply-To: [EMAIL PROTECTED]
Date: Sun, 5 Nov 2000 05:06:43 GMT
David Schwartz <[EMAIL PROTECTED]> wrote:
: Mack wrote:
:> There seems to be some argument as to whether timing
:> keystrokes is a good source of randomness.
:>
:> So lets start a thread on that.
:>
:> 1) Key stroke timing is generally quantized to about 20 ms
:> give or take.
: It's the give or take in the 20 ms that contains the entropy.
Well, *if* this is true, this is not "randomness from key presses and
other user interaction" - it's more a case of randomness from clock
signal drift.
--
__________ Lotus Artificial Life http://alife.co.uk/ [EMAIL PROTECTED]
|im |yler The Mandala Centre http://mandala.co.uk/ Surf against sewage.
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Hardware RNGs
Reply-To: [EMAIL PROTECTED]
Date: Sun, 5 Nov 2000 05:18:47 GMT
David Hopwood <[EMAIL PROTECTED]> wrote:
: -----BEGIN PGP SIGNED MESSAGE-----
: [EMAIL PROTECTED] wrote:
:> Paul Crowley wrote:
:> > Alan Rouse wrote:
:> > > Hashing does not increase entropy, whether one pass or multiple.
:> >
:> > No, of course not. However, at least it doesn't *reduce* entropy
[...]
:> Actually I will differ with you there, you seem to be making the common
:> mistake of assuming that SHA-(whatever) offers perfect entropy
:> consolidation [...]
:> we have no such proof, and my belief at least is to the
:> contrary. To test my theory of non-perfect entropy reduction would be
:> compute intensive or human intensive, but I believe that if you take a
:> start value, feed that into the hash function, feed the result into the
:> hash function, etc. I believe you will find a loop sooner than full
:> exhaustion (2^256 for SHA-256).
:> This reveals non-perfect entropy [...]
: Perfect entropy was not claimed.
AR:"Hashing does not increase entropy, whether one pass or multiple."
PC:"No, of course not. However, at least it doesn't *reduce* entropy."
In fact hashing 160 bits of entropy produces an output with a bit over 159
bits of entropy. Hashing *can* reduce entropy.
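The iterated-hash point quoted above is easy to see on a scaled-down
hash. A sketch in Python, iterating SHA-256 truncated to 16 bits as a
toy model of a random function on a small domain:

    import hashlib

    def h16(x):
        # SHA-256 truncated to 16 bits: a toy random function.
        d = hashlib.sha256(x.to_bytes(2, "big")).digest()
        return int.from_bytes(d[:2], "big")

    def rho(start=0):
        # Walk x -> h16(x) until a value repeats.
        seen, x, i = {}, start, 0
        while x not in seen:
            seen[x] = i
            x, i = h16(x), i + 1
        return seen[x], i - seen[x]   # tail length, cycle length

    tail, cycle = rho()
    # Typically a few hundred steps (~sqrt(2**16)), far short of the
    # 2**16 steps "full exhaustion" would take.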
--
__________ Lotus Artificial Life http://alife.co.uk/ [EMAIL PROTECTED]
|im |yler The Mandala Centre http://mandala.co.uk/ Surf against sewage.
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Give it up?
Date: 5 Nov 2000 05:26:59 GMT
[EMAIL PROTECTED] (Benjamin Goldberg) wrote in
<[EMAIL PROTECTED]>:
>SCOTT19U.ZIP_GUY wrote:
>> The funny thing is that it's easy to fix such schemes so that the
>> compressed output will always be the same size or smaller. The fix is
>> not rocket science.
>
>Anyone who claims that it's easy [or even that it's possible] to make
>compressed output always be the same size or smaller than the input,
>clearly does not understand the counting argument.
The above was referring to the current padding methods, such as
what I think the IEEE standard is. In it you always add padding.
When you do optimal end handling, as opposed to the common blessed
schemes, the output can be made to always be the same size or
smaller.
I am not saying that 8-bit granular files can be compressed to
files that are smaller than 8-bit granular files. However, files
that are constrained to some large multiple of bits - such as files
at 128-bit granularity - can be mapped to same-size or smaller
files when going to 8-bit granularity.
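A toy count in Python backing this up: files at 8-bit granularity of
length <= L always outnumber files at 128-bit granularity of length
<= L, so pairing the i-th member of each set (in shortest-first order)
gives a map that never makes a file bigger:

    def count_byte_files(max_bytes):
        # Files that are any whole number of bytes long.
        return sum(256 ** n for n in range(max_bytes + 1))

    def count_block_files(max_bytes):
        # Files constrained to whole 16-byte (128-bit) blocks.
        return sum(256 ** (16 * k) for k in range(max_bytes // 16 + 1))

    for L in (16, 32, 48):
        assert count_block_files(L) <= count_byte_files(L)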
>
>> Most can see many of the obvious header errors as being a mistake
>> but most fail to see beyond that.
>
>If one is compressing and encrypting in one program, one might choose to
>have headers, so the type of file can be identified, but not encrypt
>them. That is, if I made a program that compressed then encrypted, I
>might have the output file format always start with "B.G." [yay ego!]
>but the encryption would start with the character after that, so that
>although there's stuff in the file that's known, there isn't encrypted
>stuff in the file whose corresponding plaintext is known or partially
>known.
>
Obviously if one was to add a fully bijective compression-encryption
routine to a better PGP, one would have to do what you say. Another
problem is email. When I mail to friends that have my code, we
commonly encounter problems because the first few letters confuse
the email program. I found out that to get around such problems I
needed to zip the files up, which always increased the size, but then
email could handle them. Also you may want authentication and other
such stuff that could be outside of this layer.
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
Scott famous encryption website **now all allowed**
http://members.xoom.com/ecil/index.htm
Scott LATEST UPDATED source for scott*u.zip
http://radiusnet.net/crypto/ then look for
sub directory scott after pressing CRYPTO
Scott famous Compression Page
http://members.xoom.com/ecil/compress.htm
**NOTE EMAIL address is for SPAMERS***
I leave you with this final thought from President Bill Clinton:
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************