Cryptography-Digest Digest #464, Volume #10 Fri, 29 Oct 99 11:13:03 EDT
Contents:
Re: VXD Memory Allocator for Win9x ("Rick Braddam")
Re: Build your own one-on-one compressor (Mok-Kong Shen)
Re: Build your own one-on-one compressor (SCOTT19U.ZIP_GUY)
Re: Compression: A ? for David Scott (SCOTT19U.ZIP_GUY)
Re: the ACM full of Dolts? (SCOTT19U.ZIP_GUY)
Re: Disk wiping code or utility ("LBMyers")
Re: This compression argument must end now (SCOTT19U.ZIP_GUY)
Re: Unbiased One to One Compression (SCOTT19U.ZIP_GUY)
----------------------------------------------------------------------------
From: "Rick Braddam" <[EMAIL PROTECTED]>
Subject: Re: VXD Memory Allocator for Win9x
Date: Fri, 29 Oct 1999 04:45:30 -0500
Eric W Braeden <[EMAIL PROTECTED]> wrote in message
news:3818f657$0$[EMAIL PROTECTED]...
> You sure you want to write a VxD? Win95/98 will be
> around for quite a while yet, but WDM is the wave of
> the future. It will work on all future WinXX OSes.
>
> Eric
>
Yes, at least for now. The VxD model is easier for me to understand, and I already
have a skeleton running. The step from VxD to WDM
should be smaller than the jump straight to WDM.
I think that future WinXX OSes will be closer kin to NT than 9x. NT Workstation isn't
that much more expensive than the upgrade to
98 from 95. There may still be a lot of Win3.1 software running out there, and the
service packs for Win98 indicate that MS isn't
ready to orphan the 3.1 software yet. I think the time is coming, though.
Note to Paul Koning: I followed the link you suggested, but I didn't find anything
that would help much in determining the
"bookkeeping" strategy. I have a copy of one malloc (for *nix machines) and the source
for Win9X's malloc... neither seems to me
quite suitable. I looked at NT's allocation and tried to find a way close to it.
Rick
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: comp.compression
Subject: Re: Build your own one-on-one compressor
Date: Fri, 29 Oct 1999 14:02:18 +0200
Mok-Kong Shen wrote:
>
>
> If I understand correctly, your compression now operates not on
> groups of 8 bits but on groups of bytes. That certainly removes the
> remaining-bits problem at the last byte of the file when decompressing
> an arbitrary file, which is a factor giving rise to the original
> one-to-one problem. However: (1) the frequency counts for such groups
> of bytes take much more work and the Huffman tree is much larger;
> (2) due to the larger granularity the compression ratio is likely to
> be poor, I believe.
Addendum: I am afraid that you don't even solve the original one-to-one
problem at all but have simply shifted it from the bit level to the
byte level. For on decompressing an 'arbitrary' sequence of bytes,
there is a chance that at the end of the file there will be a number
of bytes which do not constitute a valid code symbol (analogous to the
original case where a few bits remain that don't constitute a valid
Huffman code).
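The leftover-symbol problem Shen describes can be seen with a toy prefix code (the code table here is invented for illustration, not taken from either poster's program): greedily decoding an arbitrary bit string can leave a tail that matches no complete codeword.

```python
# Toy illustration: decoding an arbitrary bit string against a prefix
# code can leave a trailing fragment that is not a complete codeword.
CODE = {"0": "a", "10": "b", "110": "c", "111": "d"}

def decode(bits):
    """Greedily decode a prefix code; return (symbols, undecodable tail)."""
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in CODE:
            out.append(CODE[buf])
            buf = ""
    return out, buf

symbols, leftover = decode("1101011")   # an arbitrary bit string
print(symbols, repr(leftover))          # the trailing "11" is no codeword
```

The same effect appears at the byte level if the code alphabet is groups of bytes: the file can end mid-symbol.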
M. K. Shen
==========================
http://home.t-online.de/home/mok-kong.shen
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Crossposted-To: comp.compression
Subject: Re: Build your own one-on-one compressor
Date: Fri, 29 Oct 1999 13:03:38 GMT
In article <[EMAIL PROTECTED]>, Mok-Kong Shen <[EMAIL PROTECTED]>
wrote:
>SCOTT19U.ZIP_GUY wrote:
>>
>
>> Tim, I did a quick look at it and yes, I feel it would work. I have been
>> thinking of a more complicated version, but yours is more straightforward.
>> I hope it is clear to users that as you search, the long strings are
>> replaced by the short one, and that if a short string is found it is
>> replaced by the long one.
>>
>> The weird way I was thinking of doing it was things like: replace "th"
>> with "q" if "th" is not followed by "u". That way, since q is usually
>> followed by u, it would not be replaced; and since th seldom has a u
>> after it, it would generally be replaced by a q.
>> So you can see mine would be messier.
>>
>> An alternate approach is to allow for more than 256 symbols. You can
>> expand from 8 bits to 9 bits with the leading bit a zero for common stuff
>> (note the 9-bit or even 16-bit form is a temporary file to allow for more
>> than 256 leaves at the start, maybe only 267 leaves in the Huffman tree).
>> "th" could be replaced by a new symbol at the compression stage, since I
>> can easily allow 512 symbols with a slight mod to the current compressor.
>> Now on decompression, if a "th" is not there as a special symbol, it would
>> be counted as a repeat of "th", where combinations of "th" and the special
>> new symbol would represent the repeat count. This is what I am playing
>> with lately.
>>
>> Keep up the good work
>
>If I understand correctly, your compression now operates not on
>groups of 8 bits but on groups of bytes. That certainly removes the
>remaining-bits problem at the last byte of the file when decompressing
>an arbitrary file, which is a factor giving rise to the original
>one-to-one problem. However: (1) the frequency counts for such groups
>of bytes take much more work and the Huffman tree is much larger;
>(2) due to the larger granularity the compression ratio is likely to
>be poor, I believe.
>
>M. K. Shen
You lost me, Mok. I was using groups of 8 bits to mean bytes. Some
people don't use the common definition of byte to mean 8 bits. There
was no change. To clarify:
"8 bits == byte"
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
http://members.xoom.com/ecil/index.htm
NOTE EMAIL address is for SPAMERS
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Compression: A ? for David Scott
Date: Fri, 29 Oct 1999 13:09:39 GMT
In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Rebus777)
wrote:
>Rebus777 Wrote:
>>>Since the object of this compression is to aid security in encryption,
>>>would it not be a plus to add a simple random stream xor to hide
>>>these patterns? This might allow you to improve the compression
>>>algo, since any crc info would be disguised. :)
>>
>
>David Scott replied:
>> Look, it is not for everything. But it does work where one could have
>>used adaptive Huffman compression, and it does not add info. In a few
>>months I will have others (I hope). But if your data is such that it has
>>lots of repetition after the compression pass, then when you reverse
>>the file and give it a second pass it should get smaller. The second
>>pass makes it harder to break where one tries to examine a few blocks;
>>it forces the attacker to do at least one full decompression pass before
>>any partial data becomes visible.
>>David A. Scott
>>
>
>Wouldn't it be much simpler and more efficient to just use a very good
>compression without headers and simply hide its output with a random xor or
>chaining, as (Tom, [EMAIL PROTECTED])
>suggests?
If you're saying use the best compressor possible and use an OTP,
which is the only provably good way to use random data: yes, this
would work. However, if one is going to use an OTP in the first place,
there really is no need for any other type of encryption. The problem
with an OTP is getting the random data. If you have an excellent
source and a secure way of getting this random data to you and your
buddy who is decrypting, then you don't need to be concerned with
99% of the stuff on encryption that everyone is talking about, since
the OTP, which relies on random data, is provably secure.
For that matter, just XOR your data with the random data in the
first place and kiss off compression altogether.
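The one-time pad Scott refers to is exactly this XOR; a minimal sketch (the hard part, as he notes, is that the pad must come from a truly random source, be as long as the message, and never be reused - `secrets.token_bytes` here is only a stand-in for such a source):

```python
import secrets

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    """XOR the message with an equal-length, never-reused random pad."""
    assert len(pad) >= len(plaintext), "pad must cover the whole message"
    return bytes(p ^ k for p, k in zip(plaintext, pad))

# Decryption is the same XOR, since (m ^ k) ^ k == m.
msg = b"attack at dawn"
pad = secrets.token_bytes(len(msg))   # stand-in for a real random source
ct = otp_encrypt(msg, pad)
assert otp_encrypt(ct, pad) == msg
```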
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
http://members.xoom.com/ecil/index.htm
NOTE EMAIL address is for SPAMERS
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: the ACM full of Dolts?
Date: Fri, 29 Oct 1999 13:32:35 GMT
In article <7vb6mk$3ri$[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (David Wagner) wrote:
>In article <7v6ubj$jj2$[EMAIL PROTECTED]>,
>SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]> wrote:
>> The first cut of the ACM paper, in HTML, is at my
>> webpage, for their Crossroads online magazine.
>>
>> http://members.xoom.com/ecil/dspaper.htm
>
>I think there might be a terminology problem here.
>
>You say you want
> [...] a mapping from one form to another in such a way that any thing in
> one set maps uniquely to another item in the other set. Also every item
> in that second set maps back to every item in the first set.
>This is known, not as a one-to-one function, but as a bijective function.
Then you can use the term bijective, but shit is still shit whether
you call it crap or dung, so big deal.
>
>A function f is one-to-one (or injective) if x!=y implies f(x)!=f(y).
>In other words, this is your first criterion.
>
>A function f is onto (or surjective) if every element in the range set has
>an inverse. This is your second criterion.
>
>A function that is both one-to-one and onto (i.e., both injective and
>surjective) is called bijective. This is the notion you want.
>
>
>Finally, I should mention that it seems what we'd _really_ like from a
>compression function f is that its output should be (approximately) uniformly
>distributed when its input is generated according to the distribution we
>expect our plaintext to have.
Yes, this is the classical definition of compression. And if one only
wants to compress, this is a good feature. But if you read any of the
threads at all on compression for encryption, you should have the smarts
to see that what I call "one to one", Tim calls "one on one", and you
call "bijective" is an important aspect for compression that is used
just prior to encryption.
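Wagner's three definitions can be checked mechanically on small finite sets; the toy mappings below are invented for illustration (a bijective "compressor" on 2-bit values versus a lossy one):

```python
def is_injective(f, domain):
    """x != y implies f(x) != f(y): no two inputs share an output."""
    images = [f(x) for x in domain]
    return len(images) == len(set(images))

def is_surjective(f, domain, codomain):
    """Every element of the codomain is hit by some input."""
    return set(f(x) for x in domain) == set(codomain)

def is_bijective(f, domain, codomain):
    return is_injective(f, domain) and is_surjective(f, domain, codomain)

# A 2-bit "compressor" that permutes its inputs is bijective...
perm = {0: 2, 1: 0, 2: 3, 3: 1}
print(is_bijective(perm.get, [0, 1, 2, 3], [0, 1, 2, 3]))   # True

# ...while one that maps two inputs to the same output is not injective.
lossy = {0: 0, 1: 0, 2: 1, 3: 2}
print(is_injective(lossy.get, [0, 1, 2, 3]))                # False
```

For a compressor-plus-decompressor pair, bijectivity is what guarantees every file decompresses to something that compresses back to the same file.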
>
>In other words, view the original message M as a random variable.
>(It might have the distribution you'd expect for English sentences, or
>whatever you like.) We may also view the compressed text T = f(M) as a
>random variable in the obvious fashion. Then, it'd be nice if T was
>roughly uniformly distributed [*]. If we could say that T was roughly
>uniformly distributed, then it'd mean that ciphertext-only attacks on
>the encryption algorithm are essentially impossible (just by Shannon's
>information theory, since we can hope there won't be enough redundancy
>in T, the input to the cipher, to even approach the unicity distance).
If you read my discussion on why the standard adaptive Huffman is
bad, and how a ciphertext-only attack is feasible with it compared to
the "one to one" compression I changed it to, you would have a better
understanding of the problem. To take the high road on compression
and say one should use better compression is the same as saying
all other forms of encryption are shit except the OTP, which is provably
secure.
>
>In practice it seems extremely difficult to get very close to the
>uniform distribution for T. However, _heuristically speaking_, we can
>expect that the closer T is to uniform, the harder it will be to mount
>ciphertext-only attacks.
You failed to note that one should not use compression
that adds info for the attacker to use. Why do you totally gloss over this
important point? Or do you think that a compression routine that can
compress even a random file in such a way that only one possible
correct key can exist for the encrypted text to decompress is of
little worry? I really expected more from you. Mr BS has buddies at the
NSA and brags frequently on that fact here, so I guess he
can't be too honest about it. But I hoped you still had a little more
independence. My daughter made it through your school in only 3
years; I guess the dumb ones take longer. But then again she
has my genes and my modes of thinking, and it would be too much
to expect many other bright people at your university.
Why don't you or Mr BS comment on the compress-and-encrypt threads
by Tim Tyler? Obviously you and I hate each other, since you openly
declared scott19u dead by your slide attack and later sort of stated
that you could not decipher my compilable C code, and that was the
reason your Great Slide Attack failed against it.
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
http://members.xoom.com/ecil/index.htm
NOTE EMAIL address is for SPAMERS
------------------------------
From: "LBMyers" <[EMAIL PROTECTED]>
Subject: Re: Disk wiping code or utility
Date: Fri, 29 Oct 1999 08:49:02 -0400
There is also the situation where you want to replace your disk with a new
larger one. You may want to completely erase the old one before you get rid
of it.
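For a single file, a best-effort overwrite is simple to sketch; the helper below is hypothetical (not the utility being discussed), and wiping a whole drive before disposal needs a dedicated tool, since filesystems and hardware may keep copies elsewhere:

```python
import os
import secrets

def wipe_file(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents in place with random bytes, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())   # push the overwrite out to the disk
    os.remove(path)
```

Note this is only best-effort: journaling filesystems, wear leveling, and bad-sector remapping can all leave stale copies behind, which is one reason physical destruction of an old drive is often recommended.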
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: This compression argument must end now
Date: Fri, 29 Oct 1999 13:41:35 GMT
In article <7vbqth$p01$[EMAIL PROTECTED]>, "Tim Wood" <[EMAIL PROTECTED]>
wrote:
>-----BEGIN PGP SIGNED MESSAGE-----
>Hash: SHA1
>
>
>>The only real problem seems to be:
>>
>>: I thought the point of compression before encryption was to remove
>>: redundancy in the plaintext. This makes it less likely that the
>>: same plaintext will be encrypted twice.
>>
>>...the second sentence above. Compression /generally/ doesn't affect
>>the likelihood of the same *entire* plaintext being encrypted twice.
>
>Sorry I didn't make myself clear; I agree. It does however mean that
>the same sentence contained twice within the same plaintext should be
>encrypted once (as opposed to twice). Although there are other ways
>during encryption of limiting this effect (such as CFB mode), it is
>always better to reduce the potential attacks open to a cryptanalyst.
>There may be attacks which require lots of known/same ciphertext (if
>the same plaintext produces two different results in a cipher, it is
>possible that at some future date this would enable the key
>stream/schedule to be modeled - cryptology is in the business of
>predicting possible attacks). Compression thus seems a relatively
>inexpensive way to prevent _possible_ future attacks. It limits both
>the number of recurrences and the amount of ciphertext/plaintext (yes,
>it limits the plaintext too) which is encrypted. This is already
>important.
>
>>
>>[It's /possible/ to make compressors that do this. In fact a
>>compressor that fails to be one-on-one pretty much *must* follow the
>>policy of choosing between all possible compressed files randomly,
>>with equal probability, if it wants to completely avoid any security
>>issues. However, the general idea is not terribly useful.]
>
>I agree, the probability of a person sending exactly the same message
>twice with the same key is low (and should be avoided); applications
>that regularly do this should generate session keys. The difficulty
>arises when the compression adds data in a structured way to the
>plaintext; this can *vastly* increase the amount of known
>plaintext-ciphertext available.
I think I have shown how the standard adaptive Huffman compressor
can add data in a way that can be exploited. This is avoided in the
one-to-one compression methods.
>
>Another point that was raised was whether the attacker is likely to
>be able to tell when he has successfully broken a cipher. I do not
>believe that this is a good basis for the security of a cipher,
>although it is an interesting property. If an attacker has brute
>forced a cipher it is probably already too late (presumably you would
>not be transmitting random data); the security of a cipher should not
>depend on the attacker not knowing what format the plaintext is in -
>he could, for example, have the plaintext, thus he would know when he
>had the key.
I think we sort of agree here. The problem is that with non-1-1
compression the attacker may not even need to know the form of the
file to break the encryption. This is why non-1-1 compression should be
avoided.
>
>>
>>If a *part* of a plaintext is encrypted twice in otherwise different
>>messages, compression /may/ change this - or it may not.
>
>And "may, with a high enough probability" is often good enough to
>prevent exploitation. However, I was referring to the same plaintext
>twice in the same message.
I think my compression routine generally avoids two long segments
of text being compressed the same way, since the tree changes with
the processing of each character. Of course, if you keep repeating the
same text over and over, the tree will eventually approach a constant
value.
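The behaviour Scott describes - identical substrings encoding differently because the model updates after every symbol - can be seen with a toy adaptive order-0 model (a simplified stand-in for a real adaptive Huffman tree, not his actual routine):

```python
import math
from collections import Counter

def adaptive_code_lengths(text, alphabet_size=256):
    """Ideal code length in bits for each symbol under an adaptive
    order-0 model with Laplace smoothing. The model updates after
    every symbol, so identical substrings need not code identically."""
    counts = Counter()
    total = 0
    lengths = []
    for ch in text:
        p = (counts[ch] + 1) / (total + alphabet_size)
        lengths.append(-math.log2(p))
        counts[ch] += 1   # model update changes all future code lengths
        total += 1
    return lengths

lens = adaptive_code_lengths("abab")
# The second "ab" gets shorter ideal codes than the first, because
# the model has already seen those symbols.
assert lens[2] < lens[0] and lens[3] < lens[1]
```

As Scott notes, with enough repetition the counts dominate the smoothing and the per-symbol code lengths settle toward constant values.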
>
>A good cryptographic compression routine would:
>
>Reduce redundancy in the plaintext.
>
>Not add any regular structure/data to the plaintext.
> (This includes not having a bias towards any set of the possible
> compressed texts, or at least ones of equivalent compression.)
By "not add any structure" I would hope you mean one to one.
>
>Increase the entropy/information per bit of the message.
>
>Is this a good specification? (that *is* what is lacking here)
>
>later
>
>tim :)
>
>>--
>>__________
>> |im |yler The Mandala Centre http://www.mandala.co.uk/
>[EMAIL PROTECTED]
>>
>>Real programs don't eat cache.
>-----BEGIN PGP SIGNATURE-----
>Version: PGPfreeware 6.0.2i
>
>iQA/AwUBOBnDlQxsvHBiP5HDEQIcCgCfbGzxEL/fN1/GejGHBtl9RA2DkpkAnRH9
>a7eB/o6g3aqguRyuzo+15qJi
>=pnuT
>-----END PGP SIGNATURE-----
>
>
>
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
http://members.xoom.com/ecil/index.htm
NOTE EMAIL address is for SPAMERS
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Unbiased One to One Compression
Date: Fri, 29 Oct 1999 13:43:55 GMT
In article <[EMAIL PROTECTED]>, "Xav2" <[EMAIL PROTECTED]> wrote:
>"SCOTT19U.ZIP_GUY" writes:
>> In article <[EMAIL PROTECTED]>, "Trevor Jackson, III"
> <[EMAIL PROTECTED]> wrote:
>> >There is no file ending problem any more than there is a file sector padding
>> > problem. Simply pad the byte with random bits. If you are truly concerned,
>> > use a communication mechanism that tracks bits instead of bytes.
>>
>> I am perfectly happy with files that are made up of 8-bit bytes. However,
>> there is an ending problem in normal Huffman coding, since one would
>> not know if the random fill bits added are random padding or the next token.
>
>I'd guess one will just have to reserve a spot in the huffman tree as an
>EOF then.
No, this is not what you have to do. I have solved the ending problem.
There is no need for an EOF token in the tree. See
http://members.xoom.com/ecil/compress.htm
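The ambiguity being debated can be shown with a toy prefix code (the code table is invented for illustration): random pad bits at the end of the last byte can themselves parse as extra tokens, so a plain decoder cannot tell message from filler without an EOF symbol or an end-handling convention like the one Scott's page describes.

```python
# A made-up 3-symbol prefix code.
CODE = {"a": "0", "b": "10", "c": "11"}
DECODE = {v: k for k, v in CODE.items()}

def decode(bits):
    """Decode a prefix-coded bit string back into symbols."""
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in DECODE:
            out.append(DECODE[buf])
            buf = ""
    return out

msg_bits = "".join(CODE[s] for s in "abc")   # "01011" - only 5 bits
padded = msg_bits + "110"                     # filled out to a whole byte
# The pad bits "110" decode as phantom symbols "c" and "a":
print(decode(padded))
```

With random fill the decoder may emit extra symbols the sender never wrote, which is exactly why naive byte-padded Huffman files are not one-to-one.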
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
http://members.xoom.com/ecil/index.htm
NOTE EMAIL address is for SPAMERS
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************