Cryptography-Digest Digest #879, Volume #10 Mon, 10 Jan 00 18:13:01 EST
Contents:
Re: "1:1 adaptive huffman compression" doesn't work (Daniel Vogelheim)
Re: Square root attacks against DSA? (lcs Mixmaster Remailer)
Re: Questions about message digest functions (Tim Tyler)
Re: "1:1 adaptive huffman compression" doesn't work ("Gary")
Re: "1:1 adaptive huffman compression" doesn't work (SCOTT19U.ZIP_GUY)
newbie question ([EMAIL PROTECTED])
Re: "1:1 adaptive huffman compression" doesn't work (John Savard)
Re: AES & satellite example (John Savard)
Re: Intel 810 chipset Random Number Generator ([EMAIL PROTECTED])
Re: Anyone give any pointers? ([EMAIL PROTECTED])
Re: Questions about message digest functions ([EMAIL PROTECTED])
Re: Cryptography in Tom Clancy (Johnny Bravo)
Re: "1:1 adaptive huffman compression" doesn't work (SCOTT19U.ZIP_GUY)
----------------------------------------------------------------------------
From: Daniel Vogelheim <[EMAIL PROTECTED]>
Subject: Re: "1:1 adaptive huffman compression" doesn't work
Date: Mon, 10 Jan 2000 20:54:00 +0100
Hello Tim,
Hello group,
>: Toying with one of the provided implementations for a few minutes
>: supported this (http://members.xoom.com/ecil/ah2.zip, h2com.exe
>: version 19990825, h2unc.exe version 19990922). One can construct
>: multiple files that get "compressed" to the same output file. One such
>: file is included in this message below (uuencoded).
>
>If someone else doesn't beat me to it I'll look at this and report back.
Please do, that's why I posted.
>One idea behind such a scheme is essentially that if the EOF occurs while
>the decompressor is in the middle of a symbol, it *knows* that this can
>only happen if the decompressor chopped off a tail of zeros. This tail of
>zeros can be unambiguously reconstructed *provided* the file does not end
>with any all-zero Huffman symbols - and this case can be avoided fairly
>simply.
Aah, thanks, I get it. The scheme is to do zero-padding and prevent an
all-zero token (of length<=8). Then my claim was wrong and I apologize
for that; mea culpa. (I was probably confused by the web page, which
turns this into five rules, and talks about the all-zero-token and
chopping off ones, which I found more than strange.)
Anyhow, the files I downloaded from Scott's web page still don't
compress/decompress my example correctly, so I'd assume I found a bug
rather than a conceptual flaw.
Sincerely,
Daniel Vogelheim
------------------------------
Date: 10 Jan 2000 20:20:10 -0000
From: lcs Mixmaster Remailer <[EMAIL PROTECTED]>
Subject: Re: Square root attacks against DSA?
In message <[EMAIL PROTECTED]>, David Hopwood proposes an
algorithm for recovering the DSS private key from a small number of
signatures (without the public key) assuming that q is small enough that
square root attacks work:
> 6. Construct a table with entries (j, g^j mod p mod q), for
> j := 0..m-1. Sort this table by second component.
> (Alternatively, use conventional hashing on the second component
> to store the entries in a hash table.)
>
> 7. Compute g^-m mod p mod q and set gamma := beta_0.
>
> 8. For i := 0..m-1,
> 8.1. Check if gamma is the second component of some entry in the
> table.
> 8.2. If gamma = g^j then (im + j) is a candidate for log_g(y^u2_0).
> Let x' = (im + j) * u2_0^-1 mod q, and check whether this is
> the correct value for x by testing whether
> g^(x' * u2_i) = beta_i for each i > 0; if these checks pass,
> stop.
> 8.3. Set gamma := gamma * g^-m mod p mod q.
The problem is that "mod p mod q" does not define a group structure. In
fact if a = b mod p mod q and c = d mod p mod q then it is not necessarily
true that a*c = b*d mod p mod q. For example, let p=7 and q=3: then
1 = 4 mod 7 mod 3, but 2*1 = 2 while 2*4 = 8 = 1 mod 7 mod 3, so 2 <> 1. Hence there is
no group and the discrete log is not even well defined. The mathematical
structure required by the discrete log algorithms (including baby step
giant step) is absent.
The above algorithm assumes that g^(a+b) mod p mod q = (g^a)*(g^b)
mod p mod q, which is not true.
Actually the algorithm doesn't assume it is always true, but it seems to
assume that there is a good chance that it will be true. But with p >> q
as is normally the case, g must be on the order of p and so going from
g^k mod p mod q to g^(k+1) mod p mod q will involve significant modular
reductions in both the mod p and mod q step, destroying any multiplicative
structure in the result.
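To make this concrete, here is a minimal Python check (an illustration only;
p=7 and q=3 are the toy values from the example above, not realistic DSA
parameters):

# Illustration: reduction "mod p mod q" is not multiplicative, so there is
# no group structure for baby-step/giant-step to exploit.
p, q = 7, 3

def red(x):
    return (x % p) % q

# The counterexample above: 1 and 4 reduce to the same value...
assert red(1) == red(4) == 1
# ...but multiplying each by 2 gives different reductions.
print(red(2 * 1), red(2 * 4))   # prints "2 1" -- multiplicativity fails

# Scan for all failing triples (a, b, c) with red(a) == red(b)
# but red(c*a) != red(c*b).
failures = [(a, b, c) for a in range(p) for b in range(p) for c in range(p)
            if red(a) == red(b) and red(c * a) != red(c * b)]
print(len(failures), "failing triples out of", p ** 3)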
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Questions about message digest functions
Reply-To: [EMAIL PROTECTED]
Date: Mon, 10 Jan 2000 20:23:35 GMT
Shawn Willden <[EMAIL PROTECTED]> wrote:
: Tim Tyler wrote:
:> However, if the hash size is small - as it will sometimes be practically
:> constrained to be - a PRF is simply completely the wrong model for an
:> ideal hash, if you care at all about avoiding hash collisions.
: What is the correct model?
I wrote previously in this thread something like:
``Recall that I claim that each hash value should occur
with as near-equal frequency as possible given the
target distribution of message files to be hashed.
A classical pseudo-random function would produce something
more like a bell-shaped distribution.''
That seems to pretty much sum it up. Hashed values should occur
with as near equal frequency as is possible.
This statement is not at all true of the distribution of a pseudo-random
function, when the message size approaches the hash size.
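To illustrate, here is a small Python simulation (a sketch only; the 16-bit
message space and 8-bit hash are toy sizes chosen for the example). A random
function spreads the per-digest preimage counts in a roughly bell-shaped way,
while an exactly balanced hash gives every digest the same count:

# Sketch: per-digest preimage counts for a random function versus an exactly
# balanced ("flat") hash, on a toy message space.
import random

M = 2 ** 16   # number of messages in the toy message space
H = 2 ** 8    # number of possible hash values (toy 8-bit hash)

random.seed(1)
# Random function: each message gets an independent, uniform hash value.
random_counts = {}
for _ in range(M):
    h = random.randrange(H)
    random_counts[h] = random_counts.get(h, 0) + 1
# Balanced hash: every hash value gets exactly M // H preimages.
balanced_counts = {h: M // H for h in range(H)}

def spread(counts):
    values = [counts.get(h, 0) for h in range(H)]
    mean = sum(values) / H
    variance = sum((v - mean) ** 2 for v in values) / H
    return min(values), max(values), variance

print("random function (min, max, variance):", spread(random_counts))
print("balanced hash   (min, max, variance):", spread(balanced_counts))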
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ [EMAIL PROTECTED]
Smile if you're wearing sexy underwear.
------------------------------
From: "Gary" <[EMAIL PROTECTED]>
Subject: Re: "1:1 adaptive huffman compression" doesn't work
Date: Mon, 10 Jan 2000 20:53:10 -0000
I've always wondered if 100% 1-1 compression is impossible for all domains
and ranges.
Given that compressed data has two streams: 1) non-redundant data and 2)
encoded redundant data.
Take any random data that, when looked at as compressed data, has redundancy
in the non-redundant stream. Surely when this is decompressed and then
recompressed it will be more compressed than the original.
This implies that only by using redundancy as an indication of redundancy
can 1-1 compression be achieved.
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: "1:1 adaptive huffman compression" doesn't work
Date: Mon, 10 Jan 2000 22:36:26 GMT
In article <[EMAIL PROTECTED]>, Daniel Vogelheim
<[EMAIL PROTECTED]> wrote:
>Hello Tim,
>Hello group,
>
>>: Toying with one of the provided implementations for a few minutes
>>: supported this (http://members.xoom.com/ecil/ah2.zip, h2com.exe
>>: version 19990825, h2unc.exe version 19990922). One can construct
>>: multiple files that get "compressed" to the same output file. One such
>>: file is included in this message below (uuencoded).
>>
>>If someone else doesn't beat me to it I'll look at this and report back.
>
>Please do, that's why I posted.
>
>>One idea behind such a scheme is essentially that if the EOF occurs while
>>the decompressor is in the middle of a symbol, it *knows* that this can
>>only happen if the decompressor chopped off a tail of zeros. This tail of
>>zeros can be unambiguously reconstructed *provided* the file does not end
>>with any all-zero Huffman symbols - and this case can be avoided fairly
>>simply.
>
>Aah, thanks, I get it. The scheme is to do zero-padding and prevent an
>all-zero token (of length<=8). Then my claim was wrong and I apologize
>for that; mea culpa. (I was probably confused by the web page, which
>turns this into five rules, and talks about the all-zero-token and
>chopping off ones, which I found more than strange.)
>
>Anyhow, the files I downloaded from Scott's web page still don't
>compress/decompress my example correctly, so I'd assume I found a bug
>rather than a conceptual flaw.
>
Did you try version 20000110? They should handle this.
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm
Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm
Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm
**NOTE EMAIL address is for SPAMERS***
I leave you with this final thought from President Bill Clinton:
"The road to tyranny, we must never forget, begins with the destruction of the
truth."
------------------------------
From: [EMAIL PROTECTED]
Subject: newbie question
Date: Mon, 10 Jan 2000 21:37:08 GMT
I'm writing a shareware registration routine that simply takes the
output from a previously encrypted text value and compares it with a
newly encrypted text value to see if the key the user enters is valid.
I would encrypt their name and send the encrypted version of their name
to them for the program to compare and unlock features if the key was
found to be valid.
In my routine, I do a Blowfish ECB encryption and then run the result
of that through a Base64 routine to get a key that can be entered via
the keyboard. Is this an acceptable way to handle the problem? I read
in a previous post that I might want to base the key on a hash rather
than an encryption. Why would a hash be better than an encryption? Is
it because the output of a hash isn't expected to be de-hashed like
encrypted output? Thinking about it, I guess I really won't need to
ever decrypt the key so should I use a hash instead? Is a simple XOR
operation good enough?
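For reference, here is a rough sketch of the hash-based approach that was
suggested to me (the names, the vendor secret, and the choice of HMAC-SHA256
are just my assumptions for illustration, not anyone's actual scheme):

# Hypothetical sketch of a hash-based registration check.
# The vendor derives a key from the user's name plus a secret only the vendor
# knows; the program repeats the derivation and compares.
import base64, hashlib, hmac

VENDOR_SECRET = b"replace-with-your-own-secret"   # assumed to be hidden in the program

def make_key(user_name: str) -> str:
    digest = hmac.new(VENDOR_SECRET, user_name.encode("utf-8"), hashlib.sha256).digest()
    # Base32 so the key is short and easy to type at the keyboard.
    return base64.b32encode(digest[:10]).decode("ascii")

def key_is_valid(user_name: str, entered_key: str) -> bool:
    return hmac.compare_digest(make_key(user_name), entered_key.strip().upper())

# Usage: the vendor runs make_key("Alex Olshove") and mails the result;
# the program calls key_is_valid() on whatever the user types in.
print(make_key("Alex Olshove"))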
Thanks.
- Alex Olshove
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: "1:1 adaptive huffman compression" doesn't work
Date: Mon, 10 Jan 2000 14:48:34 GMT
Mok-Kong Shen <[EMAIL PROTECTED]> wrote, in part:
>What I don't yet understand is how one could use a 'probabilistic'
>argument here. One has to take care of ALL possible cases (that
>can occur) and not the most likely ones, nor even 'all but excepting
>one case'. Or have I missed something essential?
Yes, one has to take care of all cases.
It's just that we would like the output from compression to be random:
each bit in it should have a 50% probability of being a 0 or a 1, for
the range of typical plaintext inputs.
I can achieve that if I don't have to go to byte boundaries. I can
achieve that if I'm allowed to use random padding with a length
indicator. But trying to do it David Scott's way, that condition can
no longer be achieved (well, I can always XOR my last byte with a
checksum of the rest of the message to at least mask the bias...).
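To sketch what I mean by that last remark (the details here are chosen just
for illustration, not a worked-out proposal):

# Sketch: XOR the last byte with a one-byte checksum of the rest of the
# message. The receiver undoes it the same way, since the rest is unchanged.
def mask_last_byte(data: bytes) -> bytes:
    checksum = sum(data[:-1]) & 0xFF      # simple one-byte checksum of all but the last byte
    return data[:-1] + bytes([data[-1] ^ checksum])

unmask_last_byte = mask_last_byte         # the operation is its own inverse

msg = b"example compressed output\x00"
assert unmask_last_byte(mask_last_byte(msg)) == msg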
John Savard (jsavard<at>ecn<dot>ab<dot>ca)
http://www.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: AES & satellite example
Date: Mon, 10 Jan 2000 14:43:57 GMT
[EMAIL PROTECTED] (DJohn37050) wrote, in part:
>I am writing a paper for 3rd AES and remember someone discussing in sci.crypt
>that a reason for having multiple AES winners was situations where hardware
>was used but could not be updated, as in satellites. Does anyone
>remember who said that?
>I want to give proper attribution.
Well, there was a post in the thread "Re: 'Risks of Relying on
Cryptography'" where Terry Ritter noted that a satellite should have
multiple ciphers on board, so that if one was found to be vulnerable,
one of the others could be used immediately.
This was a reply to a post by Paul Crowley, where he suggested that in
the unlikely event of such a thing happening, the satellite's software
could be updated from the ground.
There was no specific reference to the AES process in that post,
however.
In part, he said:
>In the satellite example, adding fixes to the crypto may be possible,
>if there are a limited number of ground stations communicating with
>the satellite.
>But when a cipher is to be used by many, to change it is to introduce
>incompatibilities, delays, and vast costs. And during the change-over
>delay we have the poor situation of no acceptable cipher at all.
>To avoid this, any cipher system designed for mass use should be built
>so that alternate ciphers can be used. And to reassure ourselves that
>this feature remains available, it should be used regularly.
John Savard (jsavard<at>ecn<dot>ab<dot>ca)
http://www.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Intel 810 chipset Random Number Generator
Date: Mon, 10 Jan 2000 21:45:13 GMT
In article <[EMAIL PROTECTED]>,
"Trevor Jackson, III" <[EMAIL PROTECTED]> wrote:
> Because the example for a multi-threaded app works when applied at
> the OS level. To the OS the applications look like threads. The fact
> that some share address spaces (are threads within a single app)
> doesn't change the model for the OS. If the OS provides a semaphore
> for controlling access to the RNG any number of apps can safely share
> the resource.
Ok, fine, but there are a zillion other ways to generate random
numbers at the OS level. I am not planning on going into the
OS business, and I don't know anyone who is. Intel's literature
talks about making random numbers available to apps. Microsoft
may not be interested in implementing it. It would have been
nice if Intel had supplied something more convenient. As it is,
reading the microphone is more convenient.
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Anyone give any pointers?
Date: Mon, 10 Jan 2000 21:46:41 GMT
Well, it sounds like you "know" quite a bit about this encryption
format. Often I find that the assumptions I make when decrypting things
are not always 100% correct the first time...
However, to answer one of your questions, one of the main reasons to
only XOR 1 bit/byte is to reduce the amount of encrypted data that
is exposed to attack. Suppose you needed 65535 terms to find
the repeating sequence (16-bit PRNG). If you used only the 8 LSBs
of this PRNG to XOR, you would discover it in a 65535-byte file.
If you only use 1 bit/byte you could need a 524K file to find the
repeat...
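Spelling out that arithmetic (a sketch only; the 16-bit PRNG is the
hypothetical one above, not whatever generator the file format really uses):

# Worked numbers: how much keystream appears before the hypothetical 16-bit
# PRNG repeats, depending on how many of its bits reach each keystream byte.
prng_period_terms = 2 ** 16 - 1                     # 65535 outputs per full cycle
bytes_with_8_lsbs_per_byte = prng_period_terms      # one output consumed per byte
bytes_with_1_bit_per_byte = prng_period_terms * 8   # each output spread over 8 bytes
print(bytes_with_8_lsbs_per_byte)                   # 65535  (~64K of data shows the cycle)
print(bytes_with_1_bit_per_byte)                    # 524280 (~512K -- the "524K" figure above)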
Maybe these guys aren't as dumb as you think... ;-)
In any event, this is clearly a case of security through obscurity...
Reverse engineering the player is probably the best way to go. If
you had the original and the decrypted stream, you would just XOR
them and have a table of xor... Then you could take stuff like the
shift amount and the cycle length to reverse engineer the sequence.
However, the easiest thing to do is just hack the player and look
for the subroutine that does the decryption... If I'm not mistaken,
that's pretty much how DVDs got cracked...
But don't do anything illegal...
-slew
BTW, this is the way most encryption schemes get cracked. Secure
encryption has the algorithm advertised and the only unknown quantity
is the key. Then the problem reduces to the security of the key
(such as hiding it, or making it so long that cracking is computationally
infeasible). Hiding the algorithm itself helps too, but only if it's
a good algorithm to start with... (but usually people hide algorithms
because they aren't very good algorithms)...
In article <[EMAIL PROTECTED]>,
"Derek" <[EMAIL PROTECTED]> wrote:
> I'm trying to determine how a file is encrypted... I know the file is
> stored as 256-byte records using sequential access. I also know the file is
> encrypted using XORs on a per-byte basis, and each byte of the key has only
> one bit set. Some files are encrypted while others are not, so I know the
> underlying file format. I also know that the keys are the same at the same
> byte position for every encrypted file -- but so far, I can't determine that
> it repeats anywhere, so it sounds like some sort of a randomly generated
> key based on the record number or byte position. But I can't seem to make a
> connection in the pattern -- how they're coming up with these key bytes.
>
> The folks who engineered this file format are well-known for taking the
> quickest and easiest route to implementation -- but the fact that they only
> XOR one bit out of every byte confuses me. (It would seem easier to just
> XOR a random number 0-255). I'm a complete amateur at cryptography, but I
> was hoping this would ring a bell as something simple that's well-known,
> because if they treated this like everything else they do, they certainly
> wouldn't have spent much time coming up with something very strong.
>
> Thanks for any input you could provide.
>
> Derek
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Questions about message digest functions
Date: Mon, 10 Jan 2000 21:59:05 GMT
Tim Tyler wrote:
> [EMAIL PROTECTED] wrote:
> : Tim Tyler wrote:
> :> lordcow77 wrote:
>
> :> [...]
> :> a hash should approximate a pseudo-random function?
> :>
> :> I don't care *how* "simple" and "widely accepted"
> :> it is - it's dead wrong.
>
> : The wide-acceptors have a reason: we can't find
> : functions better than random.
>
> That's not entirely true. I could present one without
> much difficulty.
The one you suggest, quoted below, is a
spectacular failure.
> Recall that I claim that each hash value should occur
> with as near-equal frequency as possible given the
> target distribution of message files to be hashed.
>
> A classical pseudo-random function would produce something
> more like a bell-shaped distribution.
>
First, to have any meaning at all, you have to
say what random variable you think has a
bell-shaped distribution. Second, for the
meaningful statistics we could use, it's the
variance that indicates how nearly equally
frequent the digests are, not the shape.
[...]
> : The improvement against brute force is entirely trivial, and seems
> : to come at a terrible cost.
>
> Hang on. There *is* no "terrible cost".
Your suggestion below requires memory for 2^160
digests. There isn't that much memory yet made, or
even 2^80 for that matter.
> Personally I doubt that building a one-way hash function with the
> characteristics I believe are desirable would be /that/ difficult.
> All that would probably be required is some clear thinking on the
> matter.
So did you do any clear thinking about it? How
did it turn out?
> :> For /many/ applications hashes should make
> :> finding two messages which hash to the same
> :> value as difficult as possible. Alternatively
> :> (and perhaps more commonly) they should make
> :> finding a second message with the same
> :> hash as a given message as difficult as possible.
>
> : There's another property generally required of
> : hash functions: a hash should be "preimage
> : resistant", that is one-way.
>
> Firstly, not all hashes need to be one way.
The _only_ example you described where your hash
has a perceptible advantage is one where the
digest space is at least as large as the message
space, and you only need collision resistance.
That's a nonsense example because it doesn't call
for any hash at all - just use the message.
[...]
> : Even for modest length messages, just
> : a few bytes longer than the digest, random
> : functions will map preimages to digests so evenly
> : that the expected time for brute-force search will
> : differ by a tiny fraction of a percent,
> : compared to a hash that maps exactly the same
> : number of preimages to each digest. [...]
>
> That's probably fair enough. My objections are only really
> very significant when the size of the message is small.
I suggest you find out why hashes are used. Your
example is of no significance.
[...]
> When the messages to be hashed are much longer than the hash, the
> hashes will still follow a bell-shaped curve, rather than being a
> near-flat distribution.
You lost me. You say the distribution function is
bell-shaped for a random function, and flat for a
function in which each digest has the same number
of preimages? What random variable are you talking
about?
>
> : But here's the problem - how do you create this
> : pseudo-random permutation so that's it's as strong
> : as pseudo-random-function hashes such as SHA-1?
> : Can you describe a permutation on 160-bit vectors
> : (or even an almost-permutation) that is easy to
> : compute but takes anywhere near 2^160 steps to
> : invert?
>
> You could use one /very/ big table representing
> the random permutation. 2^160 is big, but not so
> big that it's totally unmanageable.
Look, I agree lordcow was overly rude to you in
his initial post here, but if you're going to
write stuff like that, what do you expect?
[...]
> [snip math]
>
> : The ratio of time to break by brute-force is about
> : 255/256.
>
> I find no fault with this math. For a message n bits bigger than the
> hash, the ratio will be roughly (2^n - 1) / 2^n.
>
> : There is no point to optimizing against brute force search.
>
> Huh? This doesn't seem to logically follow.
Then go build your 2^160 entry table and enjoy
your negligible advantage against exhaustive
search.
There's some interesting insight to be gained by
comparing how one might build a one-way
permutation versus a one-way function. But if you
have no handle on what can be done and what can't,
then all such insight is lost.
--Bryan
------------------------------
From: [EMAIL PROTECTED] (Johnny Bravo)
Subject: Re: Cryptography in Tom Clancy
Date: Mon, 10 Jan 2000 17:10:25 GMT
On Sat, 08 Jan 2000 14:33:07 -0700, Shawn Willden <[EMAIL PROTECTED]>
wrote:
>> It is possible, the first key tried could decrypt the message.
>
>Yes, with p = 1/(2^128). It would be a very poor plot that turned on such an
>extreme coincidence. Not Clancy's style.
I never said it made for good drama. Every prolific writer usually
has entire books they wish were never written, much less small
elements. :)
>> The groaner is that they even tried to brute force a 128 bit key. Not
>> unless the security of the nation or an entire city is at risk, then
>> they don't have much to lose on such a long shot.
>
>A loooooooooooonnnnnnnnnngggggggg shot.
Hey, if you were in a city that was about to be nuked in 20 minutes,
and decrypting a 128-bit message would save your life... :)
>> If he wrote that they routinely break 128 bit keys in three hours,
>> then that is another story. <g>
>
>That was clearly the idea. It wasn't a last-chance, desperate measure.
Not a good thing. But maybe Clancy hasn't studied encryption as
well as some of his other subjects.
Best Wishes,
Johnny Bravo
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: "1:1 adaptive huffman compression" doesn't work
Date: Mon, 10 Jan 2000 22:31:07 GMT
>Hello all,
>
>
>after following some recent discussion in this newsgroup, I looked at
>the "1:1 adaptive huffman compression" as described on page
>http://members.xoom.com/ecil/dspaper.htm. It describes 5 rules about
>how to cut off the last bits of the message in order to achieve
>byte-sized output. I think this idea is fundamentally flawed, as
>simply discarding information should make reconstruction impossible in
>general. This means the uncompressor will fail to reconstruct the
>original file in some of the cut-off cases. I.e., it just won't work!
>
>Toying with one of the provided implementations for a few minutes
>supported this (http://members.xoom.com/ecil/ah2.zip, h2com.exe
>version 19990825, h2unc.exe version 19990922). One can construct
>multiple files that get "compressed" to the same output file. One such
>file is included in this message below (uuencoded). Compressing this
>file and then uncompressing the result does not reconstruct the
>original file. The "uncompressed" file is missing the last byte (whose
>huffman symbol presumably got cut off). Said "uncompressed" file gets
>compressed to the same file as the original file.
>
>Comments?
>
>
>Sincerely,
>Daniel Vogelheim
>
Way to go. You found a bug. I have tested thousands of files and never had
a problem. It was a very clever test file. When the symbol "e" was entered at
the key position of the file, it left the tree in a state where one of the
symbols was "000". My code was supposed to create trees where the longest
path in the tree for the 256 symbols is all zeros and the shortest is all ones.
The neat thing about this example is that the tree is never touched again
after the symbol "e" is introduced, and the symbol that got coded as "000"
finally rotates to the correct position in the byte stream to cause trouble if
the file ends.
I have corrected this in the 20000110 version of h2com.exe and h2unc.exe;
you can FTP it from http://members.xoom.com/ecil/ah2.zip
If you find a problem with the next version, let me know. I have about 20 test
files and tested the new version against them, but my 486 is damn slow.
The correct fix is in the routine SCOTT, which is common to all the Huffman
file programs, so the error will affect most of the others. I will update the
ones I still have at my site. The patch is easy to spot: I use the variable
"dv" in a for loop.
AH2AF.ZIP is not affected since it uses a better method. It is easy to
optimally stop Huffman bit streams if you only care about bits. The way it is
done in AH2AF.ZIP is to look at what happens if you go down the tree and STOP
at an internal NODE. If you do this and use the rule "when I stop at a ZERO,
follow the ONEs path to a leaf and use it for the last symbol; when I stop at
a ONE, follow the ZEROs path to a leaf and use that for the last symbol", then
if you stop on an arbitrary series of ones and zeros that is not at a leaf,
you can define the last symbol from the partial symbol. The only question is
how you handle a last symbol that is all ONES or all ZEROES. The way I did it
was to say: if the bit stream ends at the end of a leaf for a mixed leaf (not
all ZEROES or ONES), you pretend the symbol with all ones is the real last
symbol and you code nothing for it. However, if the last symbol is the
all-ZEROS symbol, or a stream alternating between the ALL-ZEROES and ALL-ONES
symbols, then what you get is what you see. If an all-ones symbol is at the
end of the stream or the start of the file, you code nothing for it. You code
it only if it is not the last symbol, or if it is in a stream of the all-zero
symbol or the all-ones symbol. I knew this years ago, but my problem was that
I had no easy way to go from this to 8-bit byte files, so I evolved the first
approach. Then Matt Timmermans found a way, using the finitely odd file to
byte file mapping, that unlocked the key to doing it for blocks of any size.
Yet when he implemented the method in Huffman coding we get yet another
approach: his approach was that if the file ends in a symbol that has ones and
zeros, you write all zeros and treat it as a finitely odd file. If it ends in
a symbol of all zeros, or all zeros mixed with the symbol that has a leading 1
with the rest zeros (100..), you write a 100.. as the last symbol. These may
not sound the same, but all 3 methods work and are essentially the same.
Example:
a=0000
b=0001
c=001
d=010
e=011
f=1
My last method, where I truncate and add a 1 to mark EOF, gives for the last
symbol:
a=00001
b=0001   /* ends at 000; 1 added as EOF marker, trailing ones or zeros cut off */
c=001
d=011
e=01
f=1
Timmermans' method gives:
a=00001
b=0001
c=001    /* looks the same up to here */
d=01
e=011
f=1
/* only d and e reversed but still 1-1 */
Another example:
a=0000
b=0001
c=001
d=010
e=011
f=10
g=11
Mine leads to (when used for the last symbol):
a=00001
b=0001
c=001
d=011
e=01
f=11
g=1
Timmermans' leads to:
a=00001
b=0001
c=001
d=01
e=011
f=1
g=11   /* still all the same symbols */
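As a quick sanity check, here is a small Python sketch (illustration only; it
checks the ordinary prefix-code properties of the first example table above,
not the special last-symbol handling):

# Sketch: verify the example table is a complete prefix code -- no codeword is
# a prefix of another, and the Kraft sum is exactly 1 -- the basic properties
# the last-symbol tricks above build on.
from fractions import Fraction

code = {"a": "0000", "b": "0001", "c": "001", "d": "010", "e": "011", "f": "1"}

def is_prefix_free(codewords):
    words = sorted(codewords)
    return all(not words[i + 1].startswith(words[i]) for i in range(len(words) - 1))

kraft_sum = sum(Fraction(1, 2 ** len(w)) for w in code.values())
print(is_prefix_free(code.values()))   # True: decodable symbol by symbol
print(kraft_sum)                       # 1: the code tree is full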
Anyway, thanks for finding the error.
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm
Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm
Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm
**NOTE EMAIL address is for SPAMERS***
I leave you with this final thought from President Bill Clinton:
"The road to tyranny, we must never forget, begins with the destruction of the
truth."
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************